In computer science, a compiler-compiler or compiler generator is a programming tool that creates a parser, interpreter, or compiler from some form of formal description of a programming language and machine.
The most common type of compiler-compiler is called a parser generator.[1] It handles only syntactic analysis.
A formal description of a language is usually a grammar used as an input to a parser generator. It often resembles Backus–Naur form (BNF) or extended Backus–Naur form (EBNF), or has its own syntax. Grammar files describe the syntax of the generated compiler's target programming language and the actions that should be taken against its specific constructs.
Source code for a parser of the programming language is returned as the parser generator's output. This source code can then be compiled into a parser, which may be either standalone or embedded. The compiled parser then accepts the source code of the target programming language as input and performs an action or outputs an abstract syntax tree (AST).
Parser generators do not handle the semantics of the AST, or the generation of machine code for the target machine.[2]
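As a concrete illustration, the following Python sketch shows the kind of recursive-descent parser a generator might emit for a toy grammar, and the AST it outputs; the grammar, names, and AST shape here are illustrative assumptions, not the output of any particular tool.

    # A sketch of the parser a generator might emit for the toy grammar
    #   expr : term (('+' | '-') term)* ;
    #   term : NUMBER ;
    import re

    def tokenize(text):
        return re.findall(r"\d+|[+\-]", text)

    def parse_term(tokens, pos):
        # term : NUMBER
        return ("num", int(tokens[pos])), pos + 1

    def parse_expr(tokens, pos=0):
        # expr : term (('+' | '-') term)*
        node, pos = parse_term(tokens, pos)
        while pos < len(tokens) and tokens[pos] in "+-":
            op = tokens[pos]
            right, pos = parse_term(tokens, pos + 1)
            node = (op, node, right)        # build an AST node per operator
        return node, pos

    ast, _ = parse_expr(tokenize("1+2-3"))
    print(ast)   # ('-', ('+', ('num', 1), ('num', 2)), ('num', 3))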
A metacompiler is a software development tool used mainly in the construction of compilers, translators, and interpreters for other programming languages.[3] The input to a metacompiler is a computer program written in a specialized programming metalanguage designed mainly for the purpose of constructing compilers.[3][4] The language of the compiler produced is called the object language. The minimal input producing a compiler is a metaprogram specifying the object language grammar and semantic transformations into an object program.[4][5]
A typical parser generator associates executable code with each of the rules of the grammar that should be executed when these rules are applied by the parser. These pieces of code are sometimes referred to as semantic action routines, since they define the semantics of the syntactic structure that is analyzed by the parser. Depending upon the type of parser that should be generated, these routines may construct a parse tree (or abstract syntax tree), or generate executable code directly.
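For instance, a rule/action pairing might be modeled in Python as below; the rule strings and actions are illustrative assumptions, not those of any particular generator.

    # Rule/action pairings as a parser generator might arrange them.
    tree_actions = {
        "expr : expr '+' term": lambda lhs, op, rhs: ("add", lhs, rhs),
        "term : NUMBER":        lambda tok: ("num", int(tok)),
    }
    eval_actions = {
        "expr : expr '+' term": lambda lhs, op, rhs: lhs + rhs,
        "term : NUMBER":        lambda tok: int(tok),
    }

    # The parser invokes the action for a rule at the moment it applies it:
    print(tree_actions["term : NUMBER"]("7"))               # ('num', 7)
    print(eval_actions["expr : expr '+' term"](1, "+", 2))  # 3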
One of the earliest (1964), and surprisingly powerful, compiler-compilers is META II, which accepted an analytical grammar with output facilities that produce stack machine code, and was able to compile its own source code and other languages.
Among the earliest programs of the original Unix versions being built at Bell Labs was the two-part lex and yacc system, which was normally used to output C programming language code, but had a flexible output system that could be used for everything from programming languages to text file conversion. Their modern GNU versions are flex and bison.
Some experimental compiler-compilers take as input a formal description of programming language semantics, typically using denotational semantics. This approach is often called 'semantics-based compiling', and was pioneered by Peter Mosses' Semantic Implementation System (SIS) in 1978.[6] However, both the generated compiler and the code it produced were inefficient in time and space. No production compilers are currently built in this way, but research continues.
The Production Quality Compiler-Compiler (PQCC) project at Carnegie Mellon University does not formalize semantics, but does have a semi-formal framework for machine description.
Compiler-compilers exist in many flavors, including bottom-up rewrite machine generators (see JBurg) used to tile syntax trees according to a rewrite grammar for code generation, and attribute grammar parser generators (e.g. ANTLR can be used for simultaneous type checking, constant propagation, and more during the parsing stage).
Metacompilers reduce the task of writing compilers by automating the aspects that are the same regardless of the object language. This makes possible the design of domain-specific languages which are appropriate to the specification of a particular problem. A metacompiler reduces the cost of producing translators for such domain-specific object languages to the point where it becomes economically feasible to include the design of a domain-specific language in the solution of a problem.[4]
Because a metacompiler's metalanguage will usually be a powerful string- and symbol-processing language, metacompilers often have strong general-purpose applications, including generating a wide range of other software engineering and analysis tools.[4][7]
Besides being useful for domain-specific language development, a metacompiler is itself a prime example of a domain-specific language, designed for the domain of compiler writing.
A metacompiler is a metaprogram usually written in its own metalanguage or in an existing computer programming language. The process of a metacompiler, written in its own metalanguage, compiling itself is equivalent to a self-hosting compiler. Most compilers written today are self-hosting compilers. Self-hosting is a powerful feature of many metacompilers, allowing the easy extension of their own metaprogramming metalanguage. The feature that sets a metacompiler apart from other compiler-compilers is that it takes as input a specialized metaprogramming language that describes all aspects of the compiler's operation. A metaprogram produced by a metacompiler is as complete a program as a program written in C++, BASIC, or any other general programming language. The metaprogramming metalanguage is a powerful attribute allowing easier development of computer programming languages and other computer tools. Command-line processors and text-string transformation and analysis tools are easily coded using the metaprogramming metalanguages of metacompilers.
A full-featured development package includes a linker and a run-time support library. Usually, a machine-oriented system programming language, such as C or C++, is needed to write the support library. A library consisting of support functions needed for the compiling process usually completes the full metacompiler package.
In computer science, the prefix meta is commonly used to mean about (its own category). For example, metadata are data that describe other data. A language that is used to describe other languages is a metalanguage. Meta may also mean on a higher level of abstraction: a metalanguage operates on a higher level of abstraction in order to describe properties of a language. Backus–Naur form (BNF) is a formal metalanguage originally used to define ALGOL 60. BNF is a weak metalanguage, for it describes only the syntax and says nothing about the semantics or meaning. Metaprogramming is the writing of computer programs with the ability to treat programs as their data. A metacompiler takes as input a metaprogram written in a specialized metalanguage (a higher-level abstraction) specifically designed for the purpose of metaprogramming.[4][5] The output is an executable object program.
An analogy can be drawn: just as a C++ compiler takes as input a C++ programming language program, a metacompiler takes as input a metaprogramming metalanguage program.
Many advocates of the language Forth call the process of creating a new implementation of Forth meta-compilation, and hold that it constitutes a metacompiler.
This Forth use of the term metacompiler is disputed in mainstream computer science; see Forth (programming language) and History of compiler construction. The actual Forth process of compiling itself combines Forth being a self-hosting, extensible programming language with, sometimes, cross compilation, both long-established terminology in computer science. Metacompilers, by contrast, are a general compiler-writing system. The Forth metacompiler concept is indistinguishable from a self-hosting, extensible language: the process works at a lower level, defining a minimum subset of Forth words that can be used to define additional Forth words, after which a full Forth implementation can be defined from the base set. This is essentially a bootstrap process. The problem is that almost every general-purpose language compiler also fits the Forth description of a metacompiler.
Replace X with any common language, such as C, C++, Java, Pascal, COBOL, Fortran, Ada or Modula-2, and X would be a metacompiler according to the Forth usage. A metacompiler, however, operates at an abstraction level above the compiler it compiles; it operates at the same (self-hosting) level only when compiling itself. The problem with this definition of metacompiler is that it can be applied to almost any language.
However, programming in Forth means adding new words to the dictionary, and extending the language in this way is metaprogramming. It is this metaprogramming in Forth that makes it a metacompiler.
Programming in Forth is adding new words to the language; changing the language in this way is metaprogramming. Forth is a metacompiler because it is a language specifically designed for metaprogramming: extending Forth by adding words to the Forth vocabulary creates a new Forth dialect, making Forth a specialized metacompiler for Forth-language dialects.
Design of the original compiler-compiler was started by Tony Brooker and Derrick Morris in 1959, with initial testing beginning in March 1962.[8] The Brooker Morris Compiler Compiler (BMCC) was used to create compilers for the new Atlas computer at the University of Manchester, for several languages: Mercury Autocode, Extended Mercury Autocode, Atlas Autocode, ALGOL 60 and ASA Fortran. At roughly the same time, related work was being done by E. T. (Ned) Irons at Princeton, and by Alick Glennie at the Atomic Weapons Research Establishment at Aldermaston, whose "Syntax Machine" paper (declassified in 1977) inspired the META series of translator writing systems mentioned below.
The early history of metacompilers is closely tied with the history of SIG/PLAN Working Group 1 on Syntax Driven Compilers. The group was started primarily through the effort of Howard Metcalfe in the Los Angeles area.[9] In the fall of 1962, Howard Metcalfe designed two compiler-writing interpreters. One used a bottom-to-top analysis technique based on a method described by Ledley and Wilson.[10] The other used a top-to-bottom approach based on work by Glennie to generate random English sentences from a context-free grammar.[11]
At the same time, Val Schorre described two "meta machines", one generative and one analytic. The generative machine was implemented and produced random algebraic expressions. Meta I, the first metacompiler, was implemented by Schorre on an IBM 1401 at UCLA in January 1963. His original interpreters and metamachines were written directly in a pseudo-machine language. META II, however, was written in a higher-level metalanguage able to describe its own compilation into the pseudo-machine language.[12][13][14]
Lee Schmidt at Bolt, Beranek, and Newman wrote a metacompiler in March 1963 that utilized a CRT display on the time-sharing PDP-1.[15] This compiler produced actual machine code rather than interpretive code and was partially bootstrapped from Meta I.
Schorre bootstrapped Meta II from Meta I during the spring of 1963. The paper on the refined metacompiler system, presented at the 1964 Philadelphia ACM conference, is the first paper on a metacompiler available as a general reference. The syntax and implementation technique of Schorre's system laid the foundation for most of the systems that followed. The system was implemented on a small 1401, and was used to implement a small ALGOL-like language.
Many similar systems immediately followed.
Roger Rutman of AC Delco developed and implemented LOGIK, a language for logical design simulation, on the IBM 7090 in January 1964.[16] This compiler used an algorithm that produced efficient code for Boolean expressions.
Another paper in the 1964 ACM proceedings describes Meta III, developed by Schneider and Johnson at UCLA for the IBM 7090.[17] Meta III represents an attempt to produce efficient machine code for a large class of languages. Meta III was implemented completely in assembly language. Two compilers were written in Meta III: CODOL, a compiler-writing demonstration compiler, and PUREGOL, a dialect of ALGOL 60. (It was pure gall to call it ALGOL.)
Late in 1964, Lee Schmidt bootstrapped the metacompiler EQGEN from the PDP-1 to the Beckman 420. EQGEN was a logic equation generating language.
In 1964, System Development Corporation began a major effort in the development of metacompilers. This effort included the powerful metacompilers Book1 and Book2, written in Lisp, which had extensive tree-searching and backup ability. An outgrowth of one of the Q-32 systems at SDC is Meta 5.[18] The Meta 5 system incorporates backup of the input stream and enough other facilities to parse any context-sensitive language. This system was successfully released to a wide number of users and had many string-manipulation applications other than compiling. It has many elaborate push-down stacks, attribute setting and testing facilities, and output mechanisms. That Meta 5 successfully translated JOVIAL programs to PL/I programs demonstrates its power and flexibility.
Robert McClure at Texas Instruments invented a compiler-compiler called TMG (presented in 1965). TMG was used to create early compilers for programming languages like B, PL/I, and ALTRAN. Together with the metacompiler of Val Schorre, it was an early inspiration for the last chapter of Donald Knuth's The Art of Computer Programming.[19]
The LOT system was developed during 1966 at Stanford Research Institute and was modeled very closely after Meta II.[20] It had new special-purpose constructs allowing it to generate a compiler which could, in turn, compile a subset of PL/I. This system had extensive statistic-gathering facilities and was used to study the characteristics of top-down analysis.
SIMPLE is a specialized translator system designed to aid the writing of pre-processors for PL/I. Written in PL/I, SIMPLE is composed of three components: an executive, a syntax analyzer, and a semantic constructor.[21]
The TREE-META compiler was developed at Stanford Research Institute in Menlo Park, California, in April 1968. The early metacompiler history is well documented in the TREE-META manual. TREE-META paralleled some of the SDC developments. Unlike earlier metacompilers, it separated the semantics processing from the syntax processing. The syntax rules contained tree-building operations that combined recognized language elements with tree nodes. The tree-structure representation of the input was then processed by a simple form of unparse rules. The unparse rules used node recognition and attribute testing that, when matched, resulted in the associated action being performed. In addition, like tree elements could be tested in an unparse rule. Unparse rules were also a recursive language, able to call unparse rules, passing elements of the tree, before the action of the unparse rule was performed.
The concept of the metamachine originally put forth by Glennie is so simple that three hardware versions have been designed and one actually implemented, the latter at Washington University in St. Louis. This machine was built from macro-modular components and has as its instructions the codes described by Schorre.
CWIC (Compiler for Writing and Implementing Compilers) is the last known Schorre metacompiler. It was developed at System Development Corporation by Erwin Book, Dewey Val Schorre, and Steven J. Sherman. With the full power of LISP 2, a list-processing language, optimizing algorithms could operate on syntax-generated lists and trees before code generation. CWIC also had a symbol table built into the language.
With the resurgence of domain-specific languages and the need for parser generators which are easy to use, easy to understand, and easy to maintain, metacompilers are becoming a valuable tool for advanced software engineering projects.
Other examples of parser generators in the yacc vein are ANTLR, Coco/R,[22] CUP, GNU Bison, Eli,[23] FSL, SableCC, SID (Syntax Improving Device),[24] and JavaCC. While useful, pure parser generators only address the parsing part of the problem of building a compiler. Tools with broader scope, such as PQCC, Coco/R, and the DMS Software Reengineering Toolkit, provide considerable support for more difficult post-parsing activities such as semantic analysis, code optimization and generation.
The earliest Schorre metacompilers, META I and META II, were developed by D. Val Schorre at UCLA. Other Schorre-based metacompilers followed, each adding improvements to language analysis and/or code generation.
In programming it is common to use the name of a programming language to refer to both its compiler and the language itself, the context distinguishing the meaning; a C++ program is compiled using a C++ compiler. That convention also applies in the following: META II, for example, is both the compiler and the language.
The metalanguages in the Schorre line of metacompilers are functional programming languages that use top-down, grammar-analyzing syntax equations with embedded output-transformation constructs.
A syntax equation

    <name> = <body>;

is a compiled test function returning success or failure. <name> is the function name. <body> is a form of logical expression consisting of tests, which may be grouped, have alternates, and include output productions. A test is like a bool in other languages, success being true and failure being false.
Defining a programming language analytically top down is natural. For example, a program could be defined as:

    program = $declaration;

defining a program as a sequence of zero or more declarations.
In the Schorre META X languages there is a driving rule, and the program rule above is an example. The program rule is a test function that calls declaration, a test rule that returns success or failure. The $ loop operator repeatedly calls declaration until failure is returned. The $ operator is always successful, even when there are zero declarations, so the program rule above always returns success. (In CWIC, a long fail can bypass declaration; a long-fail is part of the backtracking system of CWIC.)
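A rough Python model of these test functions and the $ loop, using a toy word-based input; all names here are illustrative assumptions.

    # Each rule is a function returning success (True) or failure (False);
    # on success, the input position advances over what was matched.
    class Input:
        def __init__(self, words):
            self.words, self.pos = words, 0

    def declaration(inp):
        # A test: match one "decl" word, advancing on success.
        if inp.pos < len(inp.words) and inp.words[inp.pos] == "decl":
            inp.pos += 1
            return True
        return False

    def program(inp):
        # program = $declaration;  -- repeat until failure; the $ loop
        # itself always succeeds (zero or more matches).
        while declaration(inp):
            pass
        return True

    inp = Input(["decl", "decl", "other"])
    print(program(inp), inp.pos)   # True 2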
The character sets of these early compilers were limited. The character / was used for the alternant (or) operator; "A or B" is written as A / B. Parentheses ( ) are used for grouping. For example,

    A (B / C)

describes a construct of A followed by B or C. As a boolean expression it would be A and (B or C). A sequence X Y has an implied X and Y meaning; ( ) are grouping and / is the or operator. The order of evaluation is always left to right, as an input character sequence is being specified by the ordering of the tests.
Special operator words whose first character is a "." are used for clarity. .EMPTY is used as the last alternate when no previous alternant need be present. For example,

    X (A / B / .EMPTY)

indicates that X is optionally followed by A or B. This is a specific characteristic of these metalanguages being programming languages: backtracking is avoided by the above. Other compiler-constructor systems may have declared the three possible sequences and left it up to the parser to figure it out.
The characteristics of the metaprogramming metalanguages above are common to all Schorre metacompilers and those derived from them.
META I was a hand compiled metacompiler used to compile META II. Little else is known of META I except that the initial compilation of META II produced nearly identical code to that of the hand coded META I compiler.
Each rule consists optionally of tests, operators, and output productions. A rule attempts to match some part of the input program source character stream returning success or failure. On success the input is advanced over matched characters. On failure the input is not advanced.
Output productions produced a form of assembly code directly from a syntax rule.
TREE-META introduced the tree-building operators :<node_name> and [<number>], moving the output-production transforms to unparse rules. The tree-building operators were used in the grammar rules, directly transforming the input into an abstract syntax tree. Unparse rules are also test functions that match tree patterns. Unparse rules are called from a grammar rule when an abstract syntax tree is to be transformed into output code. The building of an abstract syntax tree and unparse rules allowed local optimizations to be performed by analyzing the parse tree.
Moving the output productions to the unparse rules made a clear separation of grammar analysis and code production. This made the programming easier to read and understand.
In 1968–1970, Erwin Book, Dewey Val Schorre, and Steven J. Sherman developed CWIC (Compiler for Writing and Implementing Compilers) at System Development Corporation.[4]
CWIC is a compiler development system composed of three special-purpose, domain-specific languages, each intended to permit the description of certain aspects of translation in a straightforward manner. The syntax language is used to describe the recognition of source text and the construction from it of an intermediate tree structure. The generator language is used to describe the transformation of the tree into the appropriate object language.
The syntax language follows Dewey Val Schorre's previous line of metacompilers. It most resembles TREE-META, having tree-building operators in the syntax language. The unparse rules of TREE-META are extended to work with the object-based generator language, which is based on LISP 2.
CWIC includes three languages: the syntax language, the generator language, and MOL-360, a machine-oriented implementation language used for the supporting run-time code.
The generator language had semantics similar to Lisp. The parse tree was thought of as a recursive list. The general form of a generator language function is:

    function-name(first-unparse_rule) => first-production_code_generator
                 (second-unparse_rule) => second-production_code_generator
                 ...
                 (last-unparse_rule) => last-production_code_generator

The code to process a given tree included the features of a general-purpose programming language, plus a form: <stuff>, which would emit (stuff) onto the output file.
A generator call may be used in an unparse_rule. The generator is passed the element of the unparse_rule pattern in which it is placed, and its return values are listed in (). For example, a generator for expression trees might look, in outline, like this:
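    expr_gen(ADD[expr_gen(x),expr_gen(y)]) =>
        <AR + (x*16)+y;>
        releasereg(y);
        return x;
    ...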
That is, if the parse tree looks like (ADD[<something1>,<something2>]), expr_gen(x) would be called with <something1> and return x. A variable in the unparse rule is a local variable that can be used in the production_code_generator. expr_gen(y) is called with <something2> and returns y. A generator call in an unparse rule is passed the element in the position it occupies; here, x and y should be registers on return. The last transform is intended to load an atomic into a register and return the register. The first production would be used to generate the 360 "AR" (Add Register) instruction with the appropriate values in general registers. The above example is only a part of a generator. Every generator expression evaluates to a value that can then be further processed. The last transform could just as well have been written as:
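    (x) => return load(getreg(), x);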
In this case, load returns its first parameter, the register returned by getreg(). The functions load and getreg are other CWIC generators.
From the authors of CWIC:
"A metacompiler assists the task of compiler-building by automating its non creative aspects, those aspects that are the same regardless of the language which the produced compiler is to translate. This makes possible the design of languages which are appropriate to the specification of a particular problem. It reduces the cost of producing processors for such languages to a point where it becomes economically feasible to begin the solution of a problem with language design."[4]
https://en.wikipedia.org/wiki/Parser_generator
Parsing, syntax analysis, or syntactic analysis is the process of analyzing a string of symbols, either in natural language, computer languages or data structures, conforming to the rules of a formal grammar, by breaking it into parts. The term parsing comes from Latin pars (orationis), meaning part (of speech).[1]
The term has slightly different meanings in different branches of linguistics and computer science. Traditional sentence parsing is often performed as a method of understanding the exact meaning of a sentence or word, sometimes with the aid of devices such as sentence diagrams. It usually emphasizes the importance of grammatical divisions such as subject and predicate.
Within computational linguistics the term is used to refer to the formal analysis by a computer of a sentence or other string of words into its constituents, resulting in a parse tree showing their syntactic relation to each other, which may also contain semantic information. Some parsing algorithms generate a parse forest or list of parse trees from a string that is syntactically ambiguous.[2]
The term is also used in psycholinguistics when describing language comprehension. In this context, parsing refers to the way that human beings analyze a sentence or phrase (in spoken language or text) "in terms of grammatical constituents, identifying the parts of speech, syntactic relations, etc."[1] This term is especially common when discussing which linguistic cues help speakers interpret garden-path sentences.
Within computer science, the term is used in the analysis of computer languages, referring to the syntactic analysis of the input code into its component parts in order to facilitate the writing of compilers and interpreters. The term may also be used to describe a split or separation.
In data analysis, the term is often used to refer to the process of extracting desired information from data, e.g., creating a time series signal from an XML document.
The traditional grammatical exercise of parsing, sometimes known as clause analysis, involves breaking down a text into its component parts of speech with an explanation of the form, function, and syntactic relationship of each part.[3] This is determined in large part from study of the language's conjugations and declensions, which can be quite intricate for heavily inflected languages. To parse a phrase such as "man bites dog" involves noting that the singular noun "man" is the subject of the sentence, the verb "bites" is the third person singular of the present tense of the verb "to bite", and the singular noun "dog" is the object of the sentence. Techniques such as sentence diagrams are sometimes used to indicate the relations between elements in the sentence.
Parsing was formerly central to the teaching of grammar throughout the English-speaking world, and widely regarded as basic to the use and understanding of written language.
In some machine translation and natural language processing systems, written texts in human languages are parsed by computer programs.[4] Human sentences are not easily parsed by programs, as there is substantial ambiguity in the structure of human language, whose usage is to convey meaning (or semantics) among a potentially unlimited range of possibilities, of which only some are germane to the particular case.[5] So an utterance "Man bites dog" versus "Dog bites man" is definite on one detail, but in another language might appear as "Man dog bites", with a reliance on the larger context to distinguish between those two possibilities, if indeed that difference was of concern. It is difficult to prepare formal rules to describe informal behaviour, even though it is clear that some rules are being followed.
In order to parse natural language data, researchers must first agree on the grammar to be used. The choice of syntax is affected by both linguistic and computational concerns; for instance, some parsing systems use lexical functional grammar, but in general, parsing for grammars of this type is known to be NP-complete. Head-driven phrase structure grammar is another linguistic formalism which has been popular in the parsing community, but other research efforts have focused on less complex formalisms such as the one used in the Penn Treebank. Shallow parsing aims to find only the boundaries of major constituents such as noun phrases. Another popular strategy for avoiding linguistic controversy is dependency grammar parsing.
Most modern parsers are at least partly statistical; that is, they rely on a corpus of training data which has already been annotated (parsed by hand). This approach allows the system to gather information about the frequency with which various constructions occur in specific contexts (see machine learning). Approaches which have been used include straightforward PCFGs (probabilistic context-free grammars),[6] maximum entropy,[7] and neural nets.[8] Most of the more successful systems use lexical statistics (that is, they consider the identities of the words involved, as well as their part of speech). However, such systems are vulnerable to overfitting and require some kind of smoothing to be effective.
Parsing algorithms for natural language cannot rely on the grammar having 'nice' properties as with manually designed grammars for programming languages. As mentioned earlier, some grammar formalisms are very difficult to parse computationally; in general, even if the desired structure is not context-free, some kind of context-free approximation to the grammar is used to perform a first pass. Algorithms which use context-free grammars often rely on some variant of the CYK algorithm, usually with some heuristic to prune away unlikely analyses to save time (see chart parsing). However, some systems trade speed for accuracy using, e.g., linear-time versions of the shift-reduce algorithm. A somewhat recent development has been parse reranking, in which the parser proposes some large number of analyses and a more complex system selects the best option. In natural language understanding applications, semantic parsers convert the text into a representation of its meaning.[9]
In psycholinguistics, parsing involves not just the assignment of words to categories (formation of ontological insights), but the evaluation of the meaning of a sentence according to the rules of syntax, drawn by inferences made from each word in the sentence (known as connotation). This normally occurs as words are being heard or read.
Neurolinguistics generally understands parsing to be a function of working memory, meaning that parsing is used to keep several parts of one sentence at play in the mind at one time, all readily accessible to be analyzed as needed. Because the human working memory has limitations, so does the function of sentence parsing.[10] This is evidenced by several different types of syntactically complex sentences that pose potential issues for mental parsing of sentences.
The first, and perhaps most well-known, type of sentence that challenges parsing ability is the garden-path sentence. These sentences are designed so that the most common interpretation of the sentence appears grammatically faulty, but upon further inspection, these sentences are grammatically sound. Garden-path sentences are difficult to parse because they contain a phrase or a word with more than one meaning, often their most typical meaning being a different part of speech.[11] For example, in the sentence "the horse raced past the barn fell", raced is initially interpreted as a past tense verb, but in this sentence, it functions as part of an adjective phrase.[12] Since parsing is used to identify parts of speech, these sentences challenge the parsing ability of the reader.
Another type of sentence that is difficult to parse is an attachment ambiguity, which includes a phrase that could potentially modify different parts of a sentence, and therefore presents a challenge in identifying syntactic relationship (e.g., "The boy saw the lady with the telescope", in which the ambiguous phrase with the telescope could modify the boy saw or the lady).[11]
A third type of sentence that challenges parsing ability is center embedding, in which phrases are placed in the center of other similarly formed phrases (e.g., "The rat the cat the man hit chased ran into the trap"). Sentences with two or, in the most extreme cases, three center embeddings are challenging for mental parsing, again because of ambiguity of syntactic relationship.[13]
Within neurolinguistics there are multiple theories that aim to describe how parsing takes place in the brain. One such model is a more traditional generative model of sentence processing, which theorizes that within the brain there is a distinct module designed for sentence parsing, which is preceded by access to lexical recognition and retrieval, and then followed by syntactic processing that considers a single syntactic result of the parsing, only returning to revise that syntactic interpretation if a potential problem is detected.[14] The opposing, more contemporary model theorizes that within the mind, the processing of a sentence is not modular, or happening in strict sequence. Rather, it poses that several different syntactic possibilities can be considered at the same time, because lexical access, syntactic processing, and determination of meaning occur in parallel in the brain. In this way these processes are integrated.[15]
Although there is still much to learn about the neurology of parsing, studies have shown evidence that several areas of the brain might play a role in parsing. These include the left anterior temporal pole, the left inferior frontal gyrus, the left superior temporal gyrus, the left superior frontal gyrus, the right posterior cingulate cortex, and the left angular gyrus. Although it has not been absolutely proven, it has been suggested that these different structures might favor either phrase-structure parsing or dependency-structure parsing, meaning different types of parsing could be processed in different ways which have yet to be understood.[16]
Discourse analysis examines ways to analyze language use and semiotic events. Persuasive language may be called rhetoric.
A parser is a software component that takes input data (typically text) and builds a data structure – often some kind of parse tree, abstract syntax tree or other hierarchical structure – giving a structural representation of the input while checking for correct syntax. The parsing may be preceded or followed by other steps, or these may be combined into a single step. The parser is often preceded by a separate lexical analyser, which creates tokens from the sequence of input characters; alternatively, these can be combined in scannerless parsing. Parsers may be programmed by hand or may be automatically or semi-automatically generated by a parser generator. Parsing is complementary to templating, which produces formatted output. These may be applied to different domains, but often appear together, such as the scanf/printf pair, or the input (front end parsing) and output (back end code generation) stages of a compiler.
The input to a parser is typically text in some computer language, but may also be text in a natural language or less structured textual data, in which case generally only certain parts of the text are extracted, rather than a parse tree being constructed. Parsers range from very simple functions such as scanf, to complex programs such as the frontend of a C++ compiler or the HTML parser of a web browser. An important class of simple parsing is done using regular expressions, in which a group of regular expressions defines a regular language and a regular expression engine automatically generates a parser for that language, allowing pattern matching and extraction of text. In other contexts regular expressions are instead used prior to parsing, as the lexing step whose output is then used by the parser.
The use of parsers varies by input. In the case of data languages, a parser is often found as the file reading facility of a program, such as reading in HTML or XML text; these examples are markup languages. In the case of programming languages, a parser is a component of a compiler or interpreter, which parses the source code of a computer programming language to create some form of internal representation; the parser is a key step in the compiler frontend. Programming languages tend to be specified in terms of a deterministic context-free grammar because fast and efficient parsers can be written for them. For compilers, the parsing itself can be done in one pass or multiple passes – see one-pass compiler and multi-pass compiler.
The implied disadvantages of a one-pass compiler can largely be overcome by adding fix-ups, where provision is made for code relocation during the forward pass, and the fix-ups are applied backwards when the current program segment has been recognized as having been completed. An example where such a fix-up mechanism would be useful would be a forward GOTO statement, where the target of the GOTO is unknown until the program segment is completed. In this case, the application of the fix-up would be delayed until the target of the GOTO was recognized. Conversely, a backward GOTO does not require a fix-up, as the location will already be known.
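A minimal Python sketch of such a fix-up (backpatching) scheme, with an invented instruction list and label scheme; everything here is an illustrative assumption.

    code = []     # emitted instructions
    fixups = {}   # label -> code indices awaiting its address
    labels = {}   # label -> resolved code index

    def emit_goto(label):
        if label in labels:                  # backward GOTO: target known
            code.append(("JUMP", labels[label]))
        else:                                # forward GOTO: leave a hole
            fixups.setdefault(label, []).append(len(code))
            code.append(("JUMP", None))

    def define_label(label):
        labels[label] = len(code)
        for index in fixups.pop(label, []):  # apply pending fix-ups backwards
            code[index] = ("JUMP", labels[label])

    emit_goto("end")         # forward reference: target not yet known
    code.append(("NOP",))
    define_label("end")      # hole is patched here
    print(code)              # [('JUMP', 2), ('NOP',)]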
Context-free grammars are limited in the extent to which they can express all of the requirements of a language. Informally, the reason is that the memory of such a language is limited. The grammar cannot remember the presence of a construct over an arbitrarily long input; this is necessary for a language in which, for example, a name must be declared before it may be referenced. More powerful grammars that can express this constraint, however, cannot be parsed efficiently. Thus, it is a common strategy to create a relaxed parser for a context-free grammar which accepts a superset of the desired language constructs (that is, it accepts some invalid constructs); later, the unwanted constructs can be filtered out at the semantic analysis (contextual analysis) step.
For example, in Python the following is syntactically valid code:
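    x = 1
    print(x)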
The following code, however, is syntactically valid in terms of the context-free grammar, yielding a syntax tree with the same structure as the previous, but violates the semantic rule requiring variables to be initialized before use:
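    x = 1
    print(y)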
The following example demonstrates the common case of parsing a computer language with two levels of grammar: lexical and syntactic.
The first stage is the token generation, or lexical analysis, by which the input character stream is split into meaningful symbols defined by a grammar of regular expressions. For example, a calculator program would look at an input such as "12 * (3 + 4)^2" and split it into the tokens 12, *, (, 3, +, 4, ), ^, 2, each of which is a meaningful symbol in the context of an arithmetic expression. The lexer would contain rules to tell it that the characters *, +, ^, ( and ) mark the start of a new token, so meaningless tokens like "12*" or "(3" will not be generated.
The next stage is parsing or syntactic analysis, which is checking that the tokens form an allowable expression. This is usually done with reference to a context-free grammar which recursively defines components that can make up an expression and the order in which they must appear. However, not all rules defining programming languages can be expressed by context-free grammars alone, for example type validity and proper declaration of identifiers. These rules can be formally expressed with attribute grammars.
The final phase is semantic parsing or analysis, which is working out the implications of the expression just validated and taking the appropriate action.[17] In the case of a calculator or interpreter, the action is to evaluate the expression or program; a compiler, on the other hand, would generate some kind of code. Attribute grammars can also be used to define these actions.
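The following Python sketch walks the calculator example through both grammar levels, with evaluation folded into the parse; the token shapes and function names are illustrative assumptions.

    import re

    def lex(text):
        # Token generation: numbers and single-character operators.
        tokens = []
        for number, op in re.findall(r"\s*(?:(\d+)|([-+*/^()]))", text):
            tokens.append(("NUM", int(number)) if number else ("OP", op))
        return tokens + [("END", "")]

    def parse(tokens):
        pos = 0

        def eat():
            nonlocal pos
            pos += 1
            return tokens[pos - 1][1]

        def atom():
            if tokens[pos][0] == "NUM":
                return eat()
            eat()                 # '('
            value = expr()
            eat()                 # ')'
            return value

        def power():              # '^' binds tightest, right-associative
            base = atom()
            if tokens[pos] == ("OP", "^"):
                eat()
                return base ** power()
            return base

        def term():               # '*' level
            value = power()
            while tokens[pos] == ("OP", "*"):
                eat()
                value *= power()
            return value

        def expr():               # '+' level, loosest binding
            value = term()
            while tokens[pos] == ("OP", "+"):
                eat()
                value += term()
            return value

        return expr()

    print(parse(lex("12 * (3 + 4)^2")))   # 588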
The task of the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar. This can be done in essentially two ways: top-down parsing, in which the parser starts from the start symbol and attempts to derive the input by expanding grammar rules (searching for left-most derivations), and bottom-up parsing, in which the parser starts with the input and attempts to rewrite it back to the start symbol (producing a rightmost derivation in reverse).
LL parsers and recursive-descent parsers are examples of top-down parsers that cannot accommodate left-recursive production rules. Although it has been believed that simple implementations of top-down parsing cannot accommodate direct and indirect left-recursion and may require exponential time and space complexity while parsing ambiguous context-free grammars, more sophisticated algorithms for top-down parsing have been created by Frost, Hafiz, and Callaghan[20][21] which accommodate ambiguity and left recursion in polynomial time and which generate polynomial-size representations of the potentially exponential number of parse trees. Their algorithm is able to produce both left-most and right-most derivations of an input with regard to a given context-free grammar.
An important distinction with regard to parsers is whether a parser generates a leftmost derivation or a rightmost derivation (see context-free grammar). LL parsers will generate a leftmost derivation and LR parsers will generate a rightmost derivation (although usually in reverse).[18]
Some graphical parsing algorithms have been designed for visual programming languages.[22][23] Parsers for visual languages are sometimes based on graph grammars.[24]
Adaptive parsing algorithms have been used to construct "self-extending" natural language user interfaces.[25]
A simple parser implementation reads the entire input file, performs an intermediate computation or translation, and then writes the entire output file, as in in-memory multi-pass compilers.
Alternative parser implementation approaches:
Some of the well-known parser development tools include those discussed above, such as ANTLR, Bison, Coco/R, JavaCC, and yacc.
Lookahead establishes the maximum number of incoming tokens that a parser can use to decide which rule it should use. Lookahead is especially relevant to LL, LR, and LALR parsers, where it is often explicitly indicated by affixing the lookahead to the algorithm name in parentheses, such as LALR(1).
Most programming languages, the primary target of parsers, are carefully defined in such a way that a parser with limited lookahead, typically one token, can parse them, because parsers with limited lookahead are often more efficient. One important change to this trend came in 1990 when Terence Parr created ANTLR for his Ph.D. thesis, a parser generator for efficient LL(k) parsers, where k is any fixed value.
LR parsers typically have only a few actions after seeing each token. They are shift (add this token to the stack for later reduction), reduce (pop tokens from the stack and form a syntactic construct), end, error (no known rule applies) or conflict (does not know whether to shift or reduce).
Lookahead has two advantages: it helps the parser take the correct action in case of conflicts, and it eliminates many duplicate states and eases the burden of an extra stack.
Example: parsing the expression 1 + 2 * 3. The set of expression parsing rules (the grammar) is as follows:

    Rule1: E → E + E    (an expression is the sum of two expressions)
    Rule2: E → E * E    (an expression is the product of two expressions)
    Rule3: E → number   (an expression is a simple number)
    Rule4: + has less precedence than *
Most programming languages (except for a few such as APL and Smalltalk) and algebraic formulas give higher precedence to multiplication than addition, in which case the correct interpretation of the example above is 1 + (2 * 3).
Note that Rule4 above is a semantic rule. It is possible to rewrite the grammar to incorporate this into the syntax. However, not all such rules can be translated into syntax.
Initially, Input = [1, +, 2, *, 3]. A simple non-lookahead parser, scanning left to right, reduces 1 + 2 as soon as it has seen those tokens, yielding the grouping (1 + 2) * 3; the parse tree and the code resulting from it are not correct according to the language semantics.
To correctly parse without lookahead, there are three solutions: the user has to enclose expressions within parentheses; the parser needs more logic to backtrack and retry whenever a rule is violated or incomplete; or the precedence rule (Rule4) has to be incorporated into the grammar itself so that reduction is delayed until it is clear which rule applies.
With lookahead, by contrast, the parse tree generated is correct and the parser is simply more efficient than non-lookahead parsers. This is the strategy followed in LALR parsers.
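A small Python sketch of the lookahead-based strategy, where the next token and a precedence table decide whether to keep extending the right operand, so that 1 + 2 * 3 groups as 1 + (2 * 3); names and the AST shape are illustrative assumptions.

    def parse_expr(tokens, pos=0, min_prec=0):
        prec = {"+": 1, "*": 2}
        node = tokens[pos]                  # a number
        pos += 1
        while pos < len(tokens) and prec[tokens[pos]] >= min_prec:
            op = tokens[pos]                # the lookahead token
            rhs, pos = parse_expr(tokens, pos + 1, prec[op] + 1)
            node = (op, node, rhs)
        return node, pos

    tree, _ = parse_expr([1, "+", 2, "*", 3])
    print(tree)   # ('+', 1, ('*', 2, 3))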
https://en.wikipedia.org/wiki/Lookahead_(parsing)
Lexical tokenization is the conversion of a text into (semantically or syntactically) meaningful lexical tokens belonging to categories defined by a "lexer" program. In the case of a natural language, those categories include nouns, verbs, adjectives, punctuation, etc. In the case of a programming language, the categories include identifiers, operators, grouping symbols, data types and language keywords. Lexical tokenization is related to the type of tokenization used in large language models (LLMs), but with two differences. First, lexical tokenization is usually based on a lexical grammar, whereas LLM tokenizers are usually probability-based. Second, LLM tokenizers perform a second step that converts the tokens into numerical values.
A rule-based program performing lexical tokenization is called a tokenizer,[1] or scanner, although scanner is also a term for the first stage of a lexer. A lexer forms the first phase of a compiler frontend in processing. Analysis generally occurs in one pass. Lexers and parsers are most often used for compilers, but can be used for other computer language tools, such as prettyprinters or linters. Lexing can be divided into two stages: the scanning, which segments the input string into syntactic units called lexemes and categorizes these into token classes, and the evaluating, which converts lexemes into processed values.
Lexers are generally quite simple, with most of the complexity deferred to the syntactic analysis or semantic analysis phases, and can often be generated by a lexer generator, notably lex or derivatives. However, lexers can sometimes include some complexity, such as phrase structure processing to make input easier and simplify the parser, and may be written partly or fully by hand, either to support more features or for performance.
What is called "lexeme" in rule-based natural language processing is not equal to what is called lexeme in linguistics. What is called "lexeme" in rule-based natural language processing can be equal to the linguistic equivalent only in analytic languages, such as English, but not in highly synthetic languages, such as fusional languages. What is called a lexeme in rule-based natural language processing is more similar to what is called a word in linguistics (not to be confused with a word in computer architecture), although in some cases it may be more similar to a morpheme.
A lexical token is a string with an assigned and thus identified meaning, in contrast to the probabilistic token used in large language models. A lexical token consists of a token name and an optional token value. The token name is a category of a rule-based lexical unit.[2]
Consider this expression in the C programming language:

    x = a + b * 2;

The lexical analysis of this expression yields the following sequence of tokens:

    [(identifier, x), (operator, =), (identifier, a), (operator, +), (identifier, b), (operator, *), (literal, 2), (separator, ;)]
A token name is what might be termed a part of speech in linguistics.
Lexical tokenization is the conversion of a raw text into (semantically or syntactically) meaningful lexical tokens, belonging to categories defined by a "lexer" program, such as identifiers, operators, grouping symbols, and data types. The resulting tokens are then passed on to some other form of processing. The process can be considered a sub-task of parsing input.
For example, in the text string:

    The quick brown fox jumps over the lazy dog

the string is not implicitly segmented on spaces, as a natural language speaker would do. The raw input, the 43 characters, must be explicitly split into the 9 tokens with a given space delimiter (i.e., matching the string " " or regular expression /\s{1}/).
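In Python, for instance, this explicit segmentation is a one-liner:

    import re
    text = "The quick brown fox jumps over the lazy dog"
    print(len(text))                  # 43 characters
    print(re.split(r"\s{1}", text))   # the 9 tokens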
When a token class represents more than one possible lexeme, the lexer often saves enough information to reproduce the original lexeme, so that it can be used in semantic analysis. The parser typically retrieves this information from the lexer and stores it in the abstract syntax tree. This is necessary in order to avoid information loss in the case where numbers may also be valid identifiers.
Tokens are identified based on the specific rules of the lexer. Some methods used to identify tokens include regular expressions, specific sequences of characters termed a flag, specific separating characters called delimiters, and explicit definition by a dictionary. Special characters, including punctuation characters, are commonly used by lexers to identify tokens because of their natural use in written and programming languages. A lexical analyzer generally does nothing with combinations of tokens, a task left for a parser. For example, a typical lexical analyzer recognizes parentheses as tokens but does nothing to ensure that each "(" is matched with a ")".
When a lexer feeds tokens to the parser, the representation used is typically an enumerated type, which is a list of number representations. For example, "Identifier" can be represented with 0, "Assignment operator" with 1, "Addition operator" with 2, etc.
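A sketch of such an enumerated token representation in Python; the names and numbering are illustrative assumptions.

    from enum import IntEnum

    class Tok(IntEnum):
        IDENTIFIER = 0
        ASSIGNMENT_OPERATOR = 1
        ADDITION_OPERATOR = 2

    # The parser receives small integers but readable names survive:
    print(Tok.IDENTIFIER, int(Tok.IDENTIFIER))   # Tok.IDENTIFIER 0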
Tokens are often defined by regular expressions, which are understood by a lexical analyzer generator such as lex, or by hand-coded equivalent finite-state automata. The lexical analyzer (generated automatically by a tool like lex or hand-crafted) reads in a stream of characters, identifies the lexemes in the stream, and categorizes them into tokens. This is termed tokenizing. If the lexer finds an invalid token, it will report an error.
Following tokenizing is parsing. From there, the interpreted data may be loaded into data structures for general use, interpretation, or compiling.
The specification of a programming language often includes a set of rules, the lexical grammar, which defines the lexical syntax. The lexical syntax is usually a regular language, with the grammar rules consisting of regular expressions; they define the set of possible character sequences (lexemes) of a token. A lexer recognizes strings, and for each kind of string found, the lexical program takes an action, most simply producing a token.
Two important common lexical categories are white space and comments. These are also defined in the grammar and processed by the lexer but may be discarded (not producing any tokens) and considered non-significant, at most separating two tokens (as in if x instead of ifx). There are two important exceptions to this. First, in off-side rule languages that delimit blocks with indenting, initial whitespace is significant, as it determines block structure, and is generally handled at the lexer level; see phrase structure, below. Secondly, in some uses of lexers, comments and whitespace must be preserved – for example, a prettyprinter also needs to output the comments, and some debugging tools may provide messages to the programmer showing the original source code. In the 1960s, notably for ALGOL, whitespace and comments were eliminated as part of the line reconstruction phase (the initial phase of the compiler frontend), but this separate phase has been eliminated and these are now handled by the lexer.
The first stage, the scanner, is usually based on a finite-state machine (FSM). It has encoded within it information on the possible sequences of characters that can be contained within any of the tokens it handles (individual instances of these character sequences are termed lexemes). For example, an integer lexeme may contain any sequence of numerical digit characters. In many cases, the first non-whitespace character can be used to deduce the kind of token that follows, and subsequent input characters are then processed one at a time until reaching a character that is not in the set of characters acceptable for that token (this is termed the maximal munch, or longest match, rule). In some languages, the lexeme creation rules are more complex and may involve backtracking over previously read characters. For example, in C, one 'L' character is not enough to distinguish between an identifier that begins with 'L' and a wide-character string literal.
A lexeme, however, is only a string of characters known to be of a certain kind (e.g., a string literal, a sequence of letters). In order to construct a token, the lexical analyzer needs a second stage, the evaluator, which goes over the characters of the lexeme to produce a value. The lexeme's type combined with its value is what properly constitutes a token, which can be given to a parser. Some tokens such as parentheses do not really have values, and so the evaluator function for these can return nothing: only the type is needed. Similarly, sometimes evaluators can suppress a lexeme entirely, concealing it from the parser, which is useful for whitespace and comments. The evaluators for identifiers are usually simple (literally representing the identifier), but may include some unstropping. The evaluators for integer literals may pass the string on (deferring evaluation to the semantic analysis phase), or may perform evaluation themselves, which can be involved for different bases or floating point numbers. For a simple quoted string literal, the evaluator needs to remove only the quotes, but the evaluator for an escaped string literal incorporates a lexer, which unescapes the escape sequences.
For example, in the source code of a computer program, the string

    net_worth_future = (assets - liabilities);

might be converted into the following lexical token stream; whitespace is suppressed and special characters have no value:

    IDENTIFIER net_worth_future
    EQUALS
    OPEN_PARENTHESIS
    IDENTIFIER assets
    MINUS
    IDENTIFIER liabilities
    CLOSE_PARENTHESIS
    SEMICOLON
Lexers may be written by hand. This is practical if the list of tokens is small, but lexers generated by automated tooling as part of a compiler-compiler toolchain are more practical for a larger number of potential tokens. These tools generally accept regular expressions that describe the tokens allowed in the input stream. Each regular expression is associated with a production rule in the lexical grammar of the programming language that evaluates the lexemes matching the regular expression. These tools may generate source code that can be compiled and executed or construct a state transition table for a finite-state machine (which is plugged into template code for compiling and executing).
Regular expressions compactly represent patterns that the characters in lexemes might follow. For example, for an English-based language, an IDENTIFIER token might be any English alphabetic character or an underscore, followed by any number of instances of ASCII alphanumeric characters and/or underscores. This could be represented compactly by the string [a-zA-Z_][a-zA-Z_0-9]*. This means "any character a-z, A-Z or _, followed by 0 or more of a-z, A-Z, _ or 0-9".
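This pattern can be checked directly, for example in Python:

    import re
    ident = re.compile(r"[a-zA-Z_][a-zA-Z_0-9]*")
    print(bool(ident.fullmatch("_total9")))   # True
    print(bool(ident.fullmatch("9total")))    # False: may not start with a digit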
Regular expressions and the finite-state machines they generate are not powerful enough to handle recursive patterns, such as "n opening parentheses, followed by a statement, followed by n closing parentheses." They are unable to keep count and verify that n is the same on both sides, unless a finite set of permissible values exists for n. It takes a full parser to recognize such patterns in their full generality. A parser can push parentheses on a stack and then try to pop them off and see if the stack is empty at the end (see the example[3] in the Structure and Interpretation of Computer Programs book).
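A minimal Python sketch of that stack-based check:

    def balanced(text):
        depth = []
        for ch in text:
            if ch == "(":
                depth.append(ch)          # push on open
            elif ch == ")":
                if not depth:
                    return False          # close with nothing to pop
                depth.pop()
        return not depth                  # empty stack means balanced

    print(balanced("((x))"), balanced("((x)"))   # True False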
Typically, lexical tokenization occurs at the word level. However, it is sometimes difficult to define what is meant by a "word". Often, a tokenizer relies on simple heuristics: for example, punctuation and whitespace may or may not be included in the resulting list of tokens; all contiguous strings of alphabetic characters are part of one token, and likewise with numbers; and tokens are separated by whitespace characters, such as a space or line break, or by punctuation characters.
In languages that use inter-word spaces (such as most that use the Latin alphabet, and most programming languages), this approach is fairly straightforward. However, even here there are many edge cases such as contractions, hyphenated words, emoticons, and larger constructs such as URIs (which for some purposes may count as single tokens). A classic example is "New York-based", which a naive tokenizer may break at the space even though the better break is (arguably) at the hyphen.
Tokenization is particularly difficult for languages written in scriptio continua, which exhibit no word boundaries, such as Ancient Greek, Chinese,[4] or Thai. Agglutinative languages, such as Korean, also make tokenization tasks complicated.
Some ways to address the more difficult problems include developing more complex heuristics, querying a table of common special cases, or fitting the tokens to a language model that identifies collocations in a later processing step.
Lexers are often generated by a lexer generator, analogous to parser generators, and such tools often come together. The most established is lex, paired with the yacc parser generator, or rather some of their many reimplementations, like flex (often paired with GNU Bison). These generators are a form of domain-specific language, taking in a lexical specification – generally regular expressions with some markup – and emitting a lexer.
These tools yield very fast development, which is very important in early development, both to get a working lexer and because a language specification may change often. Further, they often provide advanced features, such as pre- and post-conditions which are hard to program by hand. However, an automatically generated lexer may lack flexibility, and thus may require some manual modification, or an entirely manually written lexer.
Lexer performance is a concern, and optimizing is worthwhile, more so in stable languages where the lexer runs very often (such as C or HTML). lex/flex-generated lexers are reasonably fast, but improvements of two to three times are possible using more tuned generators. Hand-written lexers are sometimes used, but modern lexer generators produce faster lexers than most hand-coded ones. The lex/flex family of generators uses a table-driven approach which is much less efficient than the directly coded approach. With the latter approach the generator produces an engine that directly jumps to follow-up states via goto statements. Tools like re2c[5] have proven to produce engines that are between two and three times faster than flex-produced engines. It is in general difficult to hand-write analyzers that perform better than engines generated by these latter tools.
Lexical analysis mainly segments the input stream of characters into tokens, simply grouping the characters into pieces and categorizing them. However, the lexing may be significantly more complex; most simply, lexers may omit tokens or insert added tokens. Omitting tokens, notably whitespace and comments, is very common when these are not needed by the compiler. Less commonly, added tokens may be inserted. This is done mainly to group tokens into statements, or statements into blocks, to simplify the parser.
Line continuation is a feature of some languages where a newline is normally a statement terminator. Most often, ending a line with a backslash (immediately followed by a newline) results in the line being continued – the following line is joined to the prior line. This is generally done in the lexer: the backslash and newline are discarded, rather than the newline being tokenized. Examples include bash,[6] other shell scripts and Python.[7]
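A sketch of how a lexer might implement this, in Python, as a pre-pass over the raw source (the function name is illustrative):

def join_continuations(source):
    # A backslash immediately followed by a newline is discarded,
    # joining the line to the following one.
    return source.replace('\\\n', '')

print(join_continuations('total = 1 + \\\n        2'))
# -> 'total = 1 +         2'  (one logical line)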
Many languages use the semicolon as a statement terminator. Most often this is mandatory, but in some languages the semicolon is optional in many contexts. This is mainly done at the lexer level, where the lexer outputs a semicolon into the token stream, despite one not being present in the input character stream, and is termed semicolon insertion or automatic semicolon insertion. In these cases, semicolons are part of the formal phrase grammar of the language, but may not be found in input text, as they can be inserted by the lexer. Optional semicolons or other terminators or separators are also sometimes handled at the parser level, notably in the case of trailing commas or semicolons.
Semicolon insertion is a feature of BCPL and its distant descendant Go,[8] though it is absent in B or C.[9] Semicolon insertion is present in JavaScript, though the rules are somewhat complex and much-criticized; to avoid bugs, some recommend always using semicolons, while others use initial semicolons, termed defensive semicolons, at the start of potentially ambiguous statements.
Semicolon insertion (in languages with semicolon-terminated statements) and line continuation (in languages with newline-terminated statements) can be seen as complementary: semicolon insertion adds a token even though newlines generally do not generate tokens, while line continuation prevents a token from being generated even though newlines generally do generate tokens.
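A minimal sketch of Go-style semicolon insertion in Python (the rule shown, inserting a semicolon at a newline when the previous token could end a statement, is a simplification of Go's actual rule; the token kinds are illustrative):

# Token kinds that may end a statement (simplified).
ENDERS = {'IDENT', 'NUMBER', 'RPAREN', 'RBRACE'}

def insert_semicolons(tokens):
    """tokens: list of (kind, text) pairs; NEWLINE tokens mark line ends."""
    out = []
    for kind, text in tokens:
        if kind == 'NEWLINE':
            if out and out[-1][0] in ENDERS:
                out.append(('SEMI', ';'))   # inserted, not present in the input
        else:
            out.append((kind, text))
    return out

print(insert_semicolons([('IDENT', 'x'), ('NEWLINE', '\n'),
                         ('IDENT', 'y'), ('NEWLINE', '\n')]))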
The off-side rule (blocks determined by indenting) can be implemented in the lexer, as in Python, where increasing the indenting results in the lexer emitting an INDENT token and decreasing the indenting results in the lexer emitting one or more DEDENT tokens.[10] These tokens correspond to the opening brace { and closing brace } in languages that use braces for blocks, and mean that the phrase grammar does not depend on whether braces or indenting are used. This requires that the lexer hold state, namely a stack of indent levels, so that it can detect changes in indenting; hence the lexical grammar is not context-free: INDENT and DEDENT depend on the contextual information of prior indent levels.
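A sketch of the stack-based bookkeeping in Python (emitting only the block-structure tokens; tabs, blank lines, and error handling are ignored):

def offside_tokens(lines):
    indents = [0]                      # stack of indent levels
    for line in lines:
        width = len(line) - len(line.lstrip(' '))
        if width > indents[-1]:
            indents.append(width)
            yield 'INDENT'
        while width < indents[-1]:
            indents.pop()
            yield 'DEDENT'             # one DEDENT per closed level
        yield ('LINE', line.strip())

print(list(offside_tokens(['if x:', '    f()', '    g()', 'h()'])))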
Generally lexical grammars are context-free, or almost so, and thus require no looking back or ahead, or backtracking, which allows a simple, clean, and efficient implementation. This also allows simple one-way communication from lexer to parser, without needing any information flowing back to the lexer.
There are exceptions, however. Simple examples include semicolon insertion in Go, which requires looking back one token; concatenation of consecutive string literals in Python,[7] which requires holding one token in a buffer before emitting it (to see if the next token is another string literal); and the off-side rule in Python, which requires maintaining a count of indent levels (indeed, a stack of indent levels). These examples all only require lexical context, and while they complicate a lexer somewhat, they are invisible to the parser and later phases.
A more complex example is the lexer hack in C, where the token class of a sequence of characters cannot be determined until the semantic analysis phase, since typedef names and variable names are lexically identical but constitute different token classes. Thus in the hack, the lexer calls the semantic analyzer (say, the symbol table) and checks if the sequence is a typedef name. In this case, information must flow back to the lexer not only from the parser, but from the semantic analyzer, which complicates design.
|
https://en.wikipedia.org/wiki/Token_scanner
|
A word list is a list of words in a lexicon, generally sorted by frequency of occurrence (either by graded levels, or as a ranked list). A word list is compiled by lexical frequency analysis within a given text corpus, and is used in corpus linguistics to investigate genealogies and evolution of languages and texts. A word which appears only once in the corpus is called a hapax legomenon. In pedagogy, word lists are used in curriculum design for vocabulary acquisition. A lexicon sorted by frequency "provides a rational basis for making sure that learners get the best return for their vocabulary learning effort" (Nation 1997), but is mainly intended for course writers, not directly for learners. Frequency lists are also made for lexicographical purposes, serving as a sort of checklist to ensure that common words are not left out. Some major pitfalls are the corpus content, the corpus register, and the definition of "word". While word counting is a thousand years old, with gigantic analyses still done by hand in the mid-20th century, natural language electronic processing of large corpora such as movie subtitles (SUBTLEX megastudy) has accelerated the research field.
In computational linguistics, a frequency list is a sorted list of words (word types) together with their frequency, where frequency here usually means the number of occurrences in a given corpus, from which the rank can be derived as the position in the list.
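In its simplest form, such a list can be derived mechanically from a corpus; a sketch in Python:

from collections import Counter

text = "the cat sat on the mat because the mat was warm"
counts = Counter(text.split())               # word type -> frequency
freq_list = counts.most_common()             # sorted by descending frequency
for rank, (word, freq) in enumerate(freq_list, start=1):
    print(rank, word, freq)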
Nation (Nation 1997) noted the incredible help provided by computing capabilities, making corpus analysis much easier. He cited several key issues which influence the construction of frequency lists:
Most currently available studies are based on written text corpora, which are more easily available and easier to process.
However, New et al. 2007 proposed to tap into the large number of subtitles available online to analyse large amounts of speech. Brysbaert & New 2009 made a long critical evaluation of this traditional textual analysis approach, and supported a move toward speech analysis and analysis of film subtitles available online. The initial research saw a handful of follow-up studies,[1] providing valuable frequency count analyses for various languages. In-depth SUBTLEX studies over cleaned-up open subtitles were produced for French (New et al. 2007), American English (Brysbaert & New 2009; Brysbaert, New & Keuleers 2012), Dutch (Keuleers & New 2010), Chinese (Cai & Brysbaert 2010), Spanish (Cuetos et al. 2011), Greek (Dimitropoulou et al. 2010), Vietnamese (Pham, Bolger & Baayen 2011), Brazilian Portuguese (Tang 2012), European Portuguese (Soares et al. 2015), Albanian (Avdyli & Cuetos 2013), Polish (Mandera et al. 2014), Catalan (2019[2]), and Welsh (Van Veuhen et al. 2024[3]). SUBTLEX-IT (2015) provides raw data only.[4]
In any case, the basic "word" unit should be defined. For Latin scripts, words are usually one or several characters separated either by spaces or punctuation. But exceptions can arise: English "can't" and French "aujourd'hui" include punctuation, while French "château d'eau" denotes a concept different from the simple addition of its components while including a space. It may also be preferable to group words of a word family under the representation of its base word. Thus, possible, impossible, possibility are words of the same word family, represented by the base word *possib*. For statistical purposes, all these words are summed up under the base word form *possib*, allowing the ranking of a concept's and form's occurrence. Moreover, other languages may present specific difficulties. Such is the case of Chinese, which does not use spaces between words, and where a specified chain of several characters can be interpreted as either a phrase of unique-character words or as a multi-character word.
It seems that Zipf's law holds for frequency lists drawn from longer texts of any natural language. Frequency lists are a useful tool when building an electronic dictionary, which is a prerequisite for a wide range of applications in computational linguistics.
German linguists define the Häufigkeitsklasse (frequency class) N of an item in the list using the base 2 logarithm of the ratio between its frequency and the frequency of the most frequent item:

N = ⌊0.5 − log₂(frequency of this item / frequency of the most frequent item)⌋

where ⌊…⌋ is the floor function. The most common item belongs to frequency class 0 (zero) and any item that is approximately half as frequent belongs in class 1. In the example list above, the misspelled word outragious has a ratio of 76/3789654 and belongs in class 16.
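A sketch of the computation in Python, reproducing the example above (the function name is illustrative):

from math import floor, log2

def frequency_class(freq, max_freq):
    # N = floor(0.5 - log2(freq / max_freq))
    return floor(0.5 - log2(freq / max_freq))

print(frequency_class(3789654, 3789654))  # 0: the most frequent item
print(frequency_class(76, 3789654))       # 16: e.g. the misspelling "outragious"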
Frequency lists, together with semantic networks, are used to identify the least common, specialized terms to be replaced by their hypernyms in a process of semantic compression.
Those lists are not intended to be given directly to students, but rather to serve as a guideline for teachers and textbook authors (Nation 1997). Paul Nation's modern language teaching summary encourages first to "move from high frequency vocabulary and special purposes [thematic] vocabulary to low frequency vocabulary, then to teach learners strategies to sustain autonomous vocabulary expansion" (Nation 2006).
Word frequency is known to have various effects (Brysbaert et al. 2011; Rudell 1993). Memorization is positively affected by higher word frequency, likely because the learner is subject to more exposures (Laufer 1997). Lexical access is positively influenced by high word frequency, a phenomenon called the word frequency effect (Segui et al.). The effect of word frequency is related to the effect of age-of-acquisition, the age at which the word was learned.
Below is a review of available resources.
Word counting is an ancient field,[5] with discussions known from Hellenistic times. In 1944, Edward Thorndike, Irving Lorge and colleagues[6] hand-counted 18,000,000 running words to provide the first large-scale English language frequency list, before modern computers made such projects far easier (Nation 1997). These 20th-century works all suffer from their age. In particular, words relating to technology, such as "blog", which in 2014 was #7665 in frequency[7] in the Corpus of Contemporary American English,[8] was first attested in 1999[9][10][11] and does not appear in any of these three lists.
The Teacher's Word Book contains 30,000 lemmas, or about 13,000 word families (Goulden, Nation and Read, 1990). A corpus of 18 million written words was analysed by hand. The size of its source corpus increased its usefulness, but its age, and language changes, have reduced its applicability (Nation 1997).
The General Service List contains 2,000 headwords divided into two sets of 1,000 words. A corpus of 5 million written words was analyzed in the 1940s. The rate of occurrence (%) for different meanings, and parts of speech, of the headword are provided. Various criteria, other than frequency and range, were carefully applied to the corpus. Thus, despite its age, some errors, and its corpus being entirely written text, it is still an excellent database of word frequency, frequency of meanings, and reduction of noise (Nation 1997). This list was updated in 2013 by Dr. Charles Browne, Dr. Brent Culligan and Joseph Phillips as the New General Service List.
A corpus of 5 million running words, from written texts used in United States schools (various grades, various subject areas). Its value is in its focus on school teaching materials, and its tagging of words by the frequency of each word in each school grade and in each subject area (Nation 1997).
These now contain 1 million words from a written corpus representing different dialects of English. These sources are used to produce frequency lists (Nation 1997).
A review has been made by New & Pallier.
An attempt was made in the 1950s–60s with the Français fondamental. It includes the F.F.1 list with 1,500 high-frequency words, completed by a later F.F.2 list with 1,700 mid-frequency words, and the most used syntax rules.[12] It is claimed that 70 grammatical words constitute 50% of communicative sentences,[13][14] while 3,680 words provide about 95–98% coverage.[15] A list of 3,000 frequent words is available.[16]
The French Ministry of Education also provides a ranked list of the 1,500 most frequent word families, provided by the lexicologist Étienne Brunet.[17] Jean Baudot made a study on the model of the American Brown study, entitled "Fréquences d'utilisation des mots en français écrit contemporain".[18]
More recently, the project Lexique3 provides 142,000 French words, with orthography, phonetics, syllabification, part of speech, gender, number of occurrences in the source corpus, frequency rank, associated lexemes, etc., available under the open licence CC-by-sa-4.0.[19]
Lexique3 is an ongoing study from which the SUBTLEX movement cited above originates. New et al. 2007 made a completely new count based on online film subtitles.
There have been several studies of Spanish word frequency (Cuetos et al. 2011).[20]
Chinese corpora have long been studied from the perspective of frequency lists. The historical way to learn Chinese vocabulary is based on character frequency (Allanic 2003). American sinologist John DeFrancis mentioned its importance for learning and teaching Chinese as a foreign language in Why Johnny Can't Read Chinese (DeFrancis 1966). As a frequency toolkit, Da (Da 1998) and the Taiwanese Ministry of Education (TME 1997) provided large databases with frequency ranks for characters and words. The HSK list of 8,848 high- and medium-frequency words in the People's Republic of China, and the Republic of China (Taiwan)'s TOP list of about 8,600 common traditional Chinese words are two other lists displaying common Chinese words and characters. Following the SUBTLEX movement, Cai & Brysbaert 2010 recently made a rich study of Chinese word and character frequencies.
Wiktionary contains frequency lists in more languages.[21]
Most frequently used words in different languages based on Wikipedia or combined corpora.[22]
|
https://en.wikipedia.org/wiki/Lexical_frequency_analysis
|
In linguistics, lexicalization is the process of adding words, set phrases, or word patterns to a language's lexicon.
Whether word formation and lexicalization refer to the same process is controversial within the field of linguistics. Most linguists agree that there is a distinction, but there are many ideas of what the distinction is.[1] Lexicalization may be simple, for example borrowing a word from another language, or more involved, as in calque or loan translation, wherein a foreign phrase is translated literally, as in marché aux puces, or in English, flea market.
Other mechanisms include compounding, abbreviation, and blending.[2] Particularly interesting from the perspective of historical linguistics is the process by which ad hoc phrases become set in the language, and eventually become new words (see lexicon). Lexicalization contrasts with grammaticalization, and the relationship between the two processes is subject to some debate.
In psycholinguistics, lexicalization is the process of going from meaning to sound in speech production. The most widely accepted model of speech production, in which an underlying concept is converted into a word, holds that this is at least a two-stage process.
First, the semantic form (which is specified for meaning) is converted into a lemma, which is an abstract form specified for semantic and syntactic information (how a word can be used in a sentence), but not for phonological information (how a word is pronounced). The next stage is the lexeme, which is phonologically specified.[3]
Some recent work has challenged this model, suggesting for example that there is no lemma stage, and that syntactic information is retrieved in the semantic and phonological stages.[4]
One way sign languages adopt new words is through fingerspelling, but in some cases these borrowings undergo a systemic transformation in form and meaning to become what are referred to as 'lexicalized signs'[5] or 'loan signs'. These manual borrowings can act the same as other signs and can undergo regular morphological changes.[6] For example, regular, predictable changes may be made to hand shape and palm orientation. Similarly, movement and location of the sign may add grammatical information. Letters may also be elided or omitted.[5][7] Lexicalized signs may also be developed from gestures related to handling an object.[8]
|
https://en.wikipedia.org/wiki/Lexicalization
|
This is a list of notable lexer generators and parser generators for various language classes.
Regular languages are a category of languages (sometimes termed Chomsky Type 3) which can be matched by a state machine (more specifically, by a deterministic finite automaton or a nondeterministic finite automaton) constructed from a regular expression. In particular, a regular language can match constructs like "A follows B", "either A or B", "A, followed by zero or more instances of B", but cannot match constructs which require consistency between non-adjacent elements, such as "some instances of A followed by the same number of instances of B", and also cannot express the concept of recursive "nesting" ("every A is eventually followed by a matching B"). A classic example of a problem which a regular grammar cannot handle is the question of whether a given string contains correctly nested parentheses. (This is typically handled by a Chomsky Type 2 grammar, also termed a context-free grammar.)
Context-free languages are a category of languages (sometimes termed Chomsky Type 2) which can be matched by a sequence of replacement rules, each of which essentially maps each non-terminal element to a sequence of terminal elements and/or other nonterminal elements. Grammars of this type can match anything that can be matched by a regular grammar, and furthermore, can handle the concept of recursive "nesting" ("every A is eventually followed by a matching B"), such as the question of whether a given string contains correctly nested parentheses. The rules of context-free grammars are purely local, however, and therefore cannot handle questions that require non-local analysis such as "Does a declaration exist for every variable that is used in a function?". To do so technically would require a more sophisticated grammar, like a Chomsky Type 1 grammar, also termed a context-sensitive grammar. However, parser generators for context-free grammars often support the ability for user-written code to introduce limited amounts of context-sensitivity. (For example, upon encountering a variable declaration, user-written code could save the name and type of the variable into an external data structure, so that these could be checked against later variable references detected by the parser.)
The deterministic context-free languages are a proper subset of the context-free languages which can be efficiently parsed by deterministic pushdown automata.
This table compares parser generators with parsing expression grammars and deterministic Boolean grammars.
This table compares parser generator languages with a general context-free grammar, a conjunctive grammar, or a Boolean grammar.
This table compares parser generators with context-sensitive grammars.
|
https://en.wikipedia.org/wiki/List_of_parser_generators
|
This article lists notable program transformation systems in alphabetical order:
|
https://en.wikipedia.org/wiki/List_of_program_transformation_systems
|
In computer science, program synthesis is the task of constructing a program that provably satisfies a given high-level formal specification. In contrast to program verification, the program is to be constructed rather than given; however, both fields make use of formal proof techniques, and both comprise approaches of different degrees of automation. In contrast to automatic programming techniques, specifications in program synthesis are usually non-algorithmic statements in an appropriate logical calculus.[1]
The primary application of program synthesis is to relieve the programmer of the burden of writing correct, efficient code that satisfies a specification. However, program synthesis also has applications to superoptimization and inference of loop invariants.[2]
During the Summer Institute of Symbolic Logic at Cornell University in 1957, Alonzo Church defined the problem of synthesizing a circuit from mathematical requirements.[3] Even though the work only refers to circuits and not programs, the work is considered to be one of the earliest descriptions of program synthesis and some researchers refer to program synthesis as "Church's Problem". In the 1960s, a similar idea for an "automatic programmer" was explored by researchers in artificial intelligence.
Since then, various research communities have considered the problem of program synthesis. Notable works include the 1969 automata-theoretic approach by Büchi and Landweber,[4] and the works by Manna and Waldinger (c. 1980). The development of modern high-level programming languages can also be understood as a form of program synthesis.
The early 21st century has seen a surge of practical interest in the idea of program synthesis in the formal verification community and related fields. Armando Solar-Lezama showed that it is possible to encode program synthesis problems in Boolean logic and use algorithms for the Boolean satisfiability problem to automatically find programs.[5]
In 2013, a unified framework for program synthesis problems called Syntax-guided Synthesis (stylized SyGuS) was proposed by researchers at UPenn, UC Berkeley, and MIT.[6] The input to a SyGuS algorithm consists of a logical specification along with a context-free grammar of expressions that constrains the syntax of valid solutions.[7] For example, to synthesize a function f that returns the maximum of two integers, the logical specification might look like this:
(f(x, y) = x ∨ f(x, y) = y) ∧ f(x, y) ≥ x ∧ f(x, y) ≥ y
and the grammar might be:

E ::= x | y | 0 | 1 | E + E | ite(B, E, E)
B ::= E ≤ E | E = E | E ≥ E

where "ite" stands for "if-then-else". The expression

ite(x ≤ y, y, x)

would be a valid solution, because it conforms to the grammar and the specification.
From 2014 through 2019, the yearly Syntax-Guided Synthesis Competition (or SyGuS-Comp) compared the different algorithms for program synthesis in a competitive event.[8] The competition used a standardized input format, SyGuS-IF, based on SMT-Lib 2. For example, the following SyGuS-IF encodes the problem of synthesizing the maximum of two integers (as presented above):
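; a sketch of the encoding (SyGuS-IF uses SMT-Lib 2 syntax; reconstruction, details may vary)
(set-logic LIA)
(synth-fun f ((x Int) (y Int)) Int)
(declare-var x Int)
(declare-var y Int)
(constraint (or (= (f x y) x) (= (f x y) y)))
(constraint (>= (f x y) x))
(constraint (>= (f x y) y))
(check-synth)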
A compliant solver might return the following output:
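(define-fun f ((x Int) (y Int)) Int (ite (<= x y) y x))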
Counter-example guided inductive synthesis (CEGIS) is an effective approach to building sound program synthesizers.[9][10] CEGIS involves the interplay of two components: a generator, which generates candidate programs, and a verifier, which checks whether the candidates satisfy the specification.
Given a set of inputs I, a set of possible programs P, and a specification S, the goal of program synthesis is to find a program p in P such that for all inputs i in I, S(p, i) holds. CEGIS is parameterized over a generator and a verifier:
CEGIS runs the generator and verifier in a loop, accumulating counter-examples:
Implementations of CEGIS typically use SMT solvers as verifiers.
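A toy sketch of the loop in Python, synthesizing max over a small fixed candidate space (here the verifier is a brute-force search over a finite input domain rather than an SMT solver; all names are illustrative):

# Candidate programs P: a few expressions over inputs x, y.
CANDIDATES = [
    lambda x, y: x,
    lambda x, y: y,
    lambda x, y: x if x >= y else y,      # the intended max
]

def spec(p, x, y):                        # S(p, i): p behaves like max on input i
    m = p(x, y)
    return m >= x and m >= y and (m == x or m == y)

def verify(p, domain):
    """Return a counter-example input, or None if p meets the spec on domain."""
    return next((i for i in domain if not spec(p, *i)), None)

def cegis(domain):
    examples = []                         # accumulated counter-examples
    for p in CANDIDATES:                  # generator: propose candidates
        if all(spec(p, x, y) for x, y in examples):   # cheap check first
            cex = verify(p, domain)       # verifier
            if cex is None:
                return p
            examples.append(cex)
    return None

domain = [(a, b) for a in range(-3, 4) for b in range(-3, 4)]
print(cegis(domain)(2, 5))                # 5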
CEGIS was inspired by counterexample-guided abstraction refinement (CEGAR).[11]
The framework of Manna and Waldinger, published in 1980,[12][13] starts from a user-given first-order specification formula. For that formula, a proof is constructed, thereby also synthesizing a functional program from unifying substitutions.
The framework is presented in a table layout, the columns containing assertions, goals, a program term corresponding to each assertion or goal, and a justification for each line.
Initially, background knowledge, pre-conditions, and post-conditions are entered into the table. After that, appropriate proof rules are applied manually. The framework has been designed to enhance human readability of intermediate formulas: contrary to classical resolution, it does not require clausal normal form, but allows one to reason with formulas of arbitrary structure and containing any junctors ("non-clausal resolution"). The proof is complete when true has been derived in the Goals column, or, equivalently, false in the Assertions column. Programs obtained by this approach are guaranteed to satisfy the specification formula started from; in this sense they are correct by construction.[14] Only a minimalist, yet Turing-complete,[15] purely functional programming language, consisting of conditional, recursion, and arithmetic and other operators,[note 3] is supported.
Case studies performed within this framework synthesized algorithms to compute e.g. division, remainder,[16] square root,[17] term unification,[18] answers to relational database queries[19] and several sorting algorithms.[20][21]
Proof rules include:
Murray has shown these rules to be complete for first-order logic.[24] In 1986, Manna and Waldinger added generalized E-resolution and paramodulation rules to handle also equality;[25] later, these rules turned out to be incomplete (but nevertheless sound).[26]
As a toy example, a functional program to compute the maximum M of two numbers x and y can be derived as follows.
Starting from the requirement description "The maximum is larger than or equal to any given number, and is one of the given numbers", the first-order formula ∀X ∀Y ∃M : X ≤ M ∧ Y ≤ M ∧ (X = M ∨ Y = M) is obtained as its formal translation. This formula is to be proved. By reverse Skolemization,[note 4] the specification in line 10 is obtained, an upper- and lower-case letter denoting a variable and a Skolem constant, respectively.
After applying a transformation rule for the distributive law in line 11, the proof goal is a disjunction, and hence can be split into two cases, viz. lines 12 and 13.
Turning to the first case, resolving line 12 with the axiom in line 1 leads to instantiation of the program variable M in line 14. Intuitively, the last conjunct of line 12 prescribes the value that M must take in this case. Formally, the non-clausal resolution rule shown in line 57 above is applied to lines 12 and 1, with
yielding ¬(true ∧ false) ∧ (x ≤ x ∧ y ≤ x ∧ true), which simplifies to x ≤ x ∧ y ≤ x.
In a similar way, line 14 yields line 15 and then line 16 by resolution. Also, the second case, x ≤ M ∧ y ≤ M ∧ y = M in line 13, is handled similarly, yielding eventually line 18.
In a last step, both cases (i.e. lines 16 and 18) are joined, using the resolution rule from line 58; to make that rule applicable, the preparatory step 15→16 was needed. Intuitively, line 18 could be read as "in case x ≤ y, the output y is valid (with respect to the original specification)", while line 15 says "in case y ≤ x, the output x is valid"; the step 15→16 established that cases 16 and 18 are complementary.[note 5] Since both lines 16 and 18 come with a program term, a conditional expression results in the program column. Since the goal formula true has been derived, the proof is done, and the program column of the "true" line contains the program.
|
https://en.wikipedia.org/wiki/Program_synthesis
|
A transformation language is a computer language designed to transform some input text in a certain formal language into a modified output text that meets some specific goal.
Program transformation systems such as Stratego/XT, TXL, Tom, DMS, and ASF+SDF all have transformation languages as a major component. The transformation languages for these systems are driven by declarative descriptions of the structure of the input text (typically a grammar), allowing them to be applied to a wide variety of formal languages and documents.
Macro languages are a kind of transformation language that transform a metalanguage into a specific higher-level programming language like Java, C++, or Fortran, or into a lower-level assembly language.
In the model-driven engineering technical space, there are model transformation languages (MTLs), which take as input models conforming to a given metamodel and produce as output models conforming to a different metamodel. An example of such a language is the OMG standard QVT.
There are also low-level languages such as the Lx family[1] implemented by the bootstrapping method. The L0 language may be considered assembler for transformation languages. There is also a high-level graphical language built upon Lx called MOLA.[2]
There are a number of XML transformation languages. These include Tritium, XSLT, XQuery, STX, FXT, XDuce, CDuce, HaXml, XMLambda, and FleXML.
Concepts:
Languages and typical transforms:
|
https://en.wikipedia.org/wiki/Transformation_language
|
Operation reduction for low power is an ASIC program transformation technique used to reduce the power consumed by a specific application. A program transformation is any operation that changes the computational structure, such as the nature and type of computational models, their interconnections, or the sequencing of operations, while keeping the input-output behavior intact. Operation reduction reduces the number of operations needed to perform a task, which reduces the hardware required and in turn the power consumption. For example, in a given application-specific IC, reducing the number of independent additions required automatically reduces the adders required and also the power consumed.
Operation substitution is an operation reduction technique in which certain costly operations are substituted by relatively cheaper ones, reducing power consumption. Some typical examples of operation substitution techniques are given as follows:
A popular example of operation substitution is the butterfly example. Here we need to compute the two values yr = ar·xr − ai·xi and yi = ai·xr + ar·xi, which can be done by computing the terms exactly as written. But using operation substitution we can compute them as yr = ar·(xi + xr) − xi·(ai + ar) and yi = ar·(xi + xr) + xr·(ai − ar), where the term (xi + xr), once computed, can be shared by both computations. The number of multiplications thus drops from 4 to 3, while the number of additions/subtractions rises from 2 to 3 (the coefficient terms (ai + ar) and (ai − ar) are typically precomputed). The critical path in the first method has length 2, whereas in the latter it is 3, so again this is a trade-off between delay and power.
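A sketch of the two versions in Python, following the expressions above (in hardware the coefficient sums would be precomputed constants):

def butterfly_4mul(ar, ai, xr, xi):
    # Direct form: 4 multiplications, 2 additions/subtractions.
    return ar * xr - ai * xi, ai * xr + ar * xi

def butterfly_3mul(ar, ai, xr, xi):
    # Substituted form: 3 multiplications, 3 additions/subtractions,
    # but a longer critical path.
    s = xi + xr                           # shared subexpression
    return ar * s - xi * (ai + ar), ar * s + xr * (ai - ar)

print(butterfly_4mul(2, 3, 5, 7))         # (-11, 29)
print(butterfly_3mul(2, 3, 5, 7))         # (-11, 29), same result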
Based on how frequently the inputs change, the program can be structured so that less switching activity occurs, i.e. if certain inputs change less frequently, they should be made to operate in a single module, so that that module stays relatively passive compared to the others. A + B + C + D can be computed as ((A + B) + C) + D or as (A + B) + (C + D); the first feeds C and D to two separate adders, but if they are relatively slow-changing then feeding them to the same adder is more profitable.
Any synthesis has three parts: allocation (number and type of resources), scheduling (ordering of operations), and binding (building the circuit). The operations can be scheduled in a particular order based on which values in the program activate how many modules. Operations that require more other operations to be completed beforehand should be scheduled later.
Consider the following code snippet:
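# Hypothetical snippet (names illustrative): multiply x by a variable c.
z = x * c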
Let us assume that profiling has shown that the value of c is most likely 2. Since the two cases (c equal to 2 and c not equal to 2) are independent and mutually exclusive, we can modify the code to be:
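if c == 2:
    z = x << 1    # a shift replaces the multiplication in the common case
else:
    z = x * c     # fall back to the general multiplication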
Here the multiplication is replaced by a shift operation, which is triggered in most of the cases and is far cheaper than a multiplication.
|
https://en.wikipedia.org/wiki/Operation_reduction_for_low_power
|
Parsing, syntax analysis, or syntactic analysis is a process of analyzing a string of symbols, either in natural language, computer languages or data structures, conforming to the rules of a formal grammar by breaking it into parts. The term parsing comes from Latin pars (orationis), meaning part (of speech).[1]
The term has slightly different meanings in different branches of linguistics and computer science. Traditional sentence parsing is often performed as a method of understanding the exact meaning of a sentence or word, sometimes with the aid of devices such as sentence diagrams. It usually emphasizes the importance of grammatical divisions such as subject and predicate.
Within computational linguistics the term is used to refer to the formal analysis by a computer of a sentence or other string of words into its constituents, resulting in a parse tree showing their syntactic relation to each other, which may also contain semantic information. Some parsing algorithms generate a parse forest or list of parse trees from a string that is syntactically ambiguous.[2]
The term is also used in psycholinguistics when describing language comprehension. In this context, parsing refers to the way that human beings analyze a sentence or phrase (in spoken language or text) "in terms of grammatical constituents, identifying the parts of speech, syntactic relations, etc."[1] This term is especially common when discussing which linguistic cues help speakers interpret garden-path sentences.
Within computer science, the term is used in the analysis of computer languages, referring to the syntactic analysis of the input code into its component parts in order to facilitate the writing of compilers and interpreters. The term may also be used to describe a split or separation.
In data analysis, the term is often used to refer to a process of extracting desired information from data, e.g., creating a time series signal from an XML document.
The traditional grammatical exercise of parsing, sometimes known as clause analysis, involves breaking down a text into its component parts of speech with an explanation of the form, function, and syntactic relationship of each part.[3] This is determined in large part from study of the language's conjugations and declensions, which can be quite intricate for heavily inflected languages. To parse a phrase such as "man bites dog" involves noting that the singular noun "man" is the subject of the sentence, the verb "bites" is the third person singular of the present tense of the verb "to bite", and the singular noun "dog" is the object of the sentence. Techniques such as sentence diagrams are sometimes used to indicate relation between elements in the sentence.
Parsing was formerly central to the teaching of grammar throughout the English-speaking world, and widely regarded as basic to the use and understanding of written language.
In some machine translation and natural language processing systems, written texts in human languages are parsed by computer programs.[4] Human sentences are not easily parsed by programs, as there is substantial ambiguity in the structure of human language, whose usage is to convey meaning (or semantics) amongst a potentially unlimited range of possibilities, but only some of which are germane to the particular case.[5] So an utterance "Man bites dog" versus "Dog bites man" is definite on one detail but in another language might appear as "Man dog bites" with a reliance on the larger context to distinguish between those two possibilities, if indeed that difference was of concern. It is difficult to prepare formal rules to describe informal behaviour even though it is clear that some rules are being followed.
In order to parse natural language data, researchers must first agree on the grammar to be used. The choice of syntax is affected by both linguistic and computational concerns; for instance some parsing systems use lexical functional grammar, but in general, parsing for grammars of this type is known to be NP-complete. Head-driven phrase structure grammar is another linguistic formalism which has been popular in the parsing community, but other research efforts have focused on less complex formalisms such as the one used in the Penn Treebank. Shallow parsing aims to find only the boundaries of major constituents such as noun phrases. Another popular strategy for avoiding linguistic controversy is dependency grammar parsing.
Most modern parsers are at least partly statistical; that is, they rely on a corpus of training data which has already been annotated (parsed by hand). This approach allows the system to gather information about the frequency with which various constructions occur in specific contexts. (See machine learning.) Approaches which have been used include straightforward PCFGs (probabilistic context-free grammars),[6] maximum entropy,[7] and neural nets.[8] Most of the more successful systems use lexical statistics (that is, they consider the identities of the words involved, as well as their part of speech). However such systems are vulnerable to overfitting and require some kind of smoothing to be effective.
Parsing algorithms for natural language cannot rely on the grammar having 'nice' properties as with manually designed grammars for programming languages. As mentioned earlier some grammar formalisms are very difficult to parse computationally; in general, even if the desired structure is not context-free, some kind of context-free approximation to the grammar is used to perform a first pass. Algorithms which use context-free grammars often rely on some variant of the CYK algorithm, usually with some heuristic to prune away unlikely analyses to save time. (See chart parsing.) However some systems trade speed for accuracy using, e.g., linear-time versions of the shift-reduce algorithm. A somewhat recent development has been parse reranking, in which the parser proposes some large number of analyses, and a more complex system selects the best option. In natural language understanding applications, semantic parsers convert the text into a representation of its meaning.[9]
In psycholinguistics, parsing involves not just the assignment of words to categories (formation of ontological insights), but the evaluation of the meaning of a sentence according to the rules of syntax drawn by inferences made from each word in the sentence (known as connotation). This normally occurs as words are being heard or read.
Neurolinguistics generally understands parsing to be a function of working memory, meaning that parsing is used to keep several parts of one sentence at play in the mind at one time, all readily accessible to be analyzed as needed. Because the human working memory has limitations, so does the function of sentence parsing.[10] This is evidenced by several different types of syntactically complex sentences that pose potential issues for mental parsing of sentences.
The first, and perhaps most well-known, type of sentence that challenges parsing ability is the garden-path sentence. These sentences are designed so that the most common interpretation of the sentence appears grammatically faulty, but upon further inspection, these sentences are grammatically sound. Garden-path sentences are difficult to parse because they contain a phrase or a word with more than one meaning, often their most typical meaning being a different part of speech.[11]For example, in the sentence, "the horse raced past the barn fell", raced is initially interpreted as a past tense verb, but in this sentence, it functions as part of an adjective phrase.[12]Since parsing is used to identify parts of speech, these sentences challenge the parsing ability of the reader.
Another type of sentence that is difficult to parse is an attachment ambiguity, which includes a phrase that could potentially modify different parts of a sentence, and therefore presents a challenge in identifying syntactic relationship (i.e. "The boy saw the lady with the telescope", in which the ambiguous phrase with the telescope could modify the boy saw or the lady.)[11]
A third type of sentence that challenges parsing ability is center embedding, in which phrases are placed in the center of other similarly formed phrases (i.e. "The rat the cat the man hit chased ran into the trap".) Sentences with two or, in the most extreme cases, three center embeddings are challenging for mental parsing, again because of ambiguity of syntactic relationship.[13]
Within neurolinguistics there are multiple theories that aim to describe how parsing takes place in the brain. One such model is a more traditional generative model of sentence processing, which theorizes that within the brain there is a distinct module designed for sentence parsing, which is preceded by access to lexical recognition and retrieval, and then followed by syntactic processing that considers a single syntactic result of the parsing, only returning to revise that syntactic interpretation if a potential problem is detected.[14]The opposing, more contemporary model theorizes that within the mind, the processing of a sentence is not modular, or happening in strict sequence. Rather, it poses that several different syntactic possibilities can be considered at the same time, because lexical access, syntactic processing, and determination of meaning occur in parallel in the brain. In this way these processes are integrated.[15]
Although there is still much to learn about the neurology of parsing, studies have shown evidence that several areas of the brain might play a role in parsing. These include the left anterior temporal pole, the left inferior frontal gyrus, the left superior temporal gyrus, the left superior frontal gyrus, the right posterior cingulate cortex, and the left angular gyrus. Although it has not been absolutely proven, it has been suggested that these different structures might favor either phrase-structure parsing or dependency-structure parsing, meaning different types of parsing could be processed in different ways which have yet to be understood.[16]
Discourse analysis examines ways to analyze language use and semiotic events. Persuasive language may be called rhetoric.
A parser is a software component that takes input data (typically text) and builds a data structure – often some kind of parse tree, abstract syntax tree or other hierarchical structure, giving a structural representation of the input while checking for correct syntax. The parsing may be preceded or followed by other steps, or these may be combined into a single step. The parser is often preceded by a separate lexical analyser, which creates tokens from the sequence of input characters; alternatively, these can be combined in scannerless parsing. Parsers may be programmed by hand or may be automatically or semi-automatically generated by a parser generator. Parsing is complementary to templating, which produces formatted output. These may be applied to different domains, but often appear together, such as the scanf/printf pair, or the input (front end parsing) and output (back end code generation) stages of a compiler.
The input to a parser is typically text in some computer language, but may also be text in a natural language or less structured textual data, in which case generally only certain parts of the text are extracted, rather than a parse tree being constructed. Parsers range from very simple functions such as scanf, to complex programs such as the frontend of a C++ compiler or the HTML parser of a web browser. An important class of simple parsing is done using regular expressions, in which a group of regular expressions defines a regular language and a regular expression engine automatically generates a parser for that language, allowing pattern matching and extraction of text. In other contexts regular expressions are instead used prior to parsing, as the lexing step whose output is then used by the parser.
The use of parsers varies by input. In the case of data languages, a parser is often found as the file reading facility of a program, such as reading in HTML or XML text; these examples are markup languages. In the case of programming languages, a parser is a component of a compiler or interpreter, which parses the source code of a computer programming language to create some form of internal representation; the parser is a key step in the compiler frontend. Programming languages tend to be specified in terms of a deterministic context-free grammar because fast and efficient parsers can be written for them. For compilers, the parsing itself can be done in one pass or multiple passes – see one-pass compiler and multi-pass compiler.
The implied disadvantages of a one-pass compiler can largely be overcome by adding fix-ups, where provision is made for code relocation during the forward pass, and the fix-ups are applied backwards when the current program segment has been recognized as having been completed. An example where such a fix-up mechanism would be useful would be a forward GOTO statement, where the target of the GOTO is unknown until the program segment is completed. In this case, the application of the fix-up would be delayed until the target of the GOTO was recognized. Conversely, a backward GOTO does not require a fix-up, as the location will already be known.
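A sketch of such a fix-up mechanism in Python, as a one-pass assembler for an imaginary bytecode (all names are illustrative):

def assemble(statements):
    code, fixups, labels = [], {}, {}
    for op, arg in statements:
        if op == 'LABEL':
            labels[arg] = len(code)
        elif op == 'GOTO':
            if arg in labels:                  # backward GOTO: target known
                code.append(('JMP', labels[arg]))
            else:                              # forward GOTO: emit a placeholder
                fixups.setdefault(arg, []).append(len(code))
                code.append(('JMP', None))
        else:
            code.append((op, arg))
    for label, sites in fixups.items():        # apply the fix-ups backwards
        for site in sites:
            code[site] = ('JMP', labels[label])
    return code

print(assemble([('GOTO', 'end'), ('PRINT', 'skipped'),
                ('LABEL', 'end'), ('PRINT', 'done')]))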
Context-free grammars are limited in the extent to which they can express all of the requirements of a language. Informally, the reason is that the memory of such a language is limited. The grammar cannot remember the presence of a construct over an arbitrarily long input; this is necessary for a language in which, for example, a name must be declared before it may be referenced. More powerful grammars that can express this constraint, however, cannot be parsed efficiently. Thus, it is a common strategy to create a relaxed parser for a context-free grammar which accepts a superset of the desired language constructs (that is, it accepts some invalid constructs); later, the unwanted constructs can be filtered out at the semantic analysis (contextual analysis) step.
For example, in Python the following is syntactically valid code:
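x = 1
print(x)   # x is initialized before it is used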
The following code, however, is syntactically valid in terms of the context-free grammar, yielding a syntax tree with the same structure as the previous, but violates the semantic rule requiring variables to be initialized before use:
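x = 1
print(y)   # y is used without ever having been initialized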
The following example demonstrates the common case of parsing a computer language with two levels of grammar: lexical and syntactic.
The first stage is the token generation, or lexical analysis, by which the input character stream is split into meaningful symbols defined by a grammar of regular expressions. For example, a calculator program would look at an input such as "12 * (3 + 4)^2" and split it into the tokens 12, *, (, 3, +, 4, ), ^, 2, each of which is a meaningful symbol in the context of an arithmetic expression. The lexer would contain rules to tell it that the characters *, +, ^, ( and ) mark the start of a new token, so meaningless tokens like "12*" or "(3" will not be generated.
The next stage is parsing or syntactic analysis, which is checking that the tokens form an allowable expression. This is usually done with reference to a context-free grammar which recursively defines components that can make up an expression and the order in which they must appear. However, not all rules defining programming languages can be expressed by context-free grammars alone, for example type validity and proper declaration of identifiers. These rules can be formally expressed with attribute grammars.
The final phase is semantic parsing or analysis, which is working out the implications of the expression just validated and taking the appropriate action.[17] In the case of a calculator or interpreter, the action is to evaluate the expression or program; a compiler, on the other hand, would generate some kind of code. Attribute grammars can also be used to define these actions.
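For the calculator example, these stages can be sketched in Python as a recursive-descent parser whose semantic action is direct evaluation (a sketch only; a real implementation would report errors rather than assume well-formed input):

import re

def parse(text):
    tokens = re.findall(r'\d+|[+*^()]', text)   # lexical analysis
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def atom():                       # Atom -> NUMBER | '(' Expr ')'
        if peek() == '(':
            eat(); value = expr(); eat()   # assumes a matching ')'
            return value
        return int(eat())

    def power():                      # Power -> Atom ['^' Power]
        base = atom()
        if peek() == '^':
            eat()
            return base ** power()    # right-associative
        return base

    def term():                       # Term -> Power ('*' Power)*
        value = power()
        while peek() == '*':
            eat(); value *= power()
        return value

    def expr():                       # Expr -> Term ('+' Term)*
        value = term()
        while peek() == '+':
            eat(); value += term()
        return value

    return expr()

print(parse('12 * (3 + 4)^2'))        # 588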
The task of the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar. This can be done in essentially two ways: top-down parsing, which begins with the start symbol and tries to expand grammar rules until the input is matched (searching for leftmost derivations); and bottom-up parsing, which starts with the input and rewrites it step by step toward the start symbol (producing rightmost derivations in reverse, as in LR parsers).
LL parsers and recursive-descent parsers are examples of top-down parsers that cannot accommodate left recursive production rules. Although it has been believed that simple implementations of top-down parsing cannot accommodate direct and indirect left-recursion and may require exponential time and space complexity while parsing ambiguous context-free grammars, more sophisticated algorithms for top-down parsing have been created by Frost, Hafiz, and Callaghan[20][21] which accommodate ambiguity and left recursion in polynomial time and which generate polynomial-size representations of the potentially exponential number of parse trees. Their algorithm is able to produce both left-most and right-most derivations of an input with regard to a given context-free grammar.
An important distinction with regard to parsers is whether a parser generates a leftmost derivation or a rightmost derivation (see context-free grammar). LL parsers will generate a leftmost derivation and LR parsers will generate a rightmost derivation (although usually in reverse).[18]
Some graphical parsing algorithms have been designed for visual programming languages.[22][23] Parsers for visual languages are sometimes based on graph grammars.[24]
Adaptive parsing algorithms have been used to construct "self-extending" natural language user interfaces.[25]
A simple parser implementation reads the entire input file, performs an intermediate computation or translation, and then writes the entire output file, as in in-memory multi-pass compilers.
Alternative parser implementation approaches:
Some of the well known parser development tools include the following:
Lookahead establishes the maximum number of incoming tokens that a parser can use to decide which rule it should apply. Lookahead is especially relevant to LL, LR, and LALR parsers, where it is often explicitly indicated by affixing the lookahead to the algorithm name in parentheses, such as LALR(1).
Most programming languages, the primary target of parsers, are carefully defined in such a way that a parser with limited lookahead, typically one token, can parse them, because parsers with limited lookahead are often more efficient. One important change to this trend came in 1990 when Terence Parr created ANTLR for his Ph.D. thesis, a parser generator for efficient LL(k) parsers, where k is any fixed value.
LR parsers typically have only a few actions after seeing each token. They are shift (add this token to the stack for later reduction), reduce (pop tokens from the stack and form a syntactic construct), end, error (no known rule applies) or conflict (does not know whether to shift or reduce).
Lookahead has two advantages.
Example: Parsing the expression 1 + 2 * 3
Most programming languages (except for a few such as APL and Smalltalk) and algebraic formulas give higher precedence to multiplication than addition, in which case the correct interpretation of the example above is1 + (2 * 3).
Note that Rule 4 above is a semantic rule. It is possible to rewrite the grammar to incorporate this into the syntax. However, not all such rules can be translated into syntax.
Initially Input = [1, +, 2, *, 3]
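A sketch of what a parser without lookahead might do (reducing as soon as a rule's operands are available): it reads 1, +, 2 and immediately reduces them to (1 + 2); it then reads *, 3 and reduces again, producing ((1 + 2) * 3), i.e. 9 rather than the intended 1 + (2 * 3) = 7.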
The parse tree and the code resulting from it are not correct according to the language's semantics.
To correctly parse without lookahead, there are three solutions:
The parse tree generated is correct, and the parse is simply more efficient than that of non-lookahead parsers. This is the strategy followed in LALR parsers.
|
https://en.wikipedia.org/wiki/Parser
|
In natural language processing, semantic role labeling (also called shallow semantic parsing or slot-filling) is the process that assigns labels to words or phrases in a sentence that indicate their semantic role in the sentence, such as that of an agent, goal, or result.
It serves to find the meaning of the sentence. To do this, it detects the arguments associated with the predicate or verb of a sentence and how they are classified into their specific roles. A common example is the sentence "Mary sold the book to John." The agent is "Mary", the predicate is "sold" (or rather, "to sell"), the theme is "the book", and the recipient is "John". Another example is how "the book belongs to me" would need two labels such as "possessed" and "possessor", and "the book was sold to John" would need two other labels such as theme and recipient, despite these two clauses being similar to "subject" and "object" functions.[1]
In 1968, the first idea for semantic role labeling was proposed by Charles J. Fillmore.[2] His proposal led to the FrameNet project, which produced the first major computational lexicon that systematically described many predicates and their corresponding roles. Daniel Gildea (currently at University of Rochester, previously University of California, Berkeley / International Computer Science Institute) and Daniel Jurafsky (currently teaching at Stanford University, but previously working at University of Colorado and UC Berkeley) developed the first automatic semantic role labeling system based on FrameNet. The PropBank corpus added manually created semantic role annotations to the Penn Treebank corpus of Wall Street Journal texts. Many automatic semantic role labeling systems have used PropBank as a training dataset to learn how to annotate new sentences automatically.[3]
Semantic role labeling is mostly used for machines to understand the roles of words within sentences.[4] This benefits applications similar to natural language processing programs that need to understand not just the words of languages, but how they can be used in varying sentences.[5] A better understanding of semantic role labeling could lead to advancements in question answering, information extraction, automatic text summarization, text data mining, and speech recognition.[6]
|
https://en.wikipedia.org/wiki/Semantic_role_labeling
|
In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered to be a uniquely human ability that is not produced with the same grammatical understanding or systematicity in even humans' closest primate relatives.[1]
Throughout the 20th century the dominant model[2] for language processing in the brain was the Geschwind–Lichtheim–Wernicke model, which is based primarily on the analysis of brain-damaged patients. However, due to improvements in intra-cortical electrophysiological recordings of monkey and human brains, as well as non-invasive techniques such as fMRI, PET, MEG and EEG, an auditory pathway consisting of two parts[3][4] has been revealed and a two-streams model has been developed. In accordance with this model, there are two pathways that connect the auditory cortex to the frontal lobe, each pathway accounting for different linguistic roles. The auditory ventral stream pathway is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The auditory dorsal stream in both humans and non-human primates is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory. In accordance with the 'from where to what' model of language evolution,[5][6] the reason the ADS is characterized by such a broad range of functions is that each indicates a different stage in language evolution.
The division of the two streams first occurs in the auditory nerve, where the anterior branch enters the anterior cochlear nucleus in the brainstem, which gives rise to the auditory ventral stream. The posterior branch enters the dorsal and posteroventral cochlear nucleus to give rise to the auditory dorsal stream.[7]: 8
Language processing can also occur in relation to signed languages or written content.
Throughout the 20th century, our knowledge of language processing in the brain was dominated by the Wernicke–Lichtheim–Geschwind model.[8][2][9] The Wernicke–Lichtheim–Geschwind model is primarily based on research conducted on brain-damaged individuals who were reported to possess a variety of language related disorders. In accordance with this model, words are perceived via a specialized word reception center (Wernicke's area) that is located in the left temporoparietal junction. This region then projects to a word production center (Broca's area) that is located in the left inferior frontal gyrus. Because almost all language input was thought to funnel via Wernicke's area and all language output to funnel via Broca's area, it became extremely difficult to identify the basic properties of each region. This lack of clear definition for the contribution of Wernicke's and Broca's regions to human language rendered it extremely difficult to identify their homologues in other primates.[10] With the advent of the fMRI and its application for lesion mappings, however, it was shown that this model is based on incorrect correlations between symptoms and lesions.[11][12][13][14][15][16][17] The refutation of such an influential and dominant model opened the door to new models of language processing in the brain.
In the last two decades, significant advances have occurred in our understanding of the neural processing of sounds in primates. Initially by recording of neural activity in the auditory cortices of monkeys[18][19] and later elaborated via histological staining[20][21][22] and fMRI scanning studies,[23] three auditory fields were identified in the primary auditory cortex, and nine associative auditory fields were shown to surround them (Figure 1, top left). Anatomical tracing and lesion studies further indicated a separation between the anterior and posterior auditory fields, with the anterior primary auditory fields (areas R-RT) projecting to the anterior associative auditory fields (areas AL-RTL), and the posterior primary auditory field (area A1) projecting to the posterior associative auditory fields (areas CL-CM).[20][24][25][26] More recently, evidence has accumulated indicating homology between the human and monkey auditory fields. In humans, histological staining studies revealed two separate auditory fields in the primary auditory region of Heschl's gyrus,[27][28] and by mapping the tonotopic organization of the human primary auditory fields with high-resolution fMRI and comparing it to the tonotopic organization of the monkey primary auditory fields, homology was established between the human anterior primary auditory field and monkey area R (denoted in humans as area hR) and between the human posterior primary auditory field and monkey area A1 (denoted in humans as area hA1).[29][30][31][32][33] Intra-cortical recordings from the human auditory cortex further demonstrated patterns of connectivity similar to those of the auditory cortex of the monkey. Recordings from the surface of the auditory cortex (supra-temporal plane) showed that the anterior Heschl's gyrus (area hR) projects primarily to the middle-anterior superior temporal gyrus (mSTG-aSTG) and that the posterior Heschl's gyrus (area hA1) projects primarily to the posterior superior temporal gyrus (pSTG) and the planum temporale (area PT; Figure 1, top right).[34][35] Consistent with connections from area hR to the aSTG and hA1 to the pSTG is an fMRI study of a patient with impaired sound recognition (auditory agnosia), who showed reduced bilateral activation in areas hR and aSTG but spared activation in the mSTG-pSTG.[36] This connectivity pattern is also corroborated by a study that recorded activation from the lateral surface of the auditory cortex and reported simultaneous non-overlapping activation clusters in the pSTG and mSTG-aSTG while listening to sounds.[37]
Downstream to the auditory cortex, anatomical tracing studies in monkeys delineated projections from the anterior associative auditory fields (areas AL-RTL) to ventral prefrontal and premotor cortices in the inferior frontal gyrus (IFG)[38][39] and the amygdala.[40] Cortical recording and functional imaging studies in macaque monkeys further elaborated on this processing stream by showing that acoustic information flows from the anterior auditory cortex to the temporal pole (TP) and then to the IFG.[41][42][43][44][45][46] This pathway is commonly referred to as the auditory ventral stream (AVS; Figure 1, bottom left, red arrows). In contrast to the anterior auditory fields, tracing studies reported that the posterior auditory fields (areas CL-CM) project primarily to dorsolateral prefrontal and premotor cortices (although some projections do terminate in the IFG).[47][39] Cortical recordings and anatomical tracing studies in monkeys further provided evidence that this processing stream flows from the posterior auditory fields to the frontal lobe via a relay station in the intra-parietal sulcus (IPS).[48][49][50][51][52][53] This pathway is commonly referred to as the auditory dorsal stream (ADS; Figure 1, bottom left, blue arrows). Comparing the white matter pathways involved in communication in humans and monkeys with diffusion tensor imaging techniques indicates similar connections of the AVS and ADS in the two species (monkey,[52] human[54][55][56][57][58][59]). In humans, the pSTG was shown to project to the parietal lobe (sylvian parietal-temporal junction-inferior parietal lobule; Spt-IPL), and from there to dorsolateral prefrontal and premotor cortices (Figure 1, bottom right, blue arrows), and the aSTG was shown to project to the anterior temporal lobe (middle temporal gyrus-temporal pole; MTG-TP) and from there to the IFG (Figure 1, bottom right, red arrows).
The auditory ventral stream (AVS) connects the auditory cortex with the middle temporal gyrus and temporal pole, which in turn connects with the inferior frontal gyrus. This pathway is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The functions of the AVS include the following.
Accumulating converging evidence indicates that the AVS is involved in recognizing auditory objects. At the level of the primary auditory cortex, recordings from monkeys showed a higher percentage of neurons selective for learned melodic sequences in area R than in area A1,[60] and a study in humans demonstrated more selectivity for heard syllables in the anterior Heschl's gyrus (area hR) than in the posterior Heschl's gyrus (area hA1).[61] In downstream associative auditory fields, studies from both monkeys and humans reported that the border between the anterior and posterior auditory fields (Figure 1, area PC in the monkey and mSTG in the human) processes pitch attributes that are necessary for the recognition of auditory objects.[18] The anterior auditory fields of monkeys have also demonstrated selectivity for conspecific vocalizations in intra-cortical recordings[41][19][62] and functional imaging studies.[63][42][43] One fMRI study in monkeys further demonstrated a role of the aSTG in the recognition of individual voices.[42] The role of the human mSTG-aSTG in sound recognition was demonstrated via functional imaging studies that correlated activity in this region with the isolation of auditory objects from background noise,[64][65] and with the recognition of spoken words,[66][67][68][69][70][71][72] voices,[73] melodies,[74][75] environmental sounds,[76][77][78] and non-speech communicative sounds.[79] A meta-analysis of fMRI studies[80] further demonstrated a functional dissociation between the left mSTG and aSTG, with the former processing short speech units (phonemes) and the latter processing longer units (e.g., words, environmental sounds). A study that recorded neural activity directly from the left pSTG and aSTG reported that the aSTG, but not the pSTG, was more active when the patient listened to speech in her native language than to an unfamiliar foreign language.[81] Consistently, electrical stimulation of the aSTG of this patient resulted in impaired speech perception[81] (see also[82][83] for similar results). Intra-cortical recordings from the right and left aSTG further demonstrated that speech is processed laterally to music.[81] An fMRI study of a patient with impaired sound recognition (auditory agnosia) due to brainstem damage also showed reduced activation in areas hR and aSTG of both hemispheres when hearing spoken words and environmental sounds.[36] Recordings from the anterior auditory cortex of monkeys while they maintained learned sounds in working memory,[46] and the debilitating effect of induced lesions to this region on working memory recall,[84][85][86] further implicate the AVS in maintaining perceived auditory objects in working memory. In humans, area mSTG-aSTG was also reported to be active during rehearsal of heard syllables with MEG[87] and fMRI.[88] The latter study further demonstrated that working memory in the AVS is for the acoustic properties of spoken words and that it is independent of working memory in the ADS, which mediates inner speech. Working memory studies in monkeys also suggest that in monkeys, in contrast to humans, the AVS is the dominant working memory store.[89]
In humans, downstream to the aSTG, the MTG and TP are thought to constitute the semantic lexicon, which is a long-term memory repository of audio-visual representations that are interconnected on the basis of semantic relationships (see also the reviews[3][4] discussing this topic). The primary evidence for this role of the MTG-TP is that patients with damage to this region (e.g., patients with semantic dementia or herpes simplex virus encephalitis) are reported[90][91] to have an impaired ability to describe visual and auditory objects and a tendency to commit semantic errors when naming objects (i.e., semantic paraphasia). Semantic paraphasias were also expressed by aphasic patients with left MTG-TP damage[14][92] and were shown to occur in non-aphasic patients after electro-stimulation to this region[93][83] or to the underlying white matter pathway.[94] Two meta-analyses of the fMRI literature also reported that the anterior MTG and TP were consistently active during semantic analysis of speech and text,[66][95] and an intra-cortical recording study correlated neural discharge in the MTG with the comprehension of intelligible sentences.[96]
In addition to extracting meaning from sounds, the MTG-TP region of the AVS appears to have a role in sentence comprehension, possibly by merging concepts together (e.g., merging the concepts 'blue' and 'shirt' to create the concept of a 'blue shirt'). The role of the MTG in extracting meaning from sentences has been demonstrated in functional imaging studies reporting stronger activation in the anterior MTG when proper sentences are contrasted with lists of words, sentences in a foreign or nonsense language, scrambled sentences, sentences with semantic or syntactic violations, and sentence-like sequences of environmental sounds.[97][98][99][100][101][102][103][104] One fMRI study[105] in which participants were instructed to read a story further correlated activity in the anterior MTG with the amount of semantic and syntactic content each sentence contained. An EEG study[106] that contrasted cortical activity while reading sentences with and without syntactic violations in healthy participants and patients with MTG-TP damage concluded that the MTG-TP in both hemispheres participates in the automatic (rule-based) stage of syntactic analysis (ELAN component), and that the left MTG-TP is also involved in a later controlled stage of syntactic analysis (P600 component). Patients with damage to the MTG-TP region have also been reported with impaired sentence comprehension.[14][107][108] See the review[109] for more information on this topic.
In contradiction to the Wernicke–Lichtheim–Geschwind model, which holds that sound recognition occurs solely in the left hemisphere, studies that examined the properties of the right or left hemisphere in isolation via unilateral hemispheric anesthesia (i.e., the WADA procedure[110]) or intra-cortical recordings from each hemisphere[96] provided evidence that sound recognition is processed bilaterally. Moreover, a study that instructed patients with disconnected hemispheres (i.e., split-brain patients) to match spoken words to written words presented to the right or left hemifields reported vocabulary in the right hemisphere that almost matches the left hemisphere in size[111] (the right-hemisphere vocabulary was equivalent to that of a healthy 11-year-old child). This bilateral recognition of sounds is also consistent with the finding that unilateral lesion to the auditory cortex rarely results in a deficit in auditory comprehension (i.e., auditory agnosia), whereas a second lesion to the remaining hemisphere (which could occur years later) does.[112][113] Finally, as mentioned earlier, an fMRI scan of an auditory agnosia patient demonstrated bilateral reduced activation in the anterior auditory cortices,[36] and bilateral electro-stimulation of these regions in both hemispheres resulted in impaired speech recognition.[81]
The auditory dorsal stream connects the auditory cortex with the parietal lobe, which in turn connects with the inferior frontal gyrus. In both humans and non-human primates, the auditory dorsal stream is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory.
Studies of present-day humans have demonstrated a role for the ADS in speech production, particularly in the vocal expression of the names of objects. For instance, in a series of studies in which sub-cortical fibers were directly stimulated,[94] interference in the left pSTG and IPL resulted in errors during object-naming tasks, and interference in the left IFG resulted in speech arrest. Magnetic interference in the pSTG and IFG of healthy participants also produced speech errors and speech arrest, respectively.[114][115] One study has also reported that electrical stimulation of the left IPL caused patients to believe that they had spoken when they had not, and that IFG stimulation caused patients to unconsciously move their lips.[116] The contribution of the ADS to the process of articulating the names of objects could be dependent on the reception of afferents from the semantic lexicon of the AVS, as an intra-cortical recording study reported activation in the posterior MTG prior to activation in the Spt-IPL region when patients named objects in pictures.[117] Intra-cortical electrical stimulation studies also reported that electrical interference to the posterior MTG was correlated with impaired object naming.[118][82]
Additionally, lesion studies of stroke patients have provided evidence supporting the dorsal stream's role in speech production, as described by the dual stream model. Recent research using multivariate lesion/disconnectome symptom mapping has shown that lower scores in speech production tasks are associated with lesions and abnormalities in the left inferior parietal lobe and frontal lobe. These findings from stroke patients further support the involvement of the dorsal stream pathway in speech production, complementing the stimulation and interference studies in healthy participants.[119]
Although sound perception is primarily ascribed to the AVS, the ADS appears associated with several aspects of speech perception. For instance, in a meta-analysis of fMRI studies[120] (Turkeltaub and Coslett, 2010), in which the auditory perception of phonemes was contrasted with closely matching sounds, and the studies were rated for the required level of attention, the authors concluded that attention to phonemes correlates with strong activation in the pSTG-pSTS region. An intra-cortical recording study in which participants were instructed to identify syllables also correlated the hearing of each syllable with its own activation pattern in the pSTG.[121] The involvement of the ADS in both speech perception and production has been further illuminated in several pioneering functional imaging studies that contrasted speech perception with overt or covert speech production.[122][123][124] These studies demonstrated that the pSTS is active only during the perception of speech, whereas area Spt is active during both the perception and production of speech. The authors concluded that the pSTS projects to area Spt, which converts the auditory input into articulatory movements.[125][126] Similar results have been obtained in a study in which participants' temporal and parietal lobes were electrically stimulated. This study reported that electrically stimulating the pSTG region interferes with sentence comprehension and that stimulation of the IPL interferes with the ability to vocalize the names of objects.[83] The authors also reported that stimulation in area Spt and the inferior IPL induced interference during both object-naming and speech-comprehension tasks. The role of the ADS in speech repetition is also congruent with the results of other functional imaging studies that have localized activation during speech repetition tasks to ADS regions.[127][128][129] An intra-cortical recording study that recorded activity throughout most of the temporal, parietal and frontal lobes also reported activation in the pSTG, Spt, IPL and IFG when speech repetition is contrasted with speech perception.[130] Neuropsychological studies have also found that individuals with speech repetition deficits but preserved auditory comprehension (i.e., conduction aphasia) suffer from circumscribed damage to the Spt-IPL area[131][132][133][134][135][136][137] or from damage to the projections that emanate from this area and target the frontal lobe.[138][139][140][141] Studies have also reported a transient speech repetition deficit in patients after direct intra-cortical electrical stimulation to this same region.[11][142][143] Insight into the purpose of speech repetition in the ADS is provided by longitudinal studies of children that correlated the learning of foreign vocabulary with the ability to repeat nonsense words.[144][145]
In addition to repeating and producing speech, the ADS appears to have a role in monitoring the quality of the speech output. Neuroanatomical evidence suggests that the ADS is equipped with descending connections from the IFG to the pSTG that relay information about motor activity (i.e., corollary discharges) in the vocal apparatus (mouth, tongue, vocal folds). This feedback marks the sound perceived during speech production as self-produced and can be used to adjust the vocal apparatus to increase the similarity between the perceived and emitted calls. Evidence for descending connections from the IFG to the pSTG has been offered by a study that electrically stimulated the IFG during surgical operations and reported the spread of activation to the pSTG-pSTS-Spt region.[146] A study[147] that compared the ability of aphasic patients with frontal, parietal or temporal lobe damage to quickly and repeatedly articulate a string of syllables reported that damage to the frontal lobe interfered with the articulation of both identical syllabic strings ("Bababa") and non-identical syllabic strings ("Badaga"), whereas patients with temporal or parietal lobe damage only exhibited impairment when articulating non-identical syllabic strings. Because the patients with temporal and parietal lobe damage were capable of repeating the syllabic string in the first task, their speech perception and production appears to be relatively preserved, and their deficit in the second task is therefore due to impaired monitoring. Demonstrating the role of the descending ADS connections in monitoring emitted calls, an fMRI study instructed participants to speak under normal conditions or when hearing a modified version of their own voice (delayed first formant) and reported that hearing a distorted version of one's own voice results in increased activation in the pSTG.[148] Further demonstrating that the ADS facilitates motor feedback during mimicry is an intra-cortical recording study that contrasted speech perception and repetition.[130] The authors reported that, in addition to activation in the IPL and IFG, speech repetition is characterized by stronger activation in the pSTG than during speech perception.
Consistent with the role of the ADS in discriminating phonemes,[120][149] studies have ascribed the integration of phonemes and their corresponding lip movements (i.e., visemes) to the pSTS of the ADS. For example, an fMRI study[150] has correlated activation in the pSTS with the McGurk illusion (in which hearing the syllable "ba" while seeing the viseme "ga" results in the perception of the syllable "da"). Another study has found that using magnetic stimulation to interfere with processing in this area further disrupts the McGurk illusion.[151] The association of the pSTS with the audio-visual integration of speech has also been demonstrated in a study that presented participants with pictures of faces and spoken words of varying quality. The study reported that the pSTS selects for the combined increase of the clarity of faces and spoken words.[152] Corroborating evidence has been provided by an fMRI study[153] that contrasted the perception of audio-visual speech with audio-visual non-speech (pictures and sounds of tools). This study reported the detection of speech-selective compartments in the pSTS. In addition, an fMRI study[154] that contrasted congruent audio-visual speech with incongruent speech (pictures of still faces) reported pSTS activation. A review[155] presents additional converging evidence regarding the role of the pSTS and ADS in phoneme-viseme integration.
Empirical research has demonstrated that visual lip movements enhance speech processing along the auditory dorsal stream, particularly in noisy conditions. Recent studies[156] have found that dorsal stream regions, including frontal speech motor areas and the supramarginal gyrus, show improved neural representations of speech sounds when visual lip movements are available.
A growing body of evidence indicates that humans, in addition to having a long-term store for word meanings located in the MTG-TP of the AVS (i.e., the semantic lexicon), also have a long-term store for the names of objects located in the Spt-IPL region of the ADS (i.e., the phonological lexicon). For example, a study[157][158] examining patients with damage to the AVS (MTG damage) or damage to the ADS (IPL damage) reported that MTG damage results in individuals incorrectly identifying objects (e.g., calling a "goat" a "sheep," an example of semantic paraphasia). Conversely, IPL damage results in individuals correctly identifying the object but incorrectly pronouncing its name (e.g., saying "gof" instead of "goat," an example of phonemic paraphasia). Semantic paraphasia errors have also been reported in patients receiving intra-cortical electrical stimulation of the AVS (MTG), and phonemic paraphasia errors have been reported in patients whose ADS (pSTG, Spt, and IPL) received intra-cortical electrical stimulation.[83][159][94] Further supporting the role of the ADS in object naming is an MEG study that localized activity in the IPL during the learning and during the recall of object names.[160] A study that induced magnetic interference in participants' IPL while they answered questions about an object reported that the participants were capable of answering questions regarding the object's characteristics or perceptual attributes, but were impaired when asked whether the word contained two or three syllables.[161] An MEG study has also correlated recovery from anomia (a disorder characterized by an impaired ability to name objects) with changes in IPL activation.[162] Further supporting the role of the IPL in encoding the sounds of words are studies reporting that, compared to monolinguals, bilinguals have greater cortical density in the IPL but not the MTG.[163][164] Because evidence shows that, in bilinguals, different phonological representations of the same word share the same semantic representation,[165] this increase in density in the IPL supports the existence of the phonological lexicon: the semantic lexicon of bilinguals is expected to be similar in size to the semantic lexicon of monolinguals, whereas their phonological lexicon should be twice the size. Consistent with this finding, cortical density in the IPL of monolinguals also correlates with vocabulary size.[166][167] Notably, the functional dissociation of the AVS and ADS in object-naming tasks is supported by cumulative evidence from reading research showing that semantic errors are correlated with MTG impairment and phonemic errors with IPL impairment. Based on these associations, the semantic analysis of text has been linked to the inferior temporal gyrus and MTG, and the phonological analysis of text has been linked to the pSTG-Spt-IPL.[168][169][170]
Working memory is often treated as the temporary activation of the representations stored in long-term memory that are used for speech (phonological representations). This sharing of resources between working memory and speech is evident from the finding[171][172] that speaking during rehearsal results in a significant reduction in the number of items that can be recalled from working memory (articulatory suppression). The involvement of the phonological lexicon in working memory is also evidenced by the tendency of individuals to make more errors when recalling words from a recently learned list of phonologically similar words than from a list of phonologically dissimilar words (the phonological similarity effect).[171] Studies have also found that speech errors committed during reading are remarkably similar to speech errors made during the recall of recently learned, phonologically similar words from working memory.[173] Patients with IPL damage have also been observed to exhibit both speech production errors and impaired working memory.[174][175][176][177] Finally, the view that verbal working memory is the result of temporarily activating phonological representations in the ADS is compatible with recent models describing working memory as the combination of maintaining representations in the mechanism of attention in parallel to temporarily activating representations in long-term memory.[172][178][179][180] It has been argued that the role of the ADS in the rehearsal of lists of words is the reason this pathway is active during sentence comprehension.[181] For a review of the role of the ADS in working memory, see [182].
Studies have shown that performance on phonological working memory tasks correlates with properties of the left dorsal branch of the arcuate fasciculus (AF), which connects posterior temporal language regions with attention-regulating areas in the middle frontal gyrus. The arcuate fasciculus is a white matter pathway in the brain which contains two branches: a ventral branch connecting Wernicke's area with Broca's area and a dorsal branch connecting the posterior temporal region with the middle frontal gyrus. This dorsal branch appears to be particularly important for phonological working memory processes.[183]
Language-processing research informs theories of language. The primary theoretical question is whether linguistic structures follow from brain structures or vice versa. Externalist models, such as Ferdinand de Saussure's structuralism, argue that language as a social phenomenon is external to the brain. The individual receives the linguistic system from the outside, and the given language shapes the individual's brain.[184]
This idea is opposed by internalist models including Noam Chomsky's transformational generative grammar, George Lakoff's Cognitive Linguistics, and John A. Hawkins's efficiency hypothesis. According to Chomsky, language is acquired from an innate brain structure independently of meaning.[185] Lakoff argues that language emerges from the sensory systems.[186] Hawkins hypothesizes that cross-linguistically prevalent patterns are based on the brain's natural processing preferences.[187]
Additionally, models inspired by Richard Dawkins's memetics, including Construction Grammar and Usage-Based Linguistics, advocate a two-way model, arguing that the brain shapes language and language shapes the brain.[188][189]
Evidence from neuroimaging studies points towards the externalist position. ERP studies suggest that language processing is based on the interaction of syntax and semantics, and the research does not support innate grammatical structures.[190][191] MRI studies suggest that the structural characteristics of the child's first language shape the processing connectome of the brain.[192] Processing research has failed to find support for the inverse idea that syntactic structures reflect the brain's natural processing preferences cross-linguistically.[193]
The auditory dorsal stream also has non-language-related functions, such as sound localization[194][195][196][197][198] and guidance of eye movements.[199][200] Recent studies also indicate a role of the ADS in the localization of family/tribe members, as a study[201] that recorded from the cortex of an epileptic patient reported that the pSTG, but not the aSTG, is selective for the presence of new speakers. An fMRI study[202] of fetuses in their third trimester also demonstrated that area Spt is more selective for female speech than for pure tones, and that a sub-section of Spt is selective for the speech of the mother in contrast to unfamiliar female voices.
It is presently unknown why so many functions are ascribed to the human ADS. An attempt to unify these functions under a single framework was made in the 'From where to what' model of language evolution.[203][204] In accordance with this model, each function of the ADS indicates a different intermediate phase in the evolution of language. The roles of sound localization and of integrating sound location with voices and auditory objects are interpreted as evidence that the origin of speech is the exchange of contact calls (calls used to report location in cases of separation) between mothers and offspring. The role of the ADS in the perception and production of intonations is interpreted as evidence that speech began by modifying the contact calls with intonations, possibly for distinguishing alarm contact calls from safe contact calls. The role of the ADS in encoding the names of objects (phonological long-term memory) is interpreted as evidence of a gradual transition from modifying calls with intonations to complete vocal control. The role of the ADS in the integration of lip movements with phonemes and in speech repetition is interpreted as evidence that spoken words were learned by infants mimicking their parents' vocalizations, initially by imitating their lip movements. The role of the ADS in phonological working memory is interpreted as evidence that the words learned through mimicry remained active in the ADS even when not spoken. This resulted in individuals capable of rehearsing a list of vocalizations, which enabled the production of words with several syllables. Further developments in the ADS enabled the rehearsal of lists of words, which provided the infrastructure for communicating with sentences.
Neuroscientific research has provided a scientific understanding of how sign language is processed in the brain. There are over 135 discrete sign languages around the world, making use of different accents formed by separate areas of a country.[205]
By resorting to lesion analyses and neuroimaging, neuroscientists have discovered that, whether spoken or signed, human brains process language in a similar manner with respect to which areas of the brain are used.[205] Lesion analyses are used to examine the consequences of damage to specific brain regions involved in language, while neuroimaging explores the regions that are engaged in the processing of language.[205]
It was previously hypothesized that damage to Broca's area or Wernicke's area does not affect the perception of sign language; however, this is not the case. Studies have shown that damage to these areas produces results in sign language similar to those in spoken language, with sign errors present and/or repeated.[205] Both types of language are affected by damage to the left hemisphere of the brain rather than the right hemisphere, which is usually associated with the arts.
There are clear patterns for utilizing and processing language: in sign language, Broca's area is activated during production, while processing sign language employs Wernicke's area, similar to spoken language.[205]
There have been other hypotheses about the lateralization of the two hemispheres. Specifically, the right hemisphere was thought to contribute to the overall communication of a language globally, whereas the left hemisphere would be dominant in generating the language locally.[206] Through research on aphasias, signers with right-hemisphere damage (RHD) were found to have problems maintaining the spatial portion of their signs, confusing similar signs at the different locations necessary to communicate with another person properly.[206] Signers with left-hemisphere damage (LHD), on the other hand, showed results similar to those of hearing patients. Furthermore, other studies have emphasized that sign language is represented bilaterally, but further research is needed to reach a conclusion.[206]
There is a comparatively small body of research on the neurology of reading and writing.[207] Most of the studies performed deal with reading rather than writing or spelling, and the majority of both kinds focus solely on the English language.[208] English orthography is less transparent than that of other languages using a Latin script.[207] Another difficulty is that some studies focus on the spelling of English words and omit the few logographic characters found in the script.[207]
In terms of spelling, English words can be divided into three categories: regular, irregular, and "novel words" or "nonwords". Regular words are those in which there is a regular, one-to-one correspondence between grapheme and phoneme in spelling. Irregular words are those in which no such correspondence exists. Nonwords are those that exhibit the expected orthography of regular words but do not carry meaning, such as nonce words and onomatopoeia.[207]
An issue in the cognitive and neurological study of reading and spelling in English is whether a single-route or dual-route model best describes how literate speakers are able to read and write all three categories of English words according to accepted standards of orthographic correctness. Single-route models posit that lexical memory is used to store all spellings of words for retrieval in a single process. Dual-route models posit that lexical memory is employed to process irregular and high-frequency regular words, while low-frequency regular words and nonwords are processed using a sub-lexical set of phonological rules.[207]
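The logic of the dual-route account can be illustrated with a short sketch. The following Python toy, with an invented mini-lexicon and invented grapheme-phoneme rules, shows why only the lexical route handles irregular words correctly while the sub-lexical route handles nonwords; it illustrates the model's logic under these simplifying assumptions and is not a real implementation such as Coltheart's DRC model.

```python
# A minimal sketch of dual-route reading aloud. The lexical route is a
# dictionary of stored pronunciations; the sub-lexical route assembles a
# pronunciation from grapheme-phoneme rules. All entries are toy examples.

LEXICON = {
    "yacht": "jɒt",   # irregular: only the lexical route gets this right
    "have":  "hæv",
}

GP_RULES = [  # longest-match grapheme -> phoneme rules, applied left to right
    ("ch", "tʃ"), ("sh", "ʃ"), ("a", "æ"), ("e", "ɛ"), ("i", "ɪ"),
    ("o", "ɒ"), ("u", "ʌ"), ("b", "b"), ("c", "k"), ("d", "d"),
    ("f", "f"), ("g", "g"), ("h", "h"), ("t", "t"), ("n", "n"),
    ("p", "p"), ("s", "s"), ("y", "j"), ("v", "v"), ("m", "m"), ("l", "l"),
]

def sublexical_route(word):
    """Assemble a pronunciation from grapheme-phoneme rules (regularizes)."""
    out, i = [], 0
    while i < len(word):
        for graph, phon in GP_RULES:
            if word.startswith(graph, i):
                out.append(phon)
                i += len(graph)
                break
        else:
            i += 1  # skip letters with no rule
    return "".join(out)

def read_aloud(word):
    """Lexical lookup wins when available; otherwise assemble sub-lexically."""
    return LEXICON.get(word) or sublexical_route(word)

print(read_aloud("yacht"))  # jɒt  (lexical route; the rules would misread it)
print(read_aloud("blick"))  # blɪk (nonword: sub-lexical route)
```

In this toy, frequency effects are ignored; the dual-route prediction that low-frequency regular words also travel the sub-lexical route would require adding a frequency-gated lookup.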
The single-route model for reading has found support in computer modelling studies, which suggest that readers identify words by their orthographic similarities to phonologically alike words.[207]However, cognitive and lesion studies lean towards the dual-route model. Cognitive spelling studies on children and adults suggest that spellers employ phonological rules in spelling regular words and nonwords, while lexical memory is accessed to spell irregular words and high-frequency words of all types.[207]Similarly, lesion studies indicate that lexical memory is used to store irregular words and certain regular words, while phonological rules are used to spell nonwords.[207]
More recently, neuroimaging studies using positron emission tomography and fMRI have suggested a balanced model in which the reading of all word types begins in the visual word form area, but subsequently branches off into different routes depending upon whether or not access to lexical memory or semantic information is needed (which would be expected with irregular words under a dual-route model).[207] A 2007 fMRI study found that subjects asked to produce regular words in a spelling task exhibited greater activation in the left posterior STG, an area used for phonological processing, while the spelling of irregular words produced greater activation of areas used for lexical memory and semantic processing, such as the left IFG and left SMG and both hemispheres of the MTG.[207] Spelling nonwords was found to access members of both pathways, such as the left STG and bilateral MTG and ITG.[207] Significantly, it was found that spelling induces activation in areas such as the left fusiform gyrus and left SMG that are also important in reading, suggesting that a similar pathway is used for both reading and writing.[207]
Far less information exists on the cognition and neurology of non-alphabetic and non-English scripts. Every language has a morphological and a phonological component, either of which can be recorded by a writing system. Scripts recording words and morphemes are considered logographic, while those recording phonological segments, such as syllabaries and alphabets, are phonographic.[208] Most systems combine the two and have both logographic and phonographic characters.[208]
In terms of complexity, writing systems can be characterized as "transparent" or "opaque" and as "shallow" or "deep". A "transparent" system exhibits an obvious correspondence between grapheme and sound, while in an "opaque" system this relationship is less obvious. The terms "shallow" and "deep" refer to the extent to which a system's orthography represents morphemes as opposed to phonological segments.[208] Systems that record larger morphosyntactic or phonological segments, such as logographic systems and syllabaries, put greater demand on the memory of users.[208] It would thus be expected that an opaque or deep writing system would put greater demand on areas of the brain used for lexical memory than would a system with a transparent or shallow orthography.
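To give the notion of transparency a concrete reading, one simple (hypothetical) measure is the average number of distinct phonemes each grapheme maps to across a corpus of aligned spellings and pronunciations; a perfectly transparent orthography scores 1.0, and ambiguity pushes the score higher. The alignments below are invented toy data, not a real phonological analysis.

```python
# A toy transparency measure: count how many distinct phonemes each
# grapheme maps to across aligned (grapheme, phoneme) pairs.

from collections import defaultdict

# Each word is a list of (grapheme, phoneme) alignments; purely illustrative.
ALIGNED_WORDS = [
    [("c", "k"), ("a", "æ"), ("t", "t")],             # cat
    [("c", "s"), ("i", "ɪ"), ("t", "t"), ("y", "i")], # city: 'c' now -> /s/
    [("g", "g"), ("o", "ɒ"), ("t", "t")],             # got
]

def transparency_score(words):
    mappings = defaultdict(set)
    for word in words:
        for grapheme, phoneme in word:
            mappings[grapheme].add(phoneme)
    return sum(len(p) for p in mappings.values()) / len(mappings)

print(round(transparency_score(ALIGNED_WORDS), 2))  # > 1.0: 'c' is ambiguous
```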
|
https://en.wikipedia.org/wiki/Language_processing
|
Neurolinguistics is the study of neural mechanisms in the human brain that control the comprehension, production, and acquisition of language. As an interdisciplinary field, neurolinguistics draws methods and theories from fields such as neuroscience, linguistics, cognitive science, communication disorders and neuropsychology. Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models in psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the processes that theoretical and psycholinguistics propose are necessary in producing and comprehending language. Neurolinguists study the physiological mechanisms by which the brain processes information related to language, and evaluate linguistic and psycholinguistic theories using aphasiology, brain imaging, electrophysiology, and computer modeling.[1]
Neurolinguistics is historically rooted in the development in the 19th century of aphasiology, the study of linguistic deficits (aphasias) occurring as the result of brain damage.[2] Aphasiology attempts to correlate structure to function by analyzing the effect of brain injuries on language processing.[3] One of the first people to draw a connection between a particular brain area and language processing was Paul Broca,[2] a French surgeon who conducted autopsies on numerous individuals who had speaking deficiencies, and found that most of them had brain damage (or lesions) on the left frontal lobe, in an area now known as Broca's area. Phrenologists had made the claim in the early 19th century that different brain regions carried out different functions and that language was mostly controlled by the frontal regions of the brain, but Broca's research was possibly the first to offer empirical evidence for such a relationship,[4][5] and has been described as "epoch-making"[6] and "pivotal"[4] to the fields of neurolinguistics and cognitive science. Later, Carl Wernicke, after whom Wernicke's area is named, proposed that different areas of the brain were specialized for different linguistic tasks, with Broca's area handling the motor production of speech, and Wernicke's area handling auditory speech comprehension.[2][3] The work of Broca and Wernicke established the field of aphasiology and the idea that language can be studied through examining physical characteristics of the brain.[5] Early work in aphasiology also benefited from the early twentieth-century work of Korbinian Brodmann, who "mapped" the surface of the brain, dividing it up into numbered areas based on each area's cytoarchitecture (cell structure) and function;[7] these areas, known as Brodmann areas, are still widely used in neuroscience today.[8]
The coining of the termneurolinguisticsin the late 1940s and 1950s is attributed to Edith Crowell Trager, Henri Hecaen and Alexandr Luria. Luria's 1976 book "Basic Problems of Neurolinguistics" is likely the first book with "neurolinguistics" in the title. Harry Whitaker popularized neurolinguistics in the United States in the 1970s, founding the journal "Brain and Language" in 1974.[9]
Although aphasiology is the historical core of neurolinguistics, in recent years the field has broadened considerably, thanks in part to the emergence of new brain imaging technologies (such as PET and fMRI) and time-sensitive electrophysiological techniques (EEG and MEG), which can highlight patterns of brain activation as people engage in various language tasks.[2][10][11] Electrophysiological techniques, in particular, emerged as a viable method for the study of language in 1980 with the discovery of the N400, a brain response shown to be sensitive to semantic issues in language comprehension.[12][13] The N400 was the first language-relevant event-related potential to be identified, and since its discovery EEG and MEG have become increasingly widely used for conducting language research.[14]
Neurolinguistics is closely related to the field of psycholinguistics, which seeks to elucidate the cognitive mechanisms of language by employing the traditional techniques of experimental psychology. Today, psycholinguistic and neurolinguistic theories often inform one another, and there is much collaboration between the two fields.[13][15]
Much work in neurolinguistics involves testing and evaluating theories put forth by psycholinguists and theoretical linguists. In general, theoretical linguists propose models to explain the structure of language and how language information is organized, psycholinguists propose models and algorithms to explain how language information is processed in the mind, and neurolinguists analyze brain activity to infer how biological structures (populations and networks of neurons) carry out those psycholinguistic processing algorithms.[16] For example, experiments in sentence processing have used the ELAN, N400, and P600 brain responses to examine how physiological brain responses reflect the different predictions of sentence processing models put forth by psycholinguists, such as Janet Fodor and Lyn Frazier's "serial" model[17] and Theo Vosse and Gerard Kempen's "unification model".[15] Neurolinguists can also make new predictions about the structure and organization of language based on insights about the physiology of the brain, by "generalizing from the knowledge of neurological structures to language structure".[18]
Neurolinguistics research is carried out in all the major subfields of linguistics.
Neurolinguistics research investigates several topics, including where language information is processed, how language processing unfolds over time, how brain structures are related to language acquisition and learning, and how neurophysiology can contribute to speech and language pathology.
Much work in neurolinguistics has, like Broca's and Wernicke's early studies, investigated the locations of specific language "modules" within the brain. Research questions include what course language information follows through the brain as it is processed,[19] whether or not particular areas specialize in processing particular sorts of information,[20] how different brain regions interact with one another in language processing,[21] and how the locations of brain activation differ when a subject is producing or perceiving a language other than his or her first language.[22][23][24]
Another area of the neurolinguistics literature involves the use of electrophysiological techniques to analyze the rapid processing of language in time.[2] The temporal ordering of specific patterns of brain activity may reflect discrete computational processes that the brain undergoes during language processing; for example, one neurolinguistic theory of sentence parsing proposes that three brain responses (the ELAN, N400, and P600) are products of three different steps in syntactic and semantic processing.[25]
Another topic is the relationship between brain structures and language acquisition.[26] Research in first language acquisition has already established that infants from all linguistic environments go through similar and predictable stages (such as babbling), and some neurolinguistics research attempts to find correlations between stages of language development and stages of brain development,[27] while other research investigates the physical changes (known as neuroplasticity) that the brain undergoes during second language acquisition, when adults learn a new language.[28] Neuroplasticity is observed during both second language acquisition and language learning; such language exposure has been associated with increases in gray and white matter in children, young adults, and the elderly.[29]
Neurolinguistic techniques are also used to study disorders and breakdowns in language, such as aphasia and dyslexia, and how they relate to physical characteristics of the brain.[23][27]
Since one of the focuses of this field is the testing of linguistic and psycholinguistic models, the technology used for experiments is highly relevant to the study of neurolinguistics. Modern brain imaging techniques have contributed greatly to a growing understanding of the anatomical organization of linguistic functions.[2][23] Brain imaging methods used in neurolinguistics may be classified into hemodynamic methods, electrophysiological methods, and methods that stimulate the cortex directly.
Hemodynamic techniques take advantage of the fact that when an area of the brain works at a task, blood is sent to supply that area with oxygen (in what is known as the Blood Oxygen Level-Dependent, or BOLD, response).[30] Such techniques include PET and fMRI. These techniques provide high spatial resolution, allowing researchers to pinpoint the location of activity within the brain;[2] temporal resolution (or information about the timing of brain activity), on the other hand, is poor, since the BOLD response happens much more slowly than language processing.[11][31] In addition to demonstrating which parts of the brain may subserve specific language tasks or computations,[20][25] hemodynamic methods have also been used to demonstrate how the structure of the brain's language architecture and the distribution of language-related activation may change over time, as a function of linguistic exposure.[22][28]
In addition to PET and fMRI, which show which areas of the brain are activated by certain tasks, researchers also use diffusion tensor imaging (DTI), which shows the neural pathways that connect different brain areas,[32] thus providing insight into how different areas interact. Functional near-infrared spectroscopy (fNIRS) is another hemodynamic method used in language tasks.[33]
Electrophysiological techniques take advantage of the fact that when a group of neurons in the brain fire together, they create an electric dipole or current. The technique of EEG measures this electric current using sensors on the scalp, while MEG measures the magnetic fields that are generated by these currents.[34] In addition to these non-invasive methods, electrocorticography has also been used to study language processing. These techniques are able to measure brain activity from one millisecond to the next, providing excellent temporal resolution, which is important in studying processes that take place as quickly as language comprehension and production.[34] On the other hand, the location of brain activity can be difficult to identify in EEG;[31][35] consequently, this technique is used primarily to investigate how language processes are carried out, rather than where. Research using EEG and MEG generally focuses on event-related potentials (ERPs),[31] which are distinct brain responses (generally realized as negative or positive peaks on a graph of neural activity) elicited in response to a particular stimulus. Studies using ERP may focus on each ERP's latency (how long after the stimulus the ERP begins or peaks), amplitude (how high or low the peak is), or topography (where on the scalp the ERP response is picked up by sensors).[36] Some important and common ERP components include the N400 (a negativity occurring at a latency of about 400 milliseconds),[31] the mismatch negativity,[37] the early left anterior negativity (a negativity occurring at an early latency with a front-left topography),[38] the P600,[14][39] and the lateralized readiness potential.[40]
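Since an ERP is essentially the stimulus-locked average of many EEG epochs, the extraction step can be illustrated in a few lines. The Python sketch below uses simulated data: the sampling rate, trial count, and component shape are invented for illustration, and real pipelines (for example with the MNE-Python library) add filtering, artifact rejection, and baseline correction.

```python
# A minimal sketch of ERP extraction: averaging epochs time-locked to
# stimulus onset cancels random activity and leaves the evoked response.

import numpy as np

rng = np.random.default_rng(0)
fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)          # epoch from -200 ms to 800 ms

# Simulate 100 single trials: noise plus a negative deflection near 400 ms,
# a rough stand-in for an N400-like component.
n400 = -4e-6 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))  # volts
trials = n400 + rng.normal(0, 10e-6, size=(100, t.size))    # noisy epochs

erp = trials.mean(axis=0)                 # averaging reveals the component
peak_latency = t[np.argmin(erp)]          # latency of the most negative point
print(f"peak latency ~ {peak_latency * 1000:.0f} ms")       # near 400 ms
```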
Neurolinguists employ a variety of experimental techniques in order to use brain imaging to draw conclusions about how language is represented and processed in the brain. These techniques include the subtraction paradigm, mismatch design, violation-based studies, various forms of priming, and direct stimulation of the brain.
Many language studies, particularly in fMRI, use the subtraction paradigm,[41] in which brain activation in a task thought to involve some aspect of language processing is compared against activation in a baseline task thought to involve similar non-linguistic processes but not the linguistic process of interest. For example, activations while participants read words may be compared to baseline activations while participants read strings of random letters (in an attempt to isolate activation related to lexical processing, the processing of real words), or activations while participants read syntactically complex sentences may be compared to baseline activations while participants read simpler sentences.
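The arithmetic of the subtraction logic is simple enough to show directly. The Python sketch below runs it on simulated voxel data (the volume size, scan counts, effect size, and threshold are all invented for illustration); real fMRI analyses fit a general linear model per voxel rather than performing a raw subtraction.

```python
# A minimal sketch of the subtraction paradigm: mean baseline activation is
# subtracted from mean task activation, leaving a contrast map.

import numpy as np

rng = np.random.default_rng(1)
shape = (8, 8, 8)                                  # a tiny toy brain volume

baseline = rng.normal(100, 5, size=(20, *shape))   # 20 baseline scans
task = rng.normal(100, 5, size=(20, *shape))       # 20 word-reading scans
task[:, 2:4, 2:4, 2:4] += 8                        # simulated "language" region

contrast = task.mean(axis=0) - baseline.mean(axis=0)
active = np.argwhere(contrast > 5)                 # crude fixed threshold
print(f"{len(active)} voxels survive the threshold")
```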
The mismatch negativity (MMN) is a rigorously documented ERP component frequently used in neurolinguistic experiments.[37][42] It is an electrophysiological response that occurs in the brain when a subject hears a "deviant" stimulus in a set of perceptually identical "standards" (as in the sequence s s s s s s d d s s s s s s d s s s s s d).[43][44] Since the MMN is elicited only in response to a rare "oddball" stimulus in a set of other stimuli that are perceived to be the same, it has been used to test how speakers perceive sounds and organize stimuli categorically.[45][46] For example, a landmark study by Colin Phillips and colleagues used the mismatch negativity as evidence that subjects, when presented with a series of speech sounds with varying acoustic parameters, perceived all the sounds as either /t/ or /d/ in spite of the acoustic variability, suggesting that the human brain has representations of abstract phonemes; in other words, the subjects were "hearing" not the specific acoustic features, but only the abstract phonemes.[43] In addition, the mismatch negativity has been used to study syntactic processing and the recognition of word category.[37][42][47]
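Generating the stimulus stream for such an oddball experiment is itself a small exercise. The Python sketch below produces a standard/deviant sequence; the deviant probability, trial count, and no-consecutive-deviants constraint are illustrative assumptions rather than the design of any particular study.

```python
# A minimal sketch of an oddball sequence of the kind used to elicit the
# MMN: mostly standards ("s") with occasional rare deviants ("d").

import random

def oddball_sequence(n_trials=40, p_deviant=0.15, seed=42):
    random.seed(seed)
    seq, last = [], "s"
    for _ in range(n_trials):
        if last != "d" and random.random() < p_deviant:
            seq.append("d")                 # deviant, e.g. /d/
        else:
            seq.append("s")                 # standard, e.g. /t/
        last = seq[-1]
    return "".join(seq)

print(oddball_sequence())  # e.g. "ssssdsss..." with isolated deviants
```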
Many studies in neurolinguistics take advantage of anomalies or violations of syntactic or semantic rules in experimental stimuli, analyzing the brain responses elicited when a subject encounters these violations. For example, sentences beginning with phrases such as *the garden was on the worked,[48] which violates an English phrase structure rule, often elicit a brain response called the early left anterior negativity (ELAN).[38] Violation techniques have been in use since at least 1980,[38] when Kutas and Hillyard first reported ERP evidence that semantic violations elicited an N400 effect.[49] Using similar methods, in 1992, Lee Osterhout first reported the P600 response to syntactic anomalies.[50] Violation designs have also been used for hemodynamic studies (fMRI and PET): Embick and colleagues, for example, used grammatical and spelling violations to investigate the location of syntactic processing in the brain using fMRI.[20] Another common use of violation designs is to combine two kinds of violations in the same sentence and thus make predictions about how different language processes interact with one another; this type of crossing-violation study has been used extensively to investigate how syntactic and semantic processes interact while people read or hear sentences.[51][52]
In psycholinguistics and neurolinguistics, priming refers to the phenomenon whereby a subject can recognize a word more quickly if he or she has recently been presented with a word that is similar in meaning[53] or morphological makeup (i.e., composed of similar parts).[54] If a subject is presented with a "prime" word such as doctor and then a "target" word such as nurse, and the subject has a faster-than-usual response time to nurse, then the experimenter may assume that the word nurse had already been accessed in the brain when the word doctor was accessed.[55] Priming is used to investigate a wide variety of questions about how words are stored and retrieved in the brain[54][56] and how structurally complex sentences are processed.[57]
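Operationally, a priming effect is just a difference in mean reaction times between related and unrelated prime conditions. The Python sketch below computes that difference on simulated reaction times; the means, spread, and built-in 40 ms facilitation are invented for illustration.

```python
# A minimal sketch of measuring a priming effect: compare lexical-decision
# reaction times after related versus unrelated primes (simulated data).

import numpy as np

rng = np.random.default_rng(7)
related = rng.normal(560, 50, size=30)      # e.g. doctor -> NURSE
unrelated = rng.normal(600, 50, size=30)    # e.g. table  -> NURSE

priming_effect = unrelated.mean() - related.mean()
print(f"priming effect: {priming_effect:.0f} ms faster after a related prime")
```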
Transcranial magnetic stimulation (TMS), a relatively new noninvasive[58] technique for studying brain activity, uses powerful magnetic fields that are applied to the brain from outside the head.[59] It is a method of exciting or interrupting brain activity in a specific and controlled location, and thus is able to imitate aphasic symptoms while giving the researcher more control over exactly which parts of the brain will be examined.[59] As such, it is a less invasive alternative to direct cortical stimulation, which can be used for similar types of research but requires that the subject's brain be surgically exposed, and is thus only used on individuals who are already undergoing a major brain operation (such as individuals undergoing surgery for epilepsy).[60] The logic behind TMS and direct cortical stimulation is similar to the logic behind aphasiology: if a particular language function is impaired when a specific region of the brain is knocked out, then that region must be somehow implicated in that language function. Few neurolinguistic studies to date have used TMS;[2] direct cortical stimulation and cortical recording (recording brain activity using electrodes placed directly on the brain) have been used with macaque monkeys to make predictions about the behavior of human brains.[61]
In many neurolinguistics experiments, subjects do not simply sit and listen to or watch stimuli, but also are instructed to perform some sort of task in response to the stimuli.[62] Subjects perform these tasks while recordings (electrophysiological or hemodynamic) are being taken, usually in order to ensure that they are paying attention to the stimuli.[63] At least one study has suggested that the task the subject does has an effect on the brain responses and the results of the experiment.[64]
The lexical decision task involves subjects seeing or hearing an isolated word and answering whether or not it is a real word. It is frequently used in priming studies, since subjects are known to make a lexical decision more quickly if a word has been primed by a related word (as in "doctor" priming "nurse").[53][54][55]
Many studies, especially violation-based studies, have subjects make a decision about the "acceptability" (usually grammatical acceptability or semantic acceptability) of stimuli.[64][65][66][67][68] Such a task is often used to "ensure that subjects [are] reading the sentences attentively and that they [distinguish] acceptable from unacceptable sentences in the way the [experimenter] expect[s] them to do."[66]
Experimental evidence has shown that the instructions given to subjects in an acceptability judgment task can influence the subjects' brain responses to stimuli. One experiment showed that when subjects were instructed to judge the "acceptability" of sentences they did not show an N400 brain response (a response commonly associated with semantic processing), but they did show that response when instructed to ignore grammatical acceptability and only judge whether or not the sentences "made sense".[64]
Some studies use a "probe verification" task rather than an overt acceptability judgment; in this paradigm, each experimental sentence is followed by a "probe word", and subjects must answer whether or not the probe word had appeared in the sentence.[55][66]This task, like the acceptability judgment task, ensures that subjects are reading or listening attentively, but may avoid some of the additional processing demands of acceptability judgments, and may be used no matter what type of violation is being presented in the study.[55]
Subjects may be instructed not to judge whether or not the sentence is grammatically acceptable or logical, but whether the proposition expressed by the sentence is true or false. This task is commonly used in psycholinguistic studies of child language.[69][70]
Some experiments give subjects a "distractor" task to ensure that subjects are not consciously paying attention to the experimental stimuli; this may be done to test whether a certain computation in the brain is carried out automatically, regardless of whether the subject devotesattentional resourcesto it. For example, one study had subjects listen to non-linguistic tones (long beeps and buzzes) in one ear and speech in the other ear, and instructed subjects to press a button when they perceived a change in the tone; this supposedly caused subjects not to pay explicit attention to grammatical violations in the speech stimuli. The subjects showed amismatch response(MMN) anyway, suggesting that the processing of the grammatical errors was happening automatically, regardless of attention[37]—or at least that subjects were unable to consciously separate their attention from the speech stimuli.
Another related paradigm is the dual-task experiment, in which a subject must perform an extra task (such as sequential finger-tapping or articulating nonsense syllables) while responding to linguistic stimuli; this kind of experiment has been used to investigate the use of working memory in language processing.[71]
Some relevant journals include theJournal of NeurolinguisticsandBrain and Language. Both are subscription access journals, though some abstracts may be generally available.
|
https://en.wikipedia.org/wiki/Neurolinguistics
|
Linguistic predictionis a phenomenon inpsycholinguisticsoccurring whenever information about a word or other linguistic unit is activated before that unit is actually encountered. Evidence fromeyetracking,event-related potentials, and other experimental methods indicates that in addition to integrating each subsequent word into the context formed by previously encountered words, language users may, under certain conditions, try to predict upcoming words.
In particular, prediction seems to occur regularly when the context of a sentence greatly limits the possible words that have not yet been revealed. For instance, a person listening to a sentence like, "In the summer it is hot, and in the winter it is..." would be highly likely to predict the sentence completion "cold" in advance of actually hearing it. A form of prediction is also thought to occur in some types oflexical priming, a phenomenon whereby a word becomes easier to process if it is preceded by a related word.[1]Linguistic prediction is an active area of research inpsycholinguisticsandcognitive neuroscience.
In theeyetrackingvisual world paradigm, experimental subjects listen to a sentence while staring at an array of pictures on a computer monitor. Theireye movementsare recorded, allowing the experimenter to understand how language influences eye movements toward pictures related to the content of the sentence. Experiments of this type have shown that while listening to the verb in a sentence, comprehenders anticipatorily move their eyes to the picture of the verb's likelydirect object(e.g. "cake" rather than "ball" while hearing, "The boy will eat...").[2]Subsequent investigations using the same experimental setup showed that the verb'ssubjectcan also determine which object comprehenders anticipate (e.g., comprehenders look at the merry-go-round rather than the motorcycle while hearing, "The little girl will ride...").[3]In short, comprehenders use the information in the sentence context to predict the meanings of upcoming words. In these experiments, comprehenders used the verb and its subject to activate information about the verb's direct object before hearing that word. However, another experiment has shown that in a language with more flexible word order (German), comprehenders can also use context to predict the sentence's subject.[4]
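Analyses in this paradigm typically reduce the eyetracking record to the proportion of looks directed at each picture over time, so that anticipatory looks to the likely direct object can be seen before the word is actually heard. Below is a minimal, illustrative Python sketch of such an analysis; the column names, region-of-interest labels, and bin size are assumptions for illustration, not details of the cited studies.

```python
import pandas as pd

def fixation_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """Proportion of eyetracker samples on the target picture per time bin.

    `samples` is assumed to have one row per sample, with columns
    'time_ms' (time relative to verb onset) and 'roi' ('target',
    'distractor', etc.).
    """
    samples = samples.copy()
    samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms
    return (samples.assign(on_target=samples["roi"].eq("target"))
                   .groupby("bin")["on_target"]
                   .mean()
                   .reset_index(name="prop_target"))
```

A rise in the proportion of target looks before the direct object is spoken is what is taken as evidence of prediction.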
Eyetracking technology has also been used to monitor readers' eye movements while they read text on a computer screen. Data from this kind of experiment has supported the hypothesis that readers use contextual information to predict upcoming words during natural reading. Specifically, readers fixate their eyes on a word for a shorter time when the word occurs in a moderately or highly constraining context, compared to the same word in an unconstrained context. This is true regardless of the word's frequency or length. Readers are also more likely to skip over a word entirely, though only in highly constraining contexts.[5] Subsequent investigations of reading in the Chinese logographic script have shown that despite the large differences between the Chinese and English orthographies, readers exploit contextual information for prediction in similar ways, with the exception that Chinese readers were more likely to skip words in moderately constraining contexts.[6]
Computational models of eye movements during reading that capture word-predictability effects include Reichle and colleagues' E-Z Reader model[7] and Engbert and colleagues' SWIFT model.[8]
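Both models share the core assumption that a word's frequency and its predictability in context shorten the time needed to process it during reading. The sketch below illustrates only that shared qualitative idea; the functional form and constants are invented for illustration and are not the published equations of E-Z Reader or SWIFT.

```python
import math

def lexical_processing_ms(freq_per_million: float, predictability: float,
                          base: float = 250.0, freq_gain: float = 15.0,
                          pred_gain: float = 100.0) -> float:
    """Illustrative processing time: shorter for frequent, predictable words."""
    assert 0.0 <= predictability <= 1.0
    return base - freq_gain * math.log(freq_per_million) - pred_gain * predictability

print(lexical_processing_ms(1000, 0.9))  # frequent, predictable word: ~56 ms
print(lexical_processing_ms(5, 0.1))     # rare, unpredictable word:  ~216 ms
```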
The M100, the magnetic equivalent of the visual N1 potential (an event-related potential linked to visual processing and attention), has also been linked to prediction in language comprehension in a series of event-related magnetoencephalography (MEG) experiments. In these experiments, participants read words whose visual forms were either predictable or unpredictable based on prior linguistic context[9][10] or based on a recently seen picture.[11] The predictability of the word's visual form (but not the predictability of its meaning) affected the amplitude of the M100.
There is ongoing controversy about whether this M100 effect is related to theearly left anterior negativity(eLAN), an event-related potential response to words that is theorized to reflect the brain's assignment of localphrase structure.[12]
TheP2component is generally thought to reflect higher-order perceptual processing and its modulation by attention. However, it has also been linked to prediction of visual word forms. The P2 response to words in highly constraining contexts is often larger than the P2 response to words in less constraining contexts. When experimental participants read words that are presented to the left or right of their visual fixation (stimulating the oppositehemisphereof the brain first), the larger P2 for words in highly constraining contexts is observed only for right visual field presentation (targeting left hemisphere).[13]This is consistent with the PARLO hypothesis that linguistic prediction is mainly a function of the left hemisphere, discussed below.
The N400 is part of the normal ERP response to potentially meaningful stimuli, and its amplitude is inversely correlated with the predictability of a stimulus in a particular context.[14] In sentence processing, the predictability of a word is established by two related factors: 'cloze probability' and 'sentential constraint'. Cloze probability reflects the expectancy of a target word given the context of the sentence; it is determined by the percentage of individuals who supply the word when completing a sentence whose final word is missing. Kutas and colleagues found that the N400 to sentence-final words with a cloze probability of 90% was smaller (i.e., more positive) than the N400 for words with a cloze probability of 70%, which was in turn smaller than for words with a cloze probability of 30%.
Closely related, sentential constraint reflects the degree to which the context of the sentence constrains the number of acceptable continuations. Whereas cloze probability is the percentage of individuals who choose a particular word, constraint is the number of different words chosen by a representative sample of individuals. Although words that are not predicted elicit a larger N400, unpredicted words that are semantically related to the predicted word elicit a smaller N400 than unpredicted words that are semantically unrelated. When the sentence context is highly constraining, semantically related words receive further facilitation, in that the N400 to semantically related words is smaller in high-constraint sentences than in low-constraint sentences.[15][16][17] Evidence for the prediction of specific words comes from a study by DeLong et al.[18] DeLong and colleagues took advantage of the two indefinite articles in English, 'a' and 'an', used before words beginning with a consonant or a vowel respectively. They found that when the most probable sentence completion began with a consonant, the N400 was larger for 'an' than for 'a', and vice versa, suggesting that prediction occurs at both a semantic and a lexical level during language processing. (However, this finding has failed to replicate: in the largest multi-lab replication attempt to date, with 335 participants, no evidence for word-form prediction was found (Nieuwland et al., 2018).)
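Concretely, both quantities come from sentence-completion norms. The sketch below shows one way to compute them; the responses are invented, and constraint is operationalized here, as in the description above, as the number of distinct completions produced (fewer completions indicating a more constraining context).

```python
from collections import Counter

def cloze_norms(completions: list[str]) -> dict:
    """completions: words participants supplied to finish a sentence frame,
    e.g. "In the summer it is hot, and in the winter it is ___"."""
    counts = Counter(w.lower() for w in completions)
    n = len(completions)
    return {
        # Cloze probability: the share of respondents who produced each word.
        "cloze": {w: c / n for w, c in counts.items()},
        # Constraint: the number of distinct completions produced
        # (fewer distinct completions = a more constraining context).
        "distinct_completions": len(counts),
    }

norms = cloze_norms(["cold"] * 18 + ["freezing", "snowy"])
print(norms["cloze"]["cold"])         # 0.9 -> a high-cloze continuation
print(norms["distinct_completions"])  # 3   -> a fairly constraining context
```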
The P300, specifically the P3b, is an ERP response to improbable stimuli and is sensitive to the subjective probability that a particular stimulus will occur. The P300 has been closely tied to context updating, which can be initiated by unexpected stimuli.[19]
The P600 is an ERP response to syntactic violations, as well as to complex but error-free language.[20][21] A P600-like response is also observed for thematically implausible sentences, for example, "For breakfast, the eggs would only EAT toast and jam".[22] Both P600 responses are generally attributed to the process of revising or continuing the analysis of the sentence.[23] The syntactic P600 has been compared to the P300 in that both responses are sensitive to similar manipulations, most importantly the probability of the stimulus.[24] The similarity between the two responses may suggest that the P300 significantly contributes to the syntactic P600 response.
A late positivity is often observed subsequent to the N400. A recent meta-analysis of the ERP literature on language processing has identified two different Post-N400 Positivities (PNPs).[25] In comparing the PNP for congruent and incongruent sentence-final words, a parietal PNP is observed for incongruent words. This parietal PNP is similar to the typical P600 response, suggesting continued or revised analysis. Within the congruent condition, when comparing high- and low-cloze-probability sentence-final words, a PNP response (when it is observed) is generally distributed across the front of the scalp. A recent study has shown that the frontal PNP may reflect processing of an unexpected lexical item rather than an unexpected concept, suggesting that the frontal PNP reflects disconfirmed lexical predictions.[25]
Functional magnetic resonance imaging(fMRI) is aneuroimagingtechnology that usesnuclear magnetic resonanceto measure blood oxygenation levels in the brain and spinal cord. Because neural activity affects blood flow, the pattern of thehemodynamic responseis thought to correspond closely to the pattern of neural activity. The fine spatial resolution afforded by fMRI allowscognitive neuroscientiststo see in detail which areas of the brain are activated in relation to an experimental task. However, the hemodynamic response is much slower than the neural activity measured byEEGandMEG. This poor sensitivity to timing information makes fMRI a less useful technique than EEG oreyetrackingfor studying linguistic prediction.
One exception is an fMRI test of the differences in neural activation between strategic and automaticsemantic priming. When the time between the prime and the target word is short (around 150 milliseconds), priming is theorized to rely on automatic neural processes. However, at longer time intervals (approaching 1 second), it is thought that experimental subjects strategically predict related upcoming words and suppress unrelated words, leading to a processing penalty in the event that an unrelated word actually occurs.[1]An fMRI test of this hypothesis showed that at longer intervals, the processing penalty for an incorrect prediction is related to heightened activity in theanterior cingulate gyrusandBroca's area.[26]
The PARLO ("Production Affects Reception in Left Only") framework is a theory of the neural domains supporting language prediction. It is based on evidence that shows that the left and right hemispheres differentially contribute to language comprehension.[17]Generally, the neural structures that supportlanguage productionare predominantly in the left hemisphere for most individuals creating ahemispheric asymmetry, which results in differential language processing abilities of the two hemispheres. Because of its spatially close ties and integration with language production, left hemisphere language comprehension seems to be driven by expectancy and context in atop-downmanner, whereas the right hemisphere seems to integrate information in abottom-upmanner.[17]The PARLO framework suggests that both prediction and integration occur during language processing but rely on the distinct contributions of the two hemispheres of the brain.
The surprisal theory is a theory ofsentence processingbased oninformation theory.[27]In the surprisal theory, the cost of processing a word is determined by itsself-information, or how predictable the word is, given its context. A highly probable word carries a small amount of self-information and would therefore be processed easily, as measured by reducedreaction time, a smaller N400 response, or reduced fixation times in an eyetracking reading study. Empirical tests of this theory have shown a high degree of match between processing cost measures and the self-information values assigned to words.[28][29]
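In information-theoretic terms, the surprisal (self-information) of a word given the words that precede it is

```latex
S(w_i) = -\log_2 P(w_i \mid w_1, \ldots, w_{i-1})
```

so a continuation with contextual probability 0.5 carries 1 bit of surprisal, while one with probability 0.01 carries about 6.6 bits; the theory predicts correspondingly longer reading times and a larger N400 for the less probable word.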
|
https://en.wikipedia.org/wiki/Prediction_in_language_comprehension
|
Psycholinguisticsorpsychology of languageis the study of the interrelation between linguistic factors and psychological aspects.[1]The discipline is mainly concerned with the mechanisms by which language is processed and represented in the mind and brain; that is, thepsychologicalandneurobiologicalfactors that enablehumansto acquire, use, comprehend, and producelanguage.[2]
Psycholinguistics is concerned with the cognitive faculties and processes that are necessary to produce the grammatical constructions of language. It is also concerned with the perception of these constructions by a listener.
Initial forays into psycholinguistics were made in the philosophical and educational fields, mainly because the discipline was housed in departments other than the applied sciences and lacked cohesive data on how the human brain functioned. Modern research makes use of biology, neuroscience, cognitive science, linguistics, and information science to study how the mind-brain processes language, and draws less on the social sciences, human development, communication theories, and infant development, among other fields.
There are several subdisciplines with non-invasive techniques for studying the neurological workings of the brain. For example,neurolinguisticshas become a field in its own right, anddevelopmental psycholinguistics, as a branch of psycholinguistics, concerns itself with a child's ability to learn language.
Psycholinguistics is an interdisciplinary field drawing researchers from a variety of backgrounds, including psychology, cognitive science, linguistics, speech and language pathology, and discourse analysis. Psycholinguists study how people acquire and use language in several main ways, outlined below.
A researcher interested in language comprehension may studywordrecognition duringreading, to examine the processes involved in the extraction oforthographic,morphological,phonological, andsemanticinformation from patterns in printed text. A researcher interested in language production might study how words are prepared to be spoken starting from the conceptual or semantic level (this concerns connotation, and possibly can be examined through the conceptual framework concerned with thesemantic differential).Developmental psycholinguistsstudy infants' and children's ability to learn and process language.[3]
Psycholinguists further divide their studies according to the different components that make up human language.
Linguistics-related areas include phonetics and phonology, morphology, syntax, semantics, and pragmatics.
In seeking to understand the properties of language acquisition, psycholinguistics has roots in debates regarding innate versus acquired behaviors (both in biology and psychology). For some time, the concept of an innate trait was something that was not recognized in studying the psychology of the individual.[4]However, with the redefinition of innateness as time progressed, behaviors considered innate could once again be analyzed as behaviors that interacted with the psychological aspect of an individual. After the diminished popularity of thebehavioristmodel,ethologyreemerged as a leading train of thought within psychology, allowing the subject of language, aninnate human behavior, to be examined once more within the scope of psychology.[4]
The theoretical framework for psycholinguistics ostensibly began to be developed near the end of the 19th century as the "psychology of language". The work of Edward Thorndike and Frederic Bartlett laid the foundations of what would come to be known as "psycholinguistics".
The use of the term "psycholinguistic" is first encountered in adjective form in psychologist Jacob Kantor's 1936 book An Objective Psychology of Grammar.[5]: 260
The term "psycholinguistics" came into wider usage in 1946 when Kantor's student Nicholas Pronko published an article entitled "Psycholinguistics: A Review".[6]Pronko's intention was to unify related theoretical approaches under a single name.[5][6]The term was used for the first time to talk about an interdisciplinary field "that could be coherent",[5]inCharles E. OsgoodandThomas A. Sebeok'sPsycholinguistics: A Survey of Theory and Research Problems(1954).[7]: 1679–1692
Though there is still much debate, there are two primary theories of childhood language acquisition: the innatist view, on which core aspects of the language faculty are innate, and the learning-based view, on which language is acquired through experience. Both are discussed below.
The innatist perspective began in 1959 with Noam Chomsky's highly critical review of B.F. Skinner's Verbal Behavior (1957).[8] This review helped start what has been called the cognitive revolution in psychology. Chomsky posited that humans possess a special, innate ability for language, and that complex syntactic features, such as recursion, are "hard-wired" in the brain. These abilities are thought to be beyond the grasp of even the most intelligent and social non-humans. Chomsky argued that children acquiring a language face a vast search space among all possible human grammars, yet there is no evidence that children receive sufficient input to learn all the rules of their language. Hence, there must be some other innate mechanism that endows humans with the ability to learn language. According to the "innateness hypothesis", such a language faculty is what defines human language and makes that faculty different from even the most sophisticated forms of animal communication.
The field of linguistics and psycholinguistics has since been defined by pro-and-con reactions to Chomsky. The view in favor of Chomsky still holds that the human ability to use language (specifically the ability to use recursion) is qualitatively different from any sort of animal ability.[9]
The view that language must be learned was especially popular before 1960 and is well represented by thementalistictheories ofJean Piagetand the empiricistRudolf Carnap. Likewise, the behaviorist school of psychology puts forth the point of view that language is a behavior shaped by conditioned response; hence it is learned. The view that language can be learned has had a recent resurgence inspired byemergentism. This view challenges the "innate" view as scientificallyunfalsifiable; that is to say, it cannot be tested. With the increase in computer technology since the 1980s, researchers have been able to simulate language acquisition using neural network models.[10]
The structures and uses of language are related to the formation of ontological insights.[11]Some see this system as "structured cooperation between language-users" who use conceptual andsemantic differencein order to exchange meaning and knowledge, as well as give meaning to language, thereby examining and describing "semantic processes bound by a 'stopping' constraint which are not cases of ordinary deferring." Deferring is normally done for a reason, and a rational person is always disposed to defer if there is good reason.[12]
The theory of the "semantic differential" supposes universal distinctions such as evaluation (e.g., good versus bad), potency (e.g., strong versus weak), and activity (e.g., active versus passive).[13]
One question in the realm of language comprehension is how people understand sentences as they read (i.e., sentence processing). Experimental research has spawned several theories about the architecture and mechanisms of sentence comprehension. These theories are typically concerned with the types of information contained in the sentence that the reader can use to build meaning, and with the point at which that information becomes available to the reader. Issues such as "modular" versus "interactive" processing have been theoretical divides in the field.
A modular view of sentence processing assumes that the stages involved in reading a sentence function independently as separate modules. These modules have limited interaction with one another. For example, one influential theory of sentence processing, the "garden-path theory", states that syntactic analysis takes place first. Under this theory, as the reader is reading a sentence, he or she creates the simplest structure possible, to minimize effort and cognitive load.[14]This is done without any input fromsemantic analysisor context-dependent information. Hence, in the sentence "The evidence examined by the lawyer turned out to be unreliable", by the time the reader gets to the word "examined" he or she has committed to a reading of the sentence in which the evidence is examining something because it is the simplest parsing. This commitment is made even though it results in an implausible situation: evidence cannot examine something. Under this "syntax first" theory, semantic information is processed at a later stage. It is only later that the reader will recognize that he or she needs to revise the initial parsing into one in which "the evidence" is being examined. In this example, readers typically recognize their mistake by the time they reach "by the lawyer" and must go back and reevaluate the sentence.[15]This reanalysis is costly and contributes to slower reading times. A 2024 study found that during self-paced reading tasks, participants progressively read faster and recalled information more accurately, suggesting that task adaptation is driven by learning processes rather than by declining motivation.[16]
In contrast to the modular view, an interactive theory of sentence processing, such as aconstraint-basedlexical approach assumes that all available information contained within a sentence can be processed at any time.[17]Under an interactive view, the semantics of a sentence (such as plausibility) can come into play early on to help determine the structure of a sentence. Hence, in the sentence above, the reader would be able to make use of plausibility information in order to assume that "the evidence" is being examined instead of doing the examining. There are data to support both modular and interactive views; which view is correct is debatable.
When reading, saccades can cause the mind to skip over words because it does not see them as important to the sentence; the mind either omits them from the sentence entirely or supplies the wrong word in their place. This can be seen in "Paris in the the Spring", a common psychological test in which the mind will often skip the second "the", especially when there is a line break between the two.[18]
Language production refers to how people produce language, either in written or spoken form, in a way that conveys meanings comprehensible to others. One of the most effective ways to explain how people represent meanings using rule-governed languages is by observing and analyzing instances of speech errors. These include speech disfluencies such as false starts, repetitions, reformulations, and constant pauses in between words or sentences, as well as slips of the tongue, such as blendings, substitutions, exchanges (e.g. Spoonerisms), and various pronunciation errors.
These speech errors have significant implications for understanding how language is produced, in that they reveal properties of the underlying planning and encoding processes.[19]
It is useful to differentiate between three separate phases of language production: conceptualization (deciding what to express), formulation (translating the intended message into linguistic form), and articulation (executing the motor plan for speech).[20]
Psycholinguistic research has largely concerned itself with the study of formulation because the conceptualization phase remains largely elusive and mysterious.[20]
Linguistic relativity, often associated with the Sapir-Whorf hypothesis, posits that the structure of a language influences cognitive processes and world perception. While early formulations of this idea were largely speculative, modern psycholinguistic research has reframed it as a testable hypothesis within the broader study of language and thought.
Contemporary approaches to linguistic relativity are often discussed from several perspectives, outlined below.
A key refinement of linguistic relativity is Slobin’s (1996) "Thinking for Speaking" hypothesis, which argues that language influences cognition most strongly when individuals prepare to communicate. Unlike traditional views of linguistic relativity, which suggest that language passively shapes thought, "Thinking for Speaking" proposes that speakers actively engage with linguistic categories and structures while constructing utterances.[23]
From a psycholinguistic standpoint, research on linguistic relativity intersects with conceptual representations, perceptual learning, and cognitive flexibility. Experimental studies have tested these ideas by examining how speakers of different languages categorize the world differently. For instance, cross-linguistic comparisons in spatial cognition reveal that languages with absolute spatial frames (e.g., Guugu Yimithirr) encourage speakers to encode space differently than languages with relative spatial frames (e.g., English).[21]
In the domain of bilingual cognition, psycholinguistic research suggests that bilinguals may experience cognitive restructuring, where language context modulates perception and categorization. Recent studies indicate that bilinguals can flexibly switch between different conceptual systems, depending on the language they are using, particularly in domains such as motion perception, event construal, and time perception.[24]
Overall, linguistic relativity in psycholinguistics is no longer seen as a rigid determinism of thought by language, but rather as a gradual, experience-based modulation of cognition by linguistic structures. This perspective has led to a shift from a purely linguistic hypothesis to an integrative cognitive science framework incorporating evidence from experimental psychology, neuroscience, and computational modeling.[25]
Many of the experiments conducted in psycholinguistics, especially early on, are behavioral in nature. In these types of studies, subjects are presented with linguistic stimuli and asked to respond. For example, they may be asked to make a judgment about a word (lexical decision), reproduce the stimulus, or say a visually presented word aloud. Reaction times to respond to the stimuli (usually on the order of milliseconds) and proportion of correct responses are the most often employed measures of performance in behavioral tasks. Such experiments often take advantage ofpriming effects, whereby a "priming" word or phrase appearing in the experiment can speed up the lexical decision for a related "target" word later.[26]
As an example of how behavioral methods can be used in psycholinguistics research, Fischler (1977) investigated word encoding, using a lexical-decision task.[27]He asked participants to make decisions about whether two strings of letters were English words. Sometimes the strings would be actual English words requiring a "yes" response, and other times they would be non-words requiring a "no" response. A subset of the licit words were related semantically (e.g., cat–dog) while others were unrelated (e.g., bread–stem). Fischler found that related word pairs were responded to faster, compared to unrelated word pairs, which suggests that semantic relatedness can facilitate word encoding.[27]
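The logic of such an analysis can be shown in a few lines. The reaction times below are invented for illustration and are not Fischler's data; the priming effect is simply the difference between the condition means.

```python
from statistics import mean

# Hypothetical lexical-decision times in milliseconds (invented values).
related = [512, 498, 530, 505, 521]    # semantically related pairs (cat-dog)
unrelated = [569, 584, 555, 572, 590]  # unrelated pairs (bread-stem)

priming_effect = mean(unrelated) - mean(related)
print(f"mean related   = {mean(related):.0f} ms")    # 513 ms
print(f"mean unrelated = {mean(unrelated):.0f} ms")  # 574 ms
print(f"priming effect = {priming_effect:.0f} ms")   # faster 'yes' when primed
```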
Recently,eye trackinghas been used to study onlinelanguage processing. Beginning with Rayner (1978), the importance of understanding eye-movements during reading was established.[28]Later, Tanenhaus et al. (1995) used a visual-world paradigm to study the cognitive processes related to spoken language.[29]Assuming that eye movements are closely linked to the current focus of attention, language processing can be studied by monitoring eye movements while a subject is listening to spoken language.
Theanalysisof systematicerrors in speech, as well as the writing andtypingof language, can provide evidence of the process that has generated it. Errors of speech, in particular, grant insight into how the mind produces language while a speaker is mid-utterance. Speech errors tend to occur in thelexical,morpheme, andphonemeencoding steps of language production, as seen by the ways errors can manifest themselves.[30]
The types of speech errors include the blendings, substitutions, and exchanges described above, among others.[30][31][32]
Speech errors usually occur in the stages that involve lexical, morpheme, or phoneme encoding, and usually not in the first step of semantic encoding.[33] This can be attributed to the fact that, at the semantic encoding stage, the speaker is still formulating the idea of what to say; unless the speaker changes their mind, that idea itself cannot be mistaken for what they wanted to say.
Until the recent advent ofnon-invasivemedical techniques, brain surgery was the preferred way for language researchers to discover how language affects the brain. For example, severing thecorpus callosum(the bundle of nerves that connects the two hemispheres of the brain) was at one time a treatment for some forms ofepilepsy. Researchers could then study the ways in which the comprehension and production of language were affected by such drastic surgery. When an illness made brain surgery necessary, language researchers had an opportunity to pursue their research.
Newer, non-invasive techniques now include brain imaging bypositron emission tomography(PET);functional magnetic resonance imaging(fMRI);event-related potentials(ERPs) inelectroencephalography(EEG) andmagnetoencephalography(MEG); andtranscranial magnetic stimulation(TMS). Brain imaging techniques vary in their spatial and temporal resolutions (fMRI has a resolution of a few thousand neurons per pixel, and ERP has millisecond accuracy). Each methodology has advantages and disadvantages for the study of psycholinguistics.[34]
Computational modelling, such as theDRC modelof reading and word recognition proposed byMax Coltheartand colleagues,[35]is another methodology, which refers to the practice of setting up cognitive models in the form of executable computer programs. Such programs are useful because they require theorists to be explicit in their hypotheses and because they can be used to generate accurate predictions for theoretical models that are so complex thatdiscursive analysisis unreliable. Other examples of computational modelling areMcClellandandElman'sTRACEmodel ofspeech perception[36]and Franklin Chang's Dual-Path model of sentence production.[37]
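To give a flavor of the practice, the sketch below implements a deliberately tiny interactive-activation-style competition, the general mechanism family behind TRACE and the lexical route of the DRC model. It is not any published model: the vocabulary, parameters, and update rule are illustrative assumptions.

```python
# Toy interactive-activation-style word recognition: word units accumulate
# bottom-up evidence from matching letters and inhibit one another.
WORDS = ["cat", "cot", "dog"]

def activate(input_letters: str, cycles: int = 10,
             excite: float = 0.1, inhibit: float = 0.2, decay: float = 0.05):
    act = {w: 0.0 for w in WORDS}
    for _ in range(cycles):
        new_act = {}
        for w in WORDS:
            # Bottom-up support: one unit of evidence per matching letter position.
            evidence = sum(a == b for a, b in zip(w, input_letters))
            # Lateral inhibition: competitors' activation suppresses this word.
            competition = sum(act[v] for v in WORDS if v != w)
            updated = (1 - decay) * act[w] + excite * evidence - inhibit * competition
            new_act[w] = min(1.0, max(0.0, updated))
        act = new_act
    return act

print(activate("cat"))  # 'cat' wins; 'cot' remains a weaker competitor; 'dog' near zero
```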
The psychophysical approach in psycholinguistics applies quantitative measurement techniques to investigate how linguistic structures influence perception and cognitive processes. Unlike traditional behavioral experiments that rely on categorical judgments or reaction times, psychophysical methods allow for precise, continuous measurement of perceptual and cognitive changes induced by language.
A key advantage of psychophysical methods is their ability to capture fine-grained perceptual effects of language. For instance, studies on color perception have used just-noticeable difference (JND) thresholds to show that speakers of languages with finer color distinctions (e.g., Russian for light vs. dark blue) exhibit heightened perceptual sensitivity at linguistic category boundaries.[22]
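Discrimination thresholds of this kind are commonly estimated with adaptive staircase procedures. The sketch below simulates a two-down/one-up staircase, which converges near the 70.7%-correct point; the simulated observer, step size, and averaging rule are all illustrative assumptions.

```python
import random

def simulated_observer(delta: float, true_jnd: float = 5.0) -> bool:
    """Respond correctly with probability rising from chance as delta grows."""
    p_correct = 0.5 + 0.5 * min(1.0, delta / (2 * true_jnd))
    return random.random() < p_correct

def two_down_one_up(start: float = 20.0, step: float = 1.0,
                    trials: int = 300) -> float:
    delta, streak, direction, reversals = start, 0, -1, []
    for _ in range(trials):
        if simulated_observer(delta):
            streak += 1
            if streak == 2:              # two correct in a row: make it harder
                streak = 0
                if direction == +1:      # we were moving up: record a reversal
                    reversals.append(delta)
                direction = -1
                delta = max(0.5, delta - step)
        else:                            # any error: make it easier
            streak = 0
            if direction == -1:          # we were moving down: record a reversal
                reversals.append(delta)
            direction = +1
            delta += step
    last = reversals[-8:]                # average the final few reversal points
    return sum(last) / len(last)

print(f"estimated 70.7%-correct threshold: {two_down_one_up():.1f}")
```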
Recent psychophysical research has also been applied to time perception, investigating how bilinguals process temporal information differently based on their linguistic background. Using psychophysical duration estimation tasks, researchers have demonstrated that bilinguals may exhibit different time perception patterns depending on which language they are using at the moment.[24]
These methods provide insights into how linguistic categories shape cognitive processing at a perceptual level, distinguishing between effects that arise from language structure itself and those that emerge from general cognitive mechanisms. As psycholinguistics continues to integrate computational and neuroscientific approaches, psychophysical techniques offer a bridge between language processing and sensory cognition, refining our understanding of how language interacts with perception.
Psycholinguistics is concerned with the nature of the processes that the brain undergoes in order to comprehend and produce language. For example, thecohort modelseeks to describe how words are retrieved from themental lexiconwhen an individual hears or sees linguistic input.[26][38]Using newnon-invasiveimaging techniques, recent research seeks to shed light on the areas of the brain involved in language processing.
Another unanswered question in psycholinguistics is whether the human ability to use syntax originates from innate mental structures or social interaction, and whether or not some animals can be taught the syntax of human language.
Two other major subfields of psycholinguistics investigatefirst language acquisition, the process by which infants acquire language, andsecond language acquisition. It is much more difficult for adults to acquiresecond languagesthan it is for infants to learn their first language (infants are able to learn more than one native language easily). Thus,sensitive periodsmay exist during which language can be learned readily.[39]A great deal of research in psycholinguistics focuses on how this ability develops and diminishes over time. It also seems to be the case that the more languages one knows, the easier it is to learn more.[40]
The field ofaphasiologydeals with language deficits that arise because of brain damage. Studies in aphasiology can offer both advances in therapy for individuals suffering from aphasia and further insight into how the brain processes language.
|
https://en.wikipedia.org/wiki/Psycholinguistics
|
Readingis the process of taking in the sense or meaning ofsymbols, often specifically those of awrittenlanguage, by means ofsightortouch.[1][2][3][4]
For educators andresearchers, reading is a multifaceted process involving such areas as word recognition,orthography(spelling),alphabetics,phonics,phonemic awareness, vocabulary, comprehension, fluency, and motivation.[5][6]
Other types of reading and writing, such aspictograms(e.g., ahazard symboland anemoji), are not based on speech-basedwriting systems.[7]The common link is the interpretation of symbols to extract the meaning from the visual notations or tactile signals (as in the case ofbraille).[8]
Reading is generally an individual activity, done silently, although on occasion a person may read aloud for other listeners, or may read aloud for their own better comprehension. Before the reintroduction of separated text (spaces between words) in the late Middle Ages, the ability to read silently was considered rather remarkable.[10][11]
Major predictors of an individual's ability to read both alphabetic and non-alphabetic scripts are oral language skills,[12]phonological awareness,rapid automatized namingandverbal IQ.[13]
As aleisure activity, children and adults read because it is enjoyable and interesting. In the US, about half of all adults read one or more books for pleasure each year.[14]About 5% read more than 50 books per year.[14]Americans read more if they: have more education, read fluently and easily, are female, live in cities, and have highersocioeconomic status.[14]Children become better readers when they know more about the world in general, and when they perceive reading as fun rather than as a chore to be performed.[14]
Reading is an essential part ofliteracy, yet from a historical perspective literacy is about having the ability to both read and write.[15][16][17][18]
Since the 1990s, some organizations have defined literacy in a wide variety of ways that may go beyond the traditional ability to read and write.
In the academic field, some view literacy in a more philosophical manner and propose the concept of "multiliteracies". For example, they say, "this huge shift from traditional print-based literacy to 21st century multiliteracies reflects the impact of communication technologies and multimedia on the evolving nature of texts, as well as the skills and dispositions associated with the consumption, production, evaluation, and distribution of those texts (Borsheim, Meritt, & Reed, 2008, p. 87)".[30][31]According to cognitive neuroscientistMark Seidenbergthese "multiple literacies" have allowed educators to change the topic from reading and writing to "Literacy". He goes on to say that some educators, when faced with criticisms of how reading is taught, "didn't alter their practices, they changed the subject".[32]
Also, some organizations might include numeracy skills and technology skills separately but alongside literacy skills.[33]
In addition, since the 1940s the term literacy is often used to mean having knowledge or skill in a particular field (e.g.,computer literacy,ecological literacy,health literacy,media literacy, quantitative literacy (numeracy)[29]andvisual literacy).[34][35][36][37]
In order to understand a text, it is usually necessary to understand the spoken language associated with that text. In this way, writing systems are distinguished from many other symbolic communication systems.[38]Once established, writing systems on the whole change more slowly than their spoken counterparts, and often preserve features and expressions which are no longer current in the spoken language. The great benefit of writing systems is their ability to maintain a persistent record of information expressed in a language, which can be retrieved independently of the initial act of formulation.[38]
Reading for pleasure has been linked to increased cognitive progress in vocabulary and mathematics during adolescence.[39][40][41]Sustained high volume lifetime reading has been associated with high levels of academic attainment.[42]
Research suggests that reading can improve stress management,[43]memory,[43]focus,[44]writing skills,[44]andimagination.[45]
The cognitive benefits of reading continue into mid-life and the senior years.[46][47][48]
Research suggests that reading books and writing are among the brain-stimulating activities that can slow down cognitive decline in seniors.[49]
Reading has been the subject of considerable research and reporting for decades. Many organizations measure and report on reading achievement for children and adults (e.g., NAEP, PIRLS, PISA, PIAAC, and EQAO).
Researchers have concluded that approximately 95% of students can be taught to read by the end of the first or second year of school, yet in many countries 20% or more do not meet that expectation.[50][51]
A 2012 study in the U.S. found that 33% of grade three children had low reading scores – however, they comprised 63% of the children who did not graduate from high school. Poverty also had an additional negative impact on high school graduation rates.[52]
According to the 2019Nation's Report card, 34% of grade four students in the United States failed to perform at or above theBasic reading level. There was a significant difference by race and ethnicity (e.g., black students at 52% and white students at 23%). After the impact of theCOVID-19 pandemicthe average basic reading score dropped by 3% in 2022.[53]See more aboutthe breakdown by ethnicity in 2019 and 2022 here. In 2022, 30% of grade eight students failed to perform at or above the NAEP Basic level, which was 3 points lower compared to 2019.[54]According to a 2023 study in California, only 46.6% of grade three students achieved the English reading standards.[55][56]Another report states that many teenagers who've spent time in California's juvenile detention facilities get high school diplomas with grade-school reading skills. "There are kids getting their high school diplomas who aren't able to even read and write." During a five-year span beginning in 2018, 85% of these students who graduated from high school did not pass a 12th-grade reading assessment.[57]
Between 2013 and 2024, 37 US States passed laws or implemented new policies related to evidence-based reading instruction.[58]In 2023, New York City set about to require schools to teach reading with an emphasis onphonics. In that city, less than half of the students from the third grade to the eighth grade of school scored as proficient on state reading exams. More than 63% of Black and Hispanic test-takers did not make the grade.[59]
Globally, theCOVID-19 pandemiccreated a substantial overall learning deficit in reading abilities and other academic areas. It arose early in the pandemic and persists over time, and is particularly large among children from low socio-economic backgrounds.[60][61]In the US, several research studies show that, in the absence of additional support, there is nearly a 90 percent chance that a poor reader in Grade 1 will remain a poor reader.[62]
In Canada, the province ofOntarioreported that 27% of grade three students did not meet the provincial reading standards in 2023.[63]Also in Ontario, 53% of grade three students with special education needs (students who have an Individual Education Plan), were not meeting the provincial standards in 2022.[64]The province ofNova Scotiareported that 32% of grade three students did not meet the provincial reading standards in 2022.[65]The province ofNew Brunswickreported that 43.4% and 30.7% did not meet the Reading Comprehension Achievement Levels for grades four and six respectively in 2023.[66]
The Progress in International Reading Literacy Study (PIRLS) publishes reading achievement for fourth graders in 50 countries.[67]The five countries with the highest overall reading average are the Russian Federation, Singapore, Hong Kong SAR, Ireland and Finland. Some others are: England 10th, United States 15th, Australia 21st, Canada 23rd, and New Zealand 33rd.[68][69][70]
The Programme for International Student Assessment (PISA) measures 15-year-old school pupils' scholastic performance in mathematics, science, and reading.[71] Critics, however, say PISA is fundamentally flawed in its underlying view of education, its implementation, and its interpretation and impact on education globally.[72][73][74]
The reading levels of adults, ages 16–65, in 39 countries are reported by theProgramme for the International Assessment of Adult Competencies(PIAAC).[75]Between 2011 and 2018, PIAAC reports the percentage of adults readingat-or-below level one(the lowest of five levels). Some examples are Japan 4.9%, Finland 10.6%, Netherlands 11.7%, Australia 12.6%, Sweden 13.3%, Canada 16.4%, England (UK) 16.4%, and the United States 16.9%.[76]
According to theWorld Bank, 53% of all children in low-and-middle-income countries suffer from 'learning poverty'. In 2019, using data from theUNESCOInstitute for Statistics, they published a report entitledEnding Learning Poverty: What will it take?.[77]Learning poverty is defined as being unable to read and understand a simple text by age 10.
Although they say that all foundational skills are important – including reading, numeracy, basic reasoning ability, socio-emotional skills, and others – they focus specifically on reading. Their reasoning is that reading proficiency is an easily understood metric of learning, reading is a student's gateway to learning in every other area, and reading proficiency can serve as a proxy for foundational learning in other subjects.
They suggest five pillars to reduce learning poverty.
Learning to readorreading skills acquisitionis the acquisition and practice of the skills necessary to understand the meaning behind printed words. For a skilled reader, the act of reading feels simple, effortless, and automatic.[78]However, the process of learning to read is complex and builds on cognitive, linguistic, and social skills developed from a very early age. As one of the four core language skills (listening, speaking, reading and writing),[79][80]reading is vital to gaining a command of written language.
In the United States and elsewhere, it is widely believed that students who lack proficiency in reading by the end of grade three may face obstacles for the rest of their academic career.[81][82][83]For example, it is estimated that they would not be able to read half of the material they will encounter in grade four.[84]
In 2019, among American fourth-graders in public schools, only 58% of Asian, 45% of Caucasian, 23% of Hispanic, and 18% of Black students performed at or above the proficient level of the Nation's Report Card.[85] Also, in 2012 it was reported that 15-year-old students in the United Kingdom were reading at the level expected of 12-year-old students.[86]
As a result, many governments put practices in place to ensure that students are reading at grade level by the end of grade three. An example of this is the Third Grade Reading Guarantee created by the State ofOhioin 2017. This is a program to identify students from kindergarten through grade three that are behind in reading, and provide support to make sure they are on track for reading success by the end of grade three.[87][88]This is also known asremedial education. Another example is the policy in England whereby any pupil who is struggling to decode words properly by year three must "urgently" receive help through a "rigorous and systematic phonics programme".[89]
In 2016, out of 50 countries, the United States achieved the 15th highest score in grade-four reading ability.[90]The ten countries with the highest overall reading average are the Russian Federation, Singapore, Hong Kong SAR, Ireland, Finland, Poland, Northern Ireland, Norway, Chinese Taipei and England (UK). Some others are: Australia (21st), Canada (23rd), New Zealand (33rd), France (34th), Saudi Arabia (44th), and South Africa (50th).
Spoken language is the foundation of learning to read (long before children see any letters) and children's knowledge of the phonological structure of language is a good predictor of early reading ability. Spoken language is dominant for most of childhood; however, reading ultimately catches up and surpasses speech.[91][92][93][94]
By their first birthday most children have learned all the sounds in their spoken language. However, it takes longer for them to learn the phonological form of words and to begin developing a spoken vocabulary.[12]
Children acquire a spoken language in a few years. Five-to-six-year-old English learners have vocabularies of 2,500 to 5,000 words, and add 5,000 words per year for the first several years of schooling. This rapid learning rate cannot be accounted for by the instruction they receive. Instead, children learn that the meaning of a new word can be inferred because it occurs in the same context as familiar words (e.g.,lionis often seen withcowardlyandking).[95]As British linguistJohn Rupert Firthsays, "You shall know a word by the company it keeps".
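This distributional idea can be made concrete with a toy co-occurrence count. In the sketch below, the corpus, the context window, and the nonsense word 'zonky' are all invented for illustration.

```python
from collections import Counter

corpus = ("the cowardly lion spoke to the king . "
          "the brave lion is the king of beasts . "
          "the zonky is the king of beasts .").split()

def contexts(target: str, window: int = 3) -> Counter:
    """Count the words appearing within `window` tokens of each occurrence of target."""
    ctx = Counter()
    for i, w in enumerate(corpus):
        if w == target:
            ctx.update(corpus[max(0, i - window):i] + corpus[i + 1:i + 1 + window])
    return ctx

# The unknown word 'zonky' shares contexts with 'lion' (e.g., it occurs near
# 'king'), so a learner can infer that it names something lion-like.
print((contexts("zonky") & contexts("lion")).most_common())
```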
The environment in which children live may also impact their ability to acquire reading skills. Children who are regularly exposed to chronic environmental noise pollution, such as highway traffic noise, have been known to show decreased ability to discriminate betweenphonemes(oral language sounds) as well as lower reading scores on standardized tests.[96]
Children learn to speak naturally – by listening to other people speak. However, reading is not a natural process, and many children need to learn to read through a process that involves "systematic guidance and feedback".[99][100][101][102]
So, "reading to children is not the same as teaching children to read".[103]Nonetheless, reading to children is important because it socializes them to the activity of reading; it engages them; it expands their knowledge of spoken language; and it enriches their linguistic ability by hearing new and novel words and grammatical structures.[41]
However, there is some evidence that "shared reading" with children does help to improve reading if the children's attention is directed to the words on the page as they are being read to.[97][98]
There is some debate as to the optimum age to teach children to read.
TheCommon Core State Standards Initiative(CCSS) in the United States has standards for foundational reading skills in kindergarten and grade one that include instruction in print concepts, phonological awareness, phonics, word recognition, and fluency.[104]However, some critics of CCSS say that "To achieve reading standards usually calls for long hours of drill and worksheets – and reduces other vital areas of learning such as math, science, social studies, art, music and creative play".[105]
The PISA 2007 OECD data from 54 countries demonstrates "no association between school entry age ... and reading achievement at age 15".[106] Also, a German study of 50 kindergartens compared children who, at age 5, had spent a year in either an "academically focused" or a "play-arts focused" kindergarten, and found that in time the two groups became indistinguishable in reading skill.[107] The authors conclude that the effects of early reading are like "watering a garden before a rainstorm; the earlier watering is rendered undetectable by the rainstorm, the watering wastes precious water, and the watering distracts the gardener from other important preparatory groundwork".[106]
Some scholars favor adevelopmentally appropriate practice(DAP) in which formal instruction on reading begins when children are about six or seven years old. And to support that theory some point out that children inFinlandstart school at age seven (Finland ranked 5th in the 2016PIRLSinternational grade four reading achievement.)[108]In a discussion on academic kindergartens, professor of child developmentDavid Elkindhas argued that, since "there is no solid research demonstrating that early academic training is superior to (or worse than) the more traditional, hands-on model of early education", educators should defer to developmental approaches that provide young children with ample time and opportunity to explore the natural world on their own terms.[109]Elkind emphasized the principle that "early education must start with the child, not with the subject matter to be taught".[109]In response,Grover J. Whitehurst, Director, Brown Center on Education Policy, (part ofBrookings Institution)[110]said David Elkind is relying too much on philosophies of education rather than science and research. He continues to say education practices are "doomed to cycles of fad and fancy" until they become more based onevidence-based practice.[111]
On the subject of Finland's academic results, as some researchers point out, prior to starting school Finnish children must participate in one year of compulsory free pre-primary education, and most are reading before they start school.[112][113] And, with respect to developmentally appropriate practice (DAP), in 2019 the National Association for the Education of Young Children, Washington, D.C., released a draft position paper on DAP saying "The notion that young children are not ready for academic subject matter is a misunderstanding of developmentally appropriate practice; particularly in grades 1 through 3, almost all subject matter can be taught in ways that are meaningful and engaging for each child".[114] And, researchers at The Institutes for the Achievement of Human Potential say it is a myth that early readers are bored or become troublemakers in school.[115]
Other researchers and educators favor limited amounts of literacy instruction at the age of four and five, in addition to non-academic, intellectually stimulating activities.[116]
Reviews of the academic literature by theEducation Endowment Foundationin the UK have found that starting literacy teaching in preschool has "been consistently found to have a positive effect on early learning outcomes"[117]and that "beginning early years education at a younger age appears to have a high positive impact on learning outcomes".[118]This supports current standard practice in the UK which includes developing children's phonemic awareness in preschool and teaching reading from age four.
A study inChicagoreports that anearly education programfor children from low-income families is estimated to generate $4 to $11 of economic benefits over a child's lifetime for every dollar spent initially on the program, according to a cost-benefit analysis funded by theNational Institutes of Health. The program is staffed by certified teachers and offers "instruction in reading and math, group activities and educational field trips for children ages 3 through 9".[119][120]
There does not appear to be any definitive research about the "magic window" to begin reading instruction.[113] However, there is also no definitive research to suggest that starting early causes any harm. Researcher and educator Timothy Shanahan suggests, "Start teaching reading from the time you have kids available to teach, and pay attention to how they respond to this instruction – both in terms of how well they are learning what you are teaching, and how happy and invested they seem to be. If you haven't started yet, don't feel guilty, just get going".[113]
Some education researchers suggest teaching the various reading components by specific grade levels.[121] One example, from Carol Tolman, Ed.D., and Louisa Moats, Ed.D., corresponds in many respects with the United States Common Core State Standards Initiative.[104]
The percentage of US students who failed to perform at or above the Nation's Report Card basic reading level was 37% for grade 4 (2022), 30% for grade 8 (2022), and 30% for grade 12 (2019).[122] As a result, many secondary school teachers devote some class time to activities related to foundational reading skills.[123]
One survey measured the percentage of K-12 English Language Arts teachers who engaged in foundational reading activities with students (i.e., engaging every student in a class in activities related to the foundational reading skills for more than a few minutes within the past five class lessons).[124]
Secondary ELA teachers in states with reading legislation were significantly more likely to report frequently engaging their students in these activities than secondary ELA teachers in states without such legislation, even though only one-quarter of states with these laws include requirements around secondary ELA instruction.[125]
The path to skilled reading involves learning thealphabetic principle,phonemic awareness,phonics, fluency, vocabulary and comprehension.[126]
British psychologistUta Frithintroduced a three-stage model to acquire skilled reading. Stage one is thelogographic or pictorial stagewhere students attempt to grasp words as objects, an artificial form of reading. Stage two is thephonological stagewhere students learn the relationship between the graphemes (letters) and the phonemes (sounds). Stage three is theorthographic stagewhere students read familiar words more quickly than unfamiliar words, and word length gradually ceases to play a role.[127][128]
Another recognized expert in this area isHarvardprofessorJeanne Sternlicht Chall. In 1983 she published a book entitledStages of Reading Developmentthat proposed six stages.[129][130]
Subsequently, in 2008Maryanne Wolf,UCLA Graduate School of Education and Information Studies, published a book entitledProust and the Squidin which she describes her view of the following five stages of reading development.[131][132]Normally, children will move through these stages at different rates; however, typical ages for children in the United States are shown below.
The emerging pre-reader stage, also known as reading readiness, usually lasts for the first five years of a child's life.[135] Children typically speak their first few words before their first birthday.[136] Educators and parents help learners to develop their skills in listening, speaking, reading and writing.[137]
Reading to children helps them to develop their vocabulary, a love of reading, and phonemic awareness, i.e. the ability to hear and manipulate the individual sounds (phonemes) of oral language. Children will often "read" stories they have memorized. However, in the late 1990s, researchers in the United States found that the traditional way of reading to children made little difference in their later ability to read because children spend relatively little time actually looking at the text. Yet, in a shared reading program with four-year-old children, teachers found that directing children's attention to the letters and words (e.g. verbally or by pointing to the words) made a significant difference in early reading, spelling and comprehension.[138][98][139][140]
Novice readers continue to develop their phonemic awareness, and come to realize that the letters (graphemes) connect to the sounds (phonemes) of the language; this is known as decoding, phonics, and the alphabetic principle.[141] They may also memorize the most common letter patterns and some of the high-frequency words that do not necessarily follow basic phonological rules (e.g. have and who). However, it is a mistake to assume a reader understands the meaning of a text merely because they can decode it. Vocabulary and oral language comprehension are also important parts of text comprehension, as described in the simple view of reading, Scarborough's reading rope, and the active view of reading model. Reading and speech are codependent: reading promotes vocabulary development, and a richer vocabulary facilitates skilled reading.[142]
The transition from the novice reader stage to the decoding stage is marked by a reduction of painful pronunciations and, in its place, the sounds of a smoother, more confident reader.[143] In this phase the reader adds at least 3,000 words to what they can decode. For example, in the English language, readers now learn the variations of the vowel-based rimes (e.g. sat, mat, cat)[144] and vowel pairs (also digraphs) (e.g. rain, play, boat).[145]
As readers move forward, they learn the makeup of morphemes (i.e. stems, roots, prefixes and suffixes). They learn the common morphemes such as "s" and "ed" and see them as "sight chunks". "The faster a child can see that beheaded is be + head + ed", the faster they will become a more fluent reader.
At the beginning of this stage, a child will often be devoting so much mental capacity to the process of decoding that they will have no understanding of the words being read. It is nevertheless an important stage, allowing the child to achieve their ultimate goal of becoming fluent and automatic.
It is in the decoding phase that the child will get to what the story is really about, and learn to re-read a passage when necessary to truly understand it.
The goal of this stage is to "go below the surface of the text", and in the process the reader will build their knowledge of spelling substantially.[146]
Teachers and parents may be tricked by fluent-sounding reading into thinking that a child understands everything that they are reading. As the content of what they can read becomes more demanding, good readers will develop knowledge of figurative language and irony, which helps them to discover new meanings in the text.
Children improve their comprehension when they use a variety of tools such as connecting prior knowledge, predicting outcomes, drawing inferences, and monitoring gaps in their understanding. One of the most powerful moments is when fluent comprehending readers learn to enter into the lives of imagined heroes and heroines.
When teaching comprehension, the educational psychologist G. Michael Pressley says a strong case can be made for instruction in decoding, vocabulary, word knowledge, active comprehension strategies, and self-monitoring.[147]
At the end of this stage, many processes are starting to become automatic, allowing the reader to focus on meaning. With the decoding process almost automatic by this point, the brain learns to integrate more metaphorical, inferential, analogical, background, and experiential knowledge. This stage in learning to read will often last until early adulthood.[148]
At the expert stage, it will usually only take a reader one-half second to read almost any word.[149] The degree to which expert reading will change throughout an adult's life depends on what they read and how much they read.
Science of Reading (SOR) is an interdisciplinary body of scientifically based research about reading.[153][154] Foundational skills such as phonics, decoding, and phonemic awareness are considered to be important parts of the science of reading, but they are not the only ingredients. SOR includes any research and evidence about how humans learn to read, and how reading should be taught. This includes areas such as oral reading fluency, vocabulary, morphology, reading comprehension, text, spelling and pronunciation, thinking strategies, oral language proficiency, working memory training, and written language performance (e.g., cohesion, sentence combining/reducing).[155]
In cognitive science, there is likely no area that has been more successful than the study of reading. Yet, in many countries reading levels are considered low. In the United States, the 2019 Nation's Report Card reported that 34% of grade-four public school students performed at or above the NAEP proficient level (solid academic performance) and 65% performed at or above the basic level (partial mastery of the proficient level skills).[156] As reported in the PIRLS study, the United States ranked 15th out of 50 countries for the reading comprehension levels of fourth-graders.[68][69] In addition, according to the 2011–2018 PIAAC study, out of 39 countries the United States ranked 19th for literacy levels of adults aged 16 to 65; and 16.9% of adults in the United States read at or below level one (out of five levels).[157][76]
Many researchers are concerned that low reading levels are due to how reading is taught. They point to three areas:
The simple view of reading is a scientific theory about reading comprehension.[161] According to the theory, to comprehend what they are reading, students need both decoding skills and oral language (listening) comprehension ability. Neither is enough on its own.[162][163][164]
It is expressed in this equation:
Decoding × Oral Language Comprehension = Reading Comprehension.[165]
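In the research literature this relationship is often written more compactly; a minimal sketch of the same equation, using D for decoding, LC for oral language comprehension, and RC for reading comprehension (these letter choices are illustrative, not a fixed standard):

$$\mathrm{RC} = \mathrm{D} \times \mathrm{LC}$$

The multiplicative form captures the theory's key claim: if either factor is near zero, reading comprehension is near zero, no matter how strong the other factor is.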
Hollis Scarborough published the Reading Rope infographic in 2001, using strands of rope to illustrate the many ingredients involved in becoming a skilled reader. The upper strands represent language comprehension and reinforce one another. The lower strands represent word recognition and work together as the reader becomes accurate, fluent, and automatic through practice. The upper and lower strands all weave together to produce a skilled reader.[166]
More recent research by Laurie E. Cutting and Hollis S. Scarborough has highlighted the importance of executive function processes (e.g. working memory, planning, organization, self-monitoring, and similar abilities) to reading comprehension.[167][168]
The active view of reading (AVR) model, published on May 7, 2021, by Nell K. Duke and Kelly B. Cartwright,[169] offers an alternative to the simple view of reading (SVR) and a proposed update to Scarborough's reading rope (SRR). This model is more complete than the simple view of reading and does a better job of accommodating some of the knowledge about reading developed over the past several decades.
The following chart shows the ingredients in the authors' infographic. In addition, the authors point out that reading is also impacted by text, task, and sociocultural context.
In the field of psychology, automaticity is the ability to do things without occupying the mind with the low-level details required, allowing it to become an automatic response pattern or a habit. When reading is automatic, precious working memory resources can be devoted to considering the meaning of a text.
The unexpected finding from cognitive science is that practice does not make perfect. For a new skill to become automatic, sustained practice beyond the point of mastery is necessary.[171][172]
Several researchers and neuroscientists have attempted to explain how the brain reads. They have written articles and books, and created websites and YouTube videos to help the average consumer.[173][174][175][176]
A study conducted at the Medical University of South Carolina (MUSC) in 2022 indicates that "greater left-brain asymmetry can predict both better and average performance on a foundational level of reading ability, depending on whether the analysis is conducted over the whole brain or in specific regions".[177][178]
Although it is not included in most meta-analytical studies, the sensorimotor cortex is the most active region of the brain during reading.[179]
The occipital and parietal lobes, or more specifically the fusiform gyrus, include the brain's visual word form area (VWFA).[180]
The two major regions of the brain associated with phonological skills are the temporal-parietal region and the perisylvian region.[181]
The perisylvian region, which is the portion of the brain believed to connect Broca's and Wernicke's areas,[182] is another region that is highly active during phonological activities where participants are asked to verbalize known and unknown words.[183]
The inferior frontal region is a much more complex region of the brain, and its association with reading is not necessarily linear, for it is active in several reading-related activities.[184]
The cerebellum, which is not a part of the cerebral cortex, is also believed to play an important role in reading.[185]
Reading is an intensive process in which the eye quickly moves to assimilate the text – seeing just accurately enough to interpret groups of symbols.[186] Understanding the reading process requires an understanding of visual perception and eye movement in reading.
When reading, the eye moves continuously along a line of text but makes short rapid movements (saccades) intermingled with short stops (fixations). There is considerable variability in fixations (the points at which saccades stop) and saccades between readers, and even for the same person reading a single passage of text. When reading, the eye has a perceptual span of about 20 slots. In the best-case scenario, when reading English with the eye fixated on a letter, four to five letters to the right and three to four letters to the left can be clearly identified. Beyond that, only the general shape of some letters can be identified.[187]
Research published in 2019 concluded that the silent reading rate of adults in English for non-fiction is in the range of 175 to 300 words per minute (wpm), and for fiction the range is 200 to 320 words per minute.[188][189]
In the early 1970s, the dual-route hypothesis to reading aloud was proposed, according to which there are two separate mental mechanisms involved in reading aloud, with output from both contributing to the pronunciation of written words.[190][191][192] One mechanism is the lexical route, whereby skilled readers can recognize a word as part of their sight vocabulary. The other is the nonlexical or sublexical route, in which the reader "sounds out" (decodes) written words.[192][193]
There is robust evidence that saying a word out loud makes it more memorable than simply reading it silently or hearing someone else say it, because self-reference and self-control over speaking produce more engagement with the words. The memory benefit of "hearing oneself" is referred to as the production effect.[194]
Evidence-based reading instruction refers to practices having research evidence showing their success in improving reading achievement.[195][196][197][198][199] It is related to evidence-based education.
A systematic review and meta-analysis was conducted on the advantages of reading from paper vs. screens. It found no difference in reading times; however, reading from paper has a small advantage in reading performance and metacognition.[200]
According to some researchers, having a highly qualified teacher in every classroom is an educational necessity, and a 2023 study of 512 classroom teachers in 112 schools showed that teachers' knowledge of language and literacy reliably predicted students' reading foundational skills scores, but not reading comprehension scores.[201] Yet some teachers, even after obtaining a master's degree in education, think they lack the necessary knowledge and skills to teach all students how to read.[202] A 2019 survey of K-2 and special education teachers found that only 11 percent said they felt "completely prepared" to teach early reading after finishing their preservice programs. And a 2021 study found that most U.S. states do not measure teachers' knowledge of the 'science of reading'.[203]
A survey in the United States reported that 70% of teachers believe in a balanced literacy approach to teaching reading – however, balanced literacy "is not systematic, explicit instruction".[202] In an Education Week Research Center survey of more than 530 professors of reading instruction, only 22 percent said their philosophy of teaching early reading centered on explicit, systematic phonics with comprehension as a separate focus.[202]
As of October 2024, after Mississippi became the only state to improve reading results between 2017 and 2019,[204] 40 U.S. states and the District of Columbia have since passed laws or implemented new policies related to evidence-based reading instruction.[58] As a result, many schools are moving away from balanced literacy programs that encourage students to guess a word, and are introducing phonics, where students learn to "decode" (sound out) words.[205]
As more state legislatures seek to pass science of reading legislation, some teachers' unions are mounting opposition, citing concerns about mandates that would limit teachers' professional autonomy in the classroom, uneven implementation, unreasonable timelines, and the amount of time and compensation teachers receive for additional training.[206]
In 2021, the Department of Education and Early Childhood Development of New Brunswick appears to have been the first in Canada to revise its K-2 reading curriculum based on "research-based instructional practice". For example, it replaced the various cueing systems with "mastery in the consolidated alphabetic to skilled reader phase".[207][208]
Some non-profit organizations, such as the Center for Development and Learning (Louisiana) and the Reading League (New York State), offer training programs for teachers to learn about the science of reading.[209][210][211][212]
Educators have debated for years about which method is best for teaching reading in the English language. There are three main methods: phonics, whole language and balanced literacy. There are also a variety of other areas and practices such as phonemic awareness, fluency, reading comprehension, sight words and sight vocabulary, the three-cueing system (the searchlights model in England), guided reading, shared reading, and leveled reading. Each practice is employed in different manners depending on the country and the specific school division.
In 2001, some researchers reached two conclusions: 1) "mastering the alphabetic principle is essential" and 2) "instructional techniques (namely, phonics) that teach this principle directly are more effective than those that do not". However, while they make it clear they have some fundamental disagreements with some of the claims made by whole-language advocates, some principles of whole-language have value such as the need to ensure that students are enthusiastic about books and eager to learn to read.[78]
Phonics emphasizes the alphabetic principle – the idea that letters (graphemes) represent the sounds of speech (phonemes).[215] It is taught in a variety of ways; some are systematic and others are unsystematic. Unsystematic phonics teaches phonics on a "when needed" basis and in no particular sequence. Systematic phonics uses a planned, sequential introduction of a set of phonic elements along with explicit teaching and practice of those elements. The National Reading Panel (NRP) concluded that systematic phonics instruction is more effective than unsystematic phonics or non-phonics instruction.
Phonics approaches include analogy phonics, analytic phonics, embedded phonics with mini-lessons, phonics through spelling, and synthetic phonics.[216][217][218][78][219]
According to a 2018 review of research related to English-speaking poor readers, phonics training is effective for improving literacy-related skills, particularly the fluent reading of words and non-words, and the accurate reading of irregular words.[220]
In addition, phonics produces higher achievement for all beginning readers, and the greatest improvement is experienced by students who are at risk of failing to learn to read. While some children can infer these rules on their own, some need explicit instruction on phonics rules. Some phonics instruction has marked benefits such as the expansion of a student's vocabulary. Overall, children who are directly taught phonics are better at reading, spelling, and comprehension.[221]
A challenge in teaching phonics is that in some languages, such as English, complex letter-sound correspondences can confuse beginning readers. For this reason, it is recommended that teachers of English reading begin by introducing the "most frequent sounds" and the "common spellings", and save the less frequent sounds and complex spellings for later (e.g. the sounds /s/ and /t/ before /v/ and /w/; and the spellings cake before eight, and cat before duck).[78][222][223]
Phonics is gaining worldwide acceptance.
Phonics is taught in many different ways and it is often taught together with some of the following: oral language skills,[224][225] concepts about print,[226] phonological awareness, phonemic awareness, phonology, oral reading fluency, vocabulary, syllables, reading comprehension, spelling, word study,[227][228][229] cooperative learning, multisensory learning, and guided reading. Phonics is also often featured in discussions about the science of reading[230][231] and evidence-based practices.
The National Reading Panel (U.S. 2000) is clear that "systematic phonics instruction should be integrated with other reading instruction to create a balanced reading program".[232] It suggests that phonics be taught together with phonemic awareness, oral fluency, vocabulary and comprehension. Researcher and educator Timothy Shanahan, a member of that panel, recommends that primary students receive 60–90 minutes per day of explicit, systematic literacy instruction time, and that it be divided equally between a) words and word parts (e.g. letters, sounds, decoding and phonemic awareness), b) oral reading fluency, c) reading comprehension, and d) writing.[233] Furthermore, he states that "the phonemic awareness skills found to give the greatest reading advantage to kindergarten and first-grade children are segmenting and blending".[234]
The Ontario Association of Deans of Education (Canada) published research Monograph #37, entitled Supporting early language and literacy, with suggestions for parents and teachers in helping children prior to grade one. It covers the areas of letter names and letter-sound correspondence (phonics), as well as conversation, play-based learning, print, phonological awareness, shared reading, and vocabulary.[235]
Some researchers report that teaching reading without teaching phonics is harmful to large numbers of students, yet not all phonics teaching programs produce effective results. The reason is that the effectiveness of a program depends on using the right curriculum together with the appropriate approach to instruction techniques, classroom management, grouping, and other factors.[236]Louisa Moats, a teacher, psychologist and researcher, has long advocated for reading instruction that is direct, explicit and systematic, covering phoneme awareness, decoding, comprehension, literature appreciation, and daily exposure to a variety of texts.[237]She maintains that "reading failure can be prevented in all but a small percentage of children with serious learning disorders. It is possible to teach most students how to read if we start early and follow the significant body of research showing which practices are most effective".[238]
Interest in evidence-based education appears to be growing.[239] In 2021, Best Evidence Encyclopedia (BEE) released a review of research on 51 different programs for struggling readers in elementary schools.[240] Many of the programs used phonics-based teaching and/or one or more of the following: cooperative learning, technology-supported adaptive instruction (see educational technology), metacognitive skills, phonemic awareness, word reading, fluency, vocabulary, multisensory learning, spelling, guided reading, reading comprehension, word analysis, structured curriculum, and balanced literacy (non-phonetic approach).
The BEE review concludes that a) outcomes were positive for one-to-one tutoring, b) outcomes were positive, but not as large, for one-to-small group tutoring, c) there were no differences in outcomes between teachers and teaching assistants as tutors, d) technology-supported adaptive instruction did not have positive outcomes, e) whole-class approaches (mostly cooperative learning) and whole-school approaches incorporating tutoring obtained outcomes for struggling readers as large as those found for one-to-one tutoring, and benefited many more students, and f) approaches mixing classroom and school improvements, with tutoring for the most at-risk students, have the greatest potential for the largest numbers of struggling readers.[240]
Robert Slavin, of BEE, goes so far as to suggest that states should "hire thousands of tutors" to support students scoring far below grade level – particularly in elementary school reading. Research, he says, shows "only tutoring, both one-to-one and one-to-small group, in reading and mathematics, had an effect size larger than +0.10 ... averages are around +0.30", and "well-trained teaching assistants using structured tutoring materials or software can obtain outcomes as good as those obtained by certified teachers as tutors".[241][242]
The What Works Clearinghouse allows users to see the effectiveness of specific programs. For example, as of 2020 it has data on 231 literacy programs. Filtering these by grade 1 only, all class types, all school types, all delivery methods, all program types, and all outcomes yields 22 programs, whose details can then be viewed and compared with one another.[243]
Evidence for ESSA[244] (Center for Research and Reform in Education)[245] offers free up-to-date information on current PK–12 programs in reading, writing, math, science, and others that meet the standards of the Every Student Succeeds Act (U.S.).[246]
ProvenTutoring.org,[247] a non-profit organization, is a resource for educators interested in research-proven tutoring programs. The programs it lists are proven effective in rigorous research as defined in the 2015 Every Student Succeeds Act. The Center for Research and Reform in Education at Johns Hopkins University provides the technical support to inform program selection.[245]
Systematic phonics is not one specific method of teaching phonics; it is a term used to describe phonics approaches that are taught explicitly and in a structured, systematic manner. They are systematic because the letters and the sounds they relate to are taught in a specific sequence, as opposed to incidentally or on a "when needed" basis.[249]
The National Reading Panel (NRP) in the U.S. concluded that systematic phonics instruction is more effective than unsystematic phonics or non-phonics instruction. The NRP also found that systematic phonics instruction is effective (with varying degrees) when delivered through one-to-one tutoring, small groups, and teaching classes of students; and is effective from kindergarten onward, the earlier the better. It helps significantly with word-reading skills and reading comprehension for kindergartners and 1st graders as well as for older struggling readers and reading-disabled students. Benefits to spelling were positive for kindergartners and 1st graders but not for older students.[250]
Systematic phonics is sometimes mischaracterized as "skill and drill" with little attention to meaning. However, researchers point out that this impression is false. Teachers can use engaging games or materials to teach letter-sound connections, and it can also be incorporated with the reading of meaningful text.[251]
Phonics can be taught systematically in a variety of ways, such as analogy phonics, analytic phonics, phonics through spelling, and synthetic phonics. However, their effectiveness varies considerably because the methods differ in such areas as the range of letter-sound coverage, the structure of the lesson plans, and the time devoted to specific instructions.[252]
Systematic phonics has gained increased acceptance in different parts of the world since the completion of three major studies into teaching reading: one in the US in 2000,[253][254] another in Australia in 2005,[255] and the other in the UK in 2006.[256]
In 2009, the Department for Education in the UK published a curriculum review for England that added support for systematic phonics.[257] In fact, systematic phonics in the UK is known as synthetic phonics.[258]
Beginning as early as 2014, several states in the United States have changed their curricula to include systematic phonics instruction in elementary school.[259][260][261][262]
In 2018, the State Government of Victoria, Australia, published a website containing a comprehensive Literacy Teaching Toolkit including Effective Reading Instruction, Phonics, and Sample Phonics Lessons.[263]
Analytic phonics does not involve pronouncing individual sounds (phonemes) in isolation and blending the sounds, as is done in synthetic phonics. Rather, it is taught at the word level and students learn to analyze letter-sound relationships once the word is identified. For example, students analyze letter-sound correspondences such as the ou spelling of /aʊ/ in shrouds. Also, students might be asked to practice saying words with similar sounds such as ball, bat and bite. Furthermore, students are taught consonant blends (separate, adjacent consonants) as units, such as break or shrouds.[264][265]
Analogy phonics is a particular type of analytic phonics in which the teacher has students analyze phonic elements according to the speech sounds (phonograms) in the word. For example, a type of phonogram (known in linguistics as a rime) is composed of the vowel and the consonant sounds that follow it (e.g. in the words cat, mat and sat, the rime is "at"). Teachers using the analogy method may have students memorize a bank of phonograms, such as -at or -am, or use word families (e.g. can, ran, man, or may, play, say).[266][264]
There have been studies on the effectiveness of instruction using analytic phonics vs. synthetic phonics. Johnston et al. (2012) conducted experimental research studies that tested the effectiveness of phonics learning instruction among 10-year-old boys and girls.[267]They used comparative data from the Clackmannanshire Report and chose 393 participants to compare synthetic phonics instruction and analytic phonics instruction.[268][267]The boys taught by the synthetic phonics method had better word reading than the girls in their classes, and their spelling and reading comprehension was as good. On the other hand, with analytic phonics teaching, although the boys performed as well as the girls in word reading, they had inferior spelling and reading comprehension. Overall, the group taught by synthetic phonics had better word reading, spelling, and reading comprehension. And, synthetic phonics did not lead to any impairment in the reading of irregular words.[267]
Embedded phonics, also known as incidental phonics, is the type of phonics instruction used in whole language programs. It is not systematic phonics.[269] Although phonics skills are de-emphasized in whole language programs, some teachers include phonics "mini-lessons" when students struggle with words while reading from a book. Short lessons are included based on phonics elements the students are having trouble with, or on a new or difficult phonics pattern that appears in a class reading assignment. The focus on meaning is generally maintained, but the mini-lesson provides some time for focus on individual sounds and the letters that represent them. Embedded phonics is different from other methods because instruction is always in the context of literature rather than in separate lessons about distinct sounds and letters; and skills are taught when an opportunity arises, not systematically.[270][271]
For some teachers, phonics through spelling is a method of teaching spelling by using the sounds (phonemes).[272] However, it can also be a method of teaching reading by focusing on the sounds and their spelling (i.e. phonemes and syllables). It is taught systematically with guided lessons conducted in a direct and explicit manner, including appropriate feedback. Sometimes mnemonic cards containing individual sounds are used to allow the student to practice saying the sounds that are related to a letter or letters (e.g. a, e, i, o, u). Accuracy comes first, followed by speed. The sounds may be grouped by categories such as vowels that sound short (e.g. c-a-t and s-i-t). When the student is comfortable recognizing and saying the sounds, the following steps might be followed: a) the tutor says a target word and the student repeats it out loud, b) the student writes down each individual sound (letter) until the word is completely spelled, saying each sound as it is written, and c) the student says the entire word out loud. An alternate method would be to have the student use mnemonic cards to sound out (spell) the target word.
Typically, the instruction starts with sounds that have only one letter and simple CVC words such as sat and pin. Then it progresses to longer words, and sounds with more than one letter (e.g. hear and day), and perhaps even syllables (e.g. wa-ter). Sometimes the student practices by saying (or sounding out) cards that contain entire words.[273]
Synthetic phonics, also known as blended phonics, is a systematic phonics method employed to teach students to read by sounding out the letters and then blending the sounds to form the word. This method involves learning how letters or letter groups represent individual sounds, and that those sounds are blended to form a word. For example, shrouds would be read by pronouncing the sounds for each spelling – sh, r, ou, d, s (IPA /ʃ, r, aʊ, d, z/) – then blending those sounds orally to produce the spoken word: sh – r – ou – d – s = shrouds (IPA /ʃraʊdz/). The goal of a synthetic phonics instructional program is that students identify the sound-symbol correspondences and blend their phonemes automatically. Since 2005, synthetic phonics has become the accepted method of teaching reading (by phonics instruction) in England, Scotland and Australia.[274][275][276][277]
The 2005 Rose Report from the UK concluded that systematic synthetic phonics was the most effective method for teaching reading. It also suggests that the "best teaching" includes a brisk pace, engaging children's interest with multi-sensory activities and stimulating resources, praise for effort and achievement, and, above all, the full backing of the headteacher.[278]
Synthetic phonics also has considerable support in some states in the U.S.[254] and some support from expert panels in Canada.[279]
In the US, a pilot program using the Core Knowledge Early Literacy program, which takes this type of phonics approach, showed significantly higher results in K–3 reading compared with comparison schools.[280] In addition, several states, such as California, Ohio, New York and Arkansas, are promoting the principles of synthetic phonics (see synthetic phonics in the United States).
Phonemic awareness is the process by which the phonemes (sounds of oral language) are heard, interpreted, understood and manipulated – unrelated to their grapheme (written language). It is a sub-set of phonological awareness, which includes the manipulation of rhymes, syllables, and onsets and rimes, and is most prevalent in alphabetic systems.[281] The specific part of speech depends on the writing system employed. The National Reading Panel (NRP) concluded that phonemic awareness improves a learner's ability to learn to read. When teaching phonemic awareness, the NRP found that better results were obtained with focused and explicit instruction of one or two elements, over five or more hours, in small groups, and using the corresponding graphemes (letters).[282] See also speech perception. As mentioned earlier, some researchers feel that the most effective way of teaching phonemic awareness is through segmenting and blending, a key part of synthetic phonics.[234]
A critical aspect of reading comprehension is vocabulary development.[283] When a reader encounters an unfamiliar word in print and decodes it to derive its spoken pronunciation, the reader understands the word if it is in the reader's spoken vocabulary. Otherwise, the reader must derive the meaning of the word using another strategy, such as context. If the development of the child's vocabulary is impeded by things such as ear infections that prevent the child from hearing new words consistently, then the development of reading will also be impaired.[284]
Sight words (i.e. high-frequency or common words), sometimes called the look-say or whole-word method, are not a part of the phonics method.[285] They are usually associated with whole language and balanced literacy, where students are expected to memorize common words such as those on the Dolch word list and the Fry word list (e.g. a, be, call, do, eat, fall, gave, etc.).[286][287] The supposition (in whole language and balanced literacy) is that students will learn to read more easily if they memorize the most common words they will encounter, especially words that are not easily decoded (i.e. exceptions).
On the other hand, using sight words as a method of teaching reading in English is seen as being at odds with the alphabetic principle and treating English as though it were a logographic language (e.g. Chinese or Japanese).[288]
In addition, according to research, whole-word memorization is "labor-intensive", requiring on average about 35 trials per word.[289] Also, phonics advocates say that most words are decodable, so comparatively few words have to be memorized. And because a child will over time encounter many low-frequency words, "the phonological recoding mechanism is a very powerful, indeed essential, mechanism throughout reading development".[78] Furthermore, researchers suggest that teachers who withhold phonics instruction to make it easier on children "are having the opposite effect" by making it harder for children to gain basic word-recognition skills. They suggest that learners should focus on understanding the principles of phonics so they can recognize the phonemic overlaps among words (e.g. have, had, has, having, haven't, etc.), making it easier to decode them all.[290][291][292]
Sight vocabulary is a part of the phonics method. It describes words that are stored in long-term memory and read automatically. Skilled fully-alphabetic readers learn to store words in long-term memory without memorization (i.e. a mental dictionary), making reading and comprehension easier. "Once you know the sound-based way to decode, your mind learns what words look like, even if you're not especially trying to do so".[293] The process, called orthographic mapping, involves decoding, crosschecking, mental marking and rereading. It takes significantly less time than memorization. This process works for fully-alphabetic readers when reading simple decodable words from left to right through the word. Irregular words pose more of a challenge, yet research in 2018 concluded that "fully-alphabetic students" learn irregular words more easily when they use a process called hierarchical decoding. In this process, students, rather than decoding from left to right, are taught to focus attention on the irregular elements such as a vowel digraph and a silent e; for example, break (b – r – ea – k), height (h – eigh – t), touch (t – ou – ch), and make (m – a – ke). Consequently, they suggest that teachers and tutors should focus on "teaching decoding with more advanced vowel patterns before expecting young readers to tackle irregular words". Others recommend teaching the high-frequency words (i.e. the Fry word list) by "focusing on the sound-symbol relations" (i.e. phonics).[289][294][295]
Fluency is the ability to read orally with speed, accuracy, and vocal expression. The ability to read fluently is one of several critical factors necessary for reading comprehension. If a reader is not fluent, it may be difficult to remember what has been read and to relate the ideas expressed in the text to their background knowledge. This accuracy and automaticity of reading serves as a bridge between decoding and comprehension.[296]
One way to improve fluency is rereading (the student rereads a passage aloud several times with vocal expression). Another is assisted reading (the student visually reads a text while simultaneously hearing someone else fluently read the same text).[297]
The NRP describes reading comprehension as a complex cognitive process in which a reader intentionally and interactively engages with the text. The science of reading says that reading comprehension is heavily dependent on word recognition (i.e., phonological awareness, decoding, etc.) and oral language comprehension (i.e., background knowledge, vocabulary, etc.).[298] Phonological awareness and rapid naming predict reading comprehension in second grade, but oral language skills account for an additional 13.8% of the variance.[299]
It has also been found that sustained content literacy intervention instruction that gradually builds thematic connections may help young children transfer their knowledge to related topics, leading to improved comprehension.[300]
The American educator Eric Donald "E. D." Hirsch Jr. suggests that students need to learn about something in order to read well.[301] However, some researchers say reading comprehension instruction has become "content agnostic", focused on skill practice (such as "finding the main idea"), to the detriment of learning about science, history, and other disciplines. Instead, they say teachers should find ways to integrate content knowledge with reading and writing instruction. One approach is to merge the two – to embed literacy instruction into social studies and science. Another approach is to build content knowledge into reading classes, often called "high-quality" or "content-rich" curricula.[302][303] However, according to Natalie Wexler, in her book The Knowledge Gap, "making the shift to knowledge is as much about changing teachers' beliefs and daily practice as about changing the materials they're supposed to use".[304]
Researcher and educator Timothy Shanahan believes the most effective way to improve reading comprehension skills is to teach students to summarize, develop an understanding of text structure, and paraphrase.[305]
Evidence supports the strong synergy between reading (decoding) andspelling(encoding), especially for children in kindergarten or grade one and elementary school students at risk for literacy difficulties. Students receiving encoding instruction and guided practice that included using (a) manipulatives such as letter tiles to learn phoneme-grapheme relationships and words and (b) writing phoneme-grapheme relationships and words made from these correspondences significantly outperformed contrast groups not receiving encoding instruction.[306][307]
Research supports the use of embedded, picture mnemonic (memory support) alphabet cards when teaching letters and sounds, but not words.[308][309][310]
Whole language has the reputation of being a meaning-based method of teaching reading that emphasizes literature and text comprehension. It discourages any significant use of phonics, if at all.[312] Instead, it trains students to focus on words, sentences and paragraphs as a whole rather than letters and sounds. Students are taught to use context and pictures to "guess" words they do not recognize, or even just skip them and read on. It aims to make reading fun, yet many students struggle to figure out the specific rules of the language on their own, which causes the student's decoding and spelling to suffer.
The following are some features of the whole language philosophy:
As of 2020, whole language is widely used in the US and Canada (often as balanced literacy); however, in some US states and many other countries, such as Australia and the United Kingdom, it has lost favor or been abandoned because it is not supported by evidence.[317][318][319] Some notable researchers have clearly stated their disapproval of whole language and whole-word teaching. In his 2009 book, Reading in the Brain, cognitive neuroscientist Stanislas Dehaene said "cognitive psychology directly refutes any notion of teaching via a 'global' or 'whole language' method". He goes on to talk about "the myth of whole-word reading", saying it has been refuted by recent experiments. "We do not recognize a printed word through a holistic grasping of its contours, because our brain breaks it down into letters and graphemes".[311] In addition, cognitive neuroscientist Mark Seidenberg, in his 2017 book Language at the Speed of Sight, refers to whole language as a "theoretical zombie" because it persists despite a lack of supporting evidence.[320][321][317]
Balanced literacy is not well defined; however, it is intended as a method that combines elements of both phonics and whole language. According to a survey in 2010, 68% of elementary school teachers in the United States profess to use balanced literacy.[323] However, only 52% of teachers in the United States include phonics in their definition of balanced literacy.[322]
The National Reading Panel concluded that phonics must be integrated with instruction in phonemic awareness, vocabulary, fluency, and comprehension. And, some studies indicate that "the addition of language activities and tutoring to phonics produced larger effects than any of these components in isolation". They suggest that this may be a constructive way to view balanced reading instruction.[324]
However, balanced literacy has received criticism from researchers and others suggesting that, in many instances, it is merely whole language by another name.[325][326][327][328][329]
According to phonics advocate and cognitive neuroscientist Mark Seidenberg, balanced literacy allows educators to defuse the reading wars while not making specific recommendations for change.[221] He goes on to say that, in his opinion, the high number of struggling readers in the United States is the result of how teachers are taught to teach reading.[330][108][331][332] He also says that struggling readers should not be encouraged to skip a challenging word, nor rely on pictures or semantic and syntactic cues to "guess at" a challenging word. Instead, they should use evidence-based decoding methods such as systematic phonics.[333][334][335]
Structured literacy has many of the elements of systematic phonics and few of the elements of balanced literacy.[336] It is defined as explicit, systematic teaching that focuses on phonological awareness, word recognition, phonics and decoding, spelling, and syntax at the sentence and paragraph levels. It is considered to be beneficial for all early literacy learners, especially those with dyslexia.[337][338][339]
According to the International Dyslexia Association, structured literacy contains the elements of phonology and phonemic awareness, sound-symbol association (the alphabetic principle and phonics), syllables, morphology, syntax, and semantics. The elements are taught using methods that are systematic, cumulative, explicit, multisensory, and use diagnostic assessment.[340]
The three-cueing system (the searchlights model in England) is a theory that has been circulating since the 1980s, yet it is not supported by research.[341] Its roots are in the theories proposed in the 1960s by Ken Goodman and Marie Clay that eventually became whole language, reading recovery, and guided reading (e.g., the Fountas and Pinnell early reading programs).[342] As of 2010, 75% of teachers in the United States teach the three-cueing system.[323] It proposes that children who are stuck on a word should use various "cues" to figure it out and determine (guess) its meaning. The "meaning cues" are semantic ("does it make sense in the context?"), syntactic (is it a noun, verb, etc.?) and graphophonic (what are the letter-sound relationships?). It is also known as MSV (Meaning, Sentence structure/syntax, and Visual information such as the letters in the words).
According to some, three-cueing is not the most effective way for beginning readers to learn how to decode printed text.[343]While a cueing system does help students to "make better guesses", it does not help when the words become more sophisticated; and it reduces the amount of practice time available to learn essential decoding skills. They also say that students should first decode the word, "then they can use context to figure out the meaning of any word they don't understand".
Consequently, researchers such as cognitive neuroscientist Mark Seidenberg and literacy researcher Timothy Shanahan do not support the theory. They say the three-cueing system's value in reading instruction "is a magnificent work of the imagination", and that it developed not because teachers lack integrity, commitment, motivation, sincerity, or intelligence, but because they "were poorly trained and advised" about the science of reading.[344][345] In England, the simple view of reading and synthetic phonics are intended to replace "the searchlights multi-cueing model".[346][347] On the other hand, some researchers suggest that "context" can be useful, not to guess a word, but to confirm a word after it has been phonetically decoded.[348]
The three Ps approach (pause, prompt, praise) is used by teachers, tutors, and parents to guide oral reading practice with a struggling reader.[349] For some, it is merely a variation of the above-mentioned three-cueing system.
However, for others it is very different.[350] For example: when a student encounters a word they do not know or gets it wrong, the three steps are: 1) pause to see if they can fix it themselves, even letting them read on a little, 2) prompt them with strategies to find the correct pronunciation, and 3) praise them directly and genuinely. In the prompt step, the tutor does not suggest the student skip the word or guess the word based on the pictures or the first sound. Instead, they encourage students to use their decoding training to sound out the word and use the context (meaning) to confirm they have found the correct word.
Guided reading is small-group reading instruction that is intended to allow for the differences in students' reading abilities.[351] While they are reading, students are encouraged to use strategies from the three-cueing system, the searchlights model, or MSV.
It is no longer supported by the Primary National Strategy in England, as synthetic phonics is the officially recognized method for teaching reading.[352][353]
In the United States, guided reading is part of the Reading Workshop model of reading instruction.[354]
The reading workshop model provides students with a collection of books, allows them the choice of what to read, limits students' reading to texts that can be easily read by them, provides teaching through mini-lessons, and monitors and supports reading comprehension development through one-on-one teacher-student conferences. Some reports state that it is "unlikely to lead to literacy success" for all students, particularly those lacking foundational skills.[355][356]
Shared (oral) reading is an activity whereby the teacher and students read from a shared text that is determined to be at the students' reading level.
Leveled reading involves students reading from "leveled books" at an appropriate reading level. A student who struggles with a word is encouraged to use a cueing system (e.g. three-cueing, the searchlights model, or MSV) to guess its meaning. Many systems purport to gauge students' reading levels using scales incorporating numbers, letters, colors, and Lexile readability scores.[357]
Silent reading (and self-teaching) is a common practice in elementary schools. A 2007 study in the United States found that, on average, only 37% of class time was spent on active reading instruction or practice, and the most frequent activity was students reading silently. Based on the limited available studies on silent reading, the NRP concluded that independent silent reading did not prove an effective practice when used as the only type of reading instruction to develop fluency and other reading skills – particularly with students who have not yet developed critical alphabetic and word reading skills.[358]
Other studies indicate that, unlike silent reading, "oral reading increases phonological effects".
According to some, the classroom method called DEAR (Drop everything and read) is not the best use of classroom time for students who are not yet fluent.[359] However, according to the self-teaching hypothesis, when fluent readers practice decoding words while reading silently, they learn what whole words look like (spelling), leading to improved fluency and comprehension.[360][361]
The suggestion is: "if some students are fluent readers, they could read silently while the teacher works with the struggling readers".
Languages such as Chinese and Japanese are normally written (fully or partly) in logograms (hanzi and kanji, respectively), which represent a whole word or morpheme with a single character. There are a large number of characters, and the sound that each makes must be learned directly or from other characters that contain "hints" in them. For example, in Japanese, the on-reading of the kanji 民 is min, and the related kanji 眠 shares the same on-reading, min: the right-hand part shows the character's pronunciation. However, this is not true for all characters. Kun-readings, on the other hand, have to be learned and memorized, as there is no way to tell the reading from the character itself.
Ruby characters are used in textbooks to help children learn the sounds that each logogram makes. These are written in a smaller size, using an alphabetic or syllabic script. For example, hiragana is typically used in Japanese, and the pinyin romanization into Latin alphabet characters is used in Chinese.
The examples above each spell the word kanji, which is made up of two kanji characters: 漢 (kan, written in hiragana as かん), and 字 (ji, written in hiragana as じ).
Textbooks are sometimes edited as a cohesive set across grades so that children will not encounter characters they are not yet expected to have learned.
For decades, the merits of phonics vs. whole language have been debated. This is sometimes referred to as the reading wars.[362][363]
Phonics was a popular way to learn reading in the 19th century. William Holmes McGuffey (1800–1873), an American educator, author, and Presbyterian minister who had a lifelong interest in teaching children, compiled the first four of the McGuffey Readers in 1836.[364]
In 1841, Horace Mann, the Secretary of the Massachusetts Board of Education, advocated for a whole-word method of teaching reading to replace phonics. Others advocated for a return to phonics, such as Rudolf Flesch in his book Why Johnny Can't Read (1955).
The whole-word method received support from Kenneth J. Goodman, who wrote an article in 1967 entitled Reading: A psycholinguistic guessing game. In it, he says efficient reading is the result of the "skill in selecting the fewest, most productive cues necessary to produce guesses which are right the first time".[365] Although not supported by scientific studies, the theory became very influential as the whole language method.[366][321] Since the 1970s, some whole language supporters, such as Frank Smith, have been unyielding in arguing that phonics should be taught little, if at all.[367]
Yet other researchers say instruction in phonics and phonemic awareness is "critically important" and "essential" to developing early reading skills.[333][368][78] In 2000, the National Reading Panel (U.S.) identified five ingredients of effective reading instruction, of which phonics is one; the other four are phonemic awareness, fluency, vocabulary and comprehension.[126] Reports from other countries, such as the Australian report on Teaching Reading (2005)[255] and the U.K. Independent review of the teaching of early reading (Rose Report, 2006), have also supported the use of phonics.
Some notable researchers, such as Stanislas Dehaene and Mark Seidenberg, have clearly stated their disapproval of whole language.[369][370]
Furthermore, a 2017 study in the UK that compared teaching with phonics vs. teaching whole written words concluded that phonics is more effective, saying "our findings suggest that interventions aiming to improve the accuracy of reading aloud and/or comprehension in the early stages of learning should focus on the systematicity present in print-to-sound relationships, rather than attempting to teach direct access to the meanings of whole written words".[371]
More recently, some educators have advocated for the theory of balanced literacy, purported to combine phonics and whole language, yet not necessarily in a consistent or systematic way. It may include elements such as word study and phonics mini-lessons, differentiated learning, cueing, leveled reading, shared reading, guided reading, independent reading, and sight words.[372][373][374][375] According to a survey in 2010, 68% of K–2 teachers in the United States practice balanced literacy; however, only 52% of teachers included phonics in their definition of balanced literacy. In addition, 75% of teachers teach the three-cueing system (i.e., meaning/structure/visual or semantic/syntactic/graphophonic), which has its roots in whole language.[323][376]
In addition, some phonics supporters assert that balanced literacy is merely whole language by another name.[377] And critics of whole language and skeptics of balanced literacy, such as neuroscientist Mark Seidenberg, state that struggling readers should not be encouraged to skip words they find puzzling or rely on semantic and syntactic cues to guess words.[333][327][378]
Over time, a growing number of countries and states have put greater emphasis on phonics and other evidence-based practices (see Phonics practices by country or region).
According to the report by the US National Reading Panel (NRP) in 2000,[126][379] the elements required for proficient reading of alphabetic languages are phonemic awareness, phonics, fluency,[296] vocabulary,[283] and text comprehension. In non-Latin languages, proficient reading does not necessarily require phonemic awareness, but rather an awareness of the individual parts of speech, which may include the whole word (as in Chinese characters) or syllables (as in Japanese), as well as others, depending on the writing system being employed.
The Rose Report, from the Department for Education in England, makes it clear that, in its view, systematic phonics, specifically synthetic phonics, is the best way to ensure that children learn to read; such that it is now the law.[256][380][381][382] In 2005, the government of Australia published a report stating "The evidence is clear ... that direct systematic instruction in phonics during the early years of schooling is an essential foundation for teaching children to read".[383] Phonics has been gaining acceptance in many other countries, as can be seen at Phonics practices by country or region.
Other important elements are: rapid automatized naming (RAN),[384][385] a general understanding of the orthography of the language, and practice.
Difficulties in reading typically involve difficulty with one or more of the following: decoding, reading rate, reading fluency, or reading comprehension.
Brain activity in young and older children can be used to predict future reading skills. Cross-modal mapping between the orthographic and phonologic areas in the brain is critical in reading. Thus, the amount of activation in the left dorsal inferior frontal gyrus while performing reading tasks can be used to predict later reading ability and advancement. Young children with higher phonological word characteristic processing have significantly better reading skills later on than older children who focus on whole-word orthographic representation.[388]
Difficulty with decoding is marked by having not acquired thephoneme-graphememapping concept. One specific disability characterized by poor decoding isdyslexia, a brain-based learning disability that specifically impairs a person's ability to read.[389]These individuals typically read at levels significantly lower than expected despite having normal intelligence. It can also be inherited in some families, and recent studies have identified a number of genes that may predispose an individual to developing dyslexia. Although the symptoms vary from person to person, common characteristics among people with dyslexia are difficulty with spelling, phonological processing (the manipulation of sounds), and/or rapid visual-verbal responding.[389]Adults can have either developmental dyslexia[390][391][392][393]oracquired dyslexiawhich occurs after abrain injury,stroke[394][395]ordementia.[396][397][391][392][394][395]
Individuals with reading rate difficulties tend to have accurate word recognition and normal comprehension abilities, but their reading speed is below grade level.[398] Strategies such as guided reading (guided, repeated oral-reading instruction) may help improve a reader's reading rate.[399]
Many studies show that increasing reading speed improves comprehension.[400] Reading speed takes many years to reach adult levels. According to Carver (1990), children's reading speed increases throughout the school years. On average, from grade 2 to college, the reading rate increases by 14 standard-length words per minute each year (where one standard-length word is defined as six characters in text, including punctuation and spaces).[401]
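Carver's convention can be made concrete with a little arithmetic. The following minimal sketch (the function names and the sample passage are illustrative assumptions for this example, not part of Carver's work) computes a rate in standard-length words per minute:

```python
# One "standard-length word" (Carver, 1990) = six characters of text,
# including punctuation and spaces. Names here are illustrative only.

def standard_words(text: str) -> float:
    """Number of standard-length words: total characters divided by six."""
    return len(text) / 6.0

def reading_rate_swpm(text: str, minutes: float) -> float:
    """Reading rate in standard-length words per minute."""
    return standard_words(text) / minutes

passage = "The quick brown fox jumps over the lazy dog. " * 40  # 1800 characters
print(round(reading_rate_swpm(passage, minutes=2.0)))  # 300 standard words / 2 min = 150

# Under Carver's estimate, a reader gaining 14 swpm per school year from
# grade 2 onward would gain roughly 14 * 10 = 140 swpm by college.
```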
Scientific studies have demonstrated that speed reading – defined here as capturing and decoding words faster than 900 wpm – is not feasible given the limits set by the anatomy of the eye.[402]
Individuals with reading fluency difficulties fail to maintain a fluid, smooth pace when reading. Strategies used for overcoming reading rate difficulties are also useful in addressing reading fluency issues.[379]
Individuals with reading comprehension difficulties are commonly described as poor comprehenders.[403] They have normal decoding skills as well as a fluid rate of reading, but have difficulty comprehending text when reading. The simple view of reading holds that reading comprehension requires both decoding skills and oral language comprehension ability.[404]
Increasing vocabulary knowledge, listening skills, and teaching basic comprehension techniques may help facilitate better reading comprehension. It is suggested that students receive brief, explicit instruction in reading comprehension strategies in the areas of vocabulary, monitoring understanding, and connecting ideas.[405]
Scarborough's Reading Rope and the active view of reading model also outline some of the essential ingredients of reading comprehension.
In some countries, a radio reading service provides a service for blind people and others who choose to hear newspapers, books, and other printed material read aloud, typically by volunteers. An example is Australia's Radio Print Handicapped Network, with stations in capital cities and some other areas.
The following organizations measure and report on reading achievement in the United States and internationally:
In the United States, the National Assessment of Educational Progress or NAEP ("The Nation's Report Card") is the national assessment of what students know and can do in various subjects. Four of these subjects – reading, writing, mathematics, and science – are assessed most frequently and reported at the state and district level, usually for grades 4 and 8.[406]
In 2019, with respect to the reading skills of the nation's grade-four public school students, 35% performed at or above the NAEP Proficient level (solid academic performance), and 65% performed at or above the NAEP Basic level (partial mastery of the proficient-level skills). Students who read below the Basic level are considered to lack sufficient skills to complete their schoolwork.[407]
Reading scores for the individual states and districts are available on the NAEP site. Between 2017 and 2019, Mississippi was the only state that had a grade-four reading score increase, and 17 states had a score decrease.[408]
The COVID-19 pandemic had a significant impact on reading results in the United States. In 2022 the average basic-level reading score among elementary schoolchildren was 3 points lower than in 2019 (the previous assessment year) and roughly equivalent to the first reading assessment in 1992. Students of all ethnic groups other than Asians saw their scores decline; however, "black, Hispanic, and American Indian/Alaska Native (AIAN) students and students in high-poverty schools were disproportionately impacted" (a finding substantiated by other sources).[409][85] In 2022, no state had a reading score increase and 30 states had a score decrease.[410]
NAEP reading assessment results are reported as average scores on a 0–500 scale.[411] The Basic level is 208 and the Proficient level is 238.[412] The average reading score for grade-four public school students was 219.[413] Female students had an average score that was 7 points higher than male students. Students who were eligible for the National School Lunch Program (NSLP) had an average score that was 28 points lower than that for students who were not eligible.
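To make the scale concrete, here is a minimal, hypothetical helper (the function name and level labels are illustrative; the Advanced cut score is omitted because it is not stated here) that maps a grade-four reading score to an achievement level using the cut scores just given:

```python
# Map a grade-four NAEP reading score (0-500 scale) to an achievement level,
# using the cut scores stated above: Basic = 208, Proficient = 238.
# The Advanced cut score is not given here, so it is omitted.

def naep_reading_level(score: int) -> str:
    if score >= 238:
        return "Proficient or above"
    if score >= 208:
        return "Basic"
    return "Below Basic"

print(naep_reading_level(219))  # the average score above -> "Basic"
```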
The Programme for the International Assessment of Adult Competencies (PIAAC) is an international study by the Organisation for Economic Co-operation and Development (OECD) of cognitive and workplace skills in 39 countries between 2011 and 2018.[75] The survey measures adults' proficiency in key information-processing skills – literacy, numeracy, and problem-solving – focusing on the working-age population between the ages of 16 and 65. For example, the study ranks 38 countries on literacy proficiency among adults. According to the 2019 OECD report, the five countries with the highest ranking are Japan, Finland, the Netherlands, Sweden, and Australia, whereas Canada is 12th, England (UK) is 16th, and the United States is 19th.[157] PIAAC table A2.1 (2013) shows the percentage of adults reading at or below level one (out of five levels); some examples are Japan 4.9%, Finland 10.6%, Netherlands 11.7%, Australia 12.6%, Sweden 13.3%, Canada 16.4%, England 16.4%, and the United States 16.9%.[76]
The Progress in International Reading Literacy Study (PIRLS) is an international study of reading (comprehension) achievement in fourth graders.[414] It is designed to measure children's reading literacy achievement, to provide a baseline for future studies of trends in achievement, and to gather information about children's home and school experiences in learning to read. The 2021 PIRLS report shows the fourth-grade reading achievement by country in two categories (literary and informational). The ten countries with the highest overall reading average (with scores) are Singapore (587), Ireland (577), Hong Kong SAR (573), the Russian Federation (567), Northern Ireland (566), England (UK) (558), Croatia (557), Lithuania (552), Finland (549), and Poland (549). Some others are the United States (548) in 11th and Australia (548) in 13th. Among the benchmarking participants are the four Canadian provinces of Alberta (539), British Columbia (535), Newfoundland and Labrador (523), and Quebec (551).[415]
The Programme for International Student Assessment (PISA) measures 15-year-old school pupils' scholastic performance in mathematics, science, and reading.[71] In 2018, of the 79 participating countries/economies, students in Beijing, Shanghai, Jiangsu and Zhejiang (China) and in Singapore on average outperformed students from all other countries in reading, mathematics, and science. Twenty-one countries had reading scores above the OECD average, and many of those scores were not statistically different from one another.[416][417]
Critics, however, say PISA is fundamentally flawed in its underlying view of education, its implementation, and its interpretation and impact on education globally.[72] In 2014, more than 100 academics from around the world called for a moratorium on PISA.[73][74] According to a 2023 book, PISA is failing in its mission. It suggests that flatlined student outcomes and policy shortcomings have much to do with PISA's implicit ideological biases, structural impediments such as union advocacy, and conflicts of interest.[418]
The Education Quality and Accountability Office (EQAO) is an agency of the government of Ontario, Canada that reports on the publicly funded school system.[419] It reported that 77% of grade three students in Ontario's English-language schools met the provincial standard in reading in 2018–2019. This decreased to 73% in 2021–2022 and 2022–2023.[420]
53% of grade three students with special needs met the standard in 2018–2019, and this reduced to 48% in 2021–2022. 72% of grade three students who are English language learners met the standard in 2018–2019, and this reduced to 67% in 2021–2022.[421]
The history of reading dates back to the invention of writing during the 4th millennium BC. Although reading print text is now an important way for the general population to access information, this has not always been the case. With some exceptions, only a small percentage of the population in many countries was considered literate before the Industrial Revolution. Some of the pre-modern societies with generally high literacy rates included classical Athens and the Islamic caliphate.[422]
Scholars assume that reading aloud (Latin clare legere) was the more common practice in antiquity, and that reading silently (legere tacite or legere sibi) was unusual.[423] In his Confessions (c. 400), Saint Augustine remarks on Saint Ambrose's unusual habit of reading in silence.[423][424]
Michel de Certeau argued that while the Age of Enlightenment initially promoted the virtue of reading, writing was still considered a superior activity, due to a belief among social elites that writing was constructive and a sign of social initiative, while reading was straightforward consumption of what had already been made; as such, readers were passive citizens.[425]
Before the mid-18th century, children's books in England usually focused on instruction or religious themes. Over time, a greater number of books were written with the intent of delighting children; for example, children's novels became increasingly popular over the 18th century. By 1800, the area of children's literature was flourishing, with perhaps as many as 50 books being printed every year in major cities.[426]
In 18th-century Europe, some considered the then-new practice of reading alone in bed to be dangerous and immoral. As reading became a less communal, largely silent activity, some raised concerns that reading in bed presented various dangers, such as fires caused by the bedside candles of people reading before sleep. Some modern critics speculate that these concerns were rooted partially in a fear that readers – especially women readers – would shirk their obligations to family and community, and even transgress moral boundaries through the private fantasy afforded by books.[427] Also during the 18th century in England, reading novels was often criticized as a time-wasting pastime, in contrast with the cultural seriousness carried by reading history, classical literature, or poetry.[428]
Chapbooks were small, cheap forms of literature for children and adults that were sold on the streets, and covered a range of subjects such as ghost stories, crime, fantasy, politics, and disaster updates. They provided simple reading matter and were commonplace across England from the 17th to the 19th century. They are known to have been passed down through the generations. Their readership would have been largely among the poor, and among children of the middle class.[429]
Reading became even more pronounced in the 19th century, with public notes, broadsides, catchpennies, and printed songs becoming common street literature; these informed and entertained the public before newspapers became readily available. Advertisements and local news, such as offers of rewards for catching criminals or for the return of stolen goods, appeared on public notices and handbills, while cheaply printed sheets – broadsheets and ballads – covered political or criminal news such as murders, trials, executions, disasters, and rescues.[430]
Technological improvements in printing and paper production during the Industrial Revolution, along with new distribution networks enabled by improved roads and rail, helped fuel increased demand for printed (reading) matter. In addition, social and educational changes (such as wider schooling) and rising literacy rates, particularly among the middle and working classes, helped create a new mass market for printed material.[431] The arrival of gas and electric lighting in private homes meant that reading after dark no longer had to take place by oil lamp or candlelight.[428]
In 19th-century Russia, reading practices were highly varied, as people from a wide range of social statuses read Russian and foreign-language texts ranging from high literature to the peasant lubok.[432] Provincial readers such as Andrei Chikhachev give evidence of an omnivorous appetite for fiction and non-fiction alike among middling landowners.[433]
In the 20th and 21st centuries, audiobooks have become an increasingly popular way to read. Although some contest the validity of audiobook use as actual reading, since it generally involves not direct contact with the written word but a mediating narrator, audiobooks are seen by many as a continuation of oral tradition as well as an accessibility measure for the visually impaired.[434] The popularity of audiobooks in the 21st-century US is possibly due to technological advances that make such materials widely available for download, often through public libraries.
The history of learning to read dates back to the invention of writing during the 4th millennium BC.[435]
Concerning the English language in the United States, the phonics principle of teaching reading was first presented by John Hart in 1570, who suggested that the teaching of reading should focus on the relationship between what are now referred to as graphemes (letters) and phonemes (sounds).[436]
In colonial times in the United States, reading material was not written specifically for children, so instructional material consisted primarily of the Bible and some patriotic essays. The most influential early textbook was The New England Primer, published in 1687. Little consideration was given to the best ways to teach reading or to assess reading comprehension.[437][438]
Phonics was a popular way to learn reading in the 1800s. William Holmes McGuffey (1800–1873), an American educator, author, and Presbyterian minister who had a lifelong interest in teaching children, compiled the first four of the McGuffey Readers in 1836.[364]
The whole-word method was introduced into the English-speaking world by Thomas Hopkins Gallaudet, the director of the American School for the Deaf.[439] It was designed to educate deaf people by placing a word alongside a picture.[440] In 1830, Gallaudet described his method of teaching children to recognize a total of 50 sight words written on cards.[441][442] Horace Mann, the Secretary of the Board of Education of Massachusetts, U.S., favored the method for everyone, and by 1837 the method was adopted by the Boston Primary School Committee.[443]
By 1844 the defects of the whole-word method became so apparent to Boston schoolmasters that they urged the Board to return to phonics.[444] In 1929, Samuel Orton, a neuropathologist in Iowa, concluded that the cause of children's reading problems was the new sight method of reading. His findings were published in the February 1929 issue of the Journal of Educational Psychology in the article "The Sight Reading Method of Teaching Reading as a Source of Reading Disability".[445]
The meaning-based curriculum came to dominate reading instruction by the second quarter of the 20th century. In the 1930s and 1940s, reading programs became very focused on comprehension and taught children to read whole words by sight. Phonics was taught as a last resort.[437]
Edward William Dolch developed his list of sight words in 1936 by studying the most frequently occurring words in children's books of that era. Children are encouraged to memorize the words with the idea that it will help them read more fluently. Many teachers continue to use this list, although some researchers consider the theory of sight-word reading to be a "myth". Researchers and literacy organizations suggest it would be more effective if students learned the words using a phonics approach.[311][446][447]
In 1955, Rudolf Flesch published a book entitled Why Johnny Can't Read, a passionate argument in favor of teaching children to read using phonics, adding to the reading debate among educators, researchers, and parents.[448]
Government-funded research on reading instruction in the United States and elsewhere began in the 1960s. In the 1970s and 1980s, researchers began publishing studies with evidence on the effectiveness of different instructional approaches. During this time, researchers at the National Institutes of Health (NIH) conducted studies that showed early reading acquisition depends on the understanding of the connection between sounds and letters (i.e. phonics). However, this appears to have had little effect on educational practices in public schools.[449][450]
In the 1970s, the whole language method was introduced. This method de-emphasizes the teaching of phonics out of context (e.g., outside of reading books), and is intended to help readers "guess" the right word.[451] It teaches that guessing individual words should involve three systems (letter clues, meaning clues from context, and the syntactic structure of the sentence). It became the primary method of reading instruction in the 1980s and 1990s but has since fallen out of favor. The neuroscientist Mark Seidenberg refers to it as a "theoretical zombie" because it persists despite a lack of supporting evidence.[370][319] It is still widely practiced in related methods such as sight words, the three-cueing system, and balanced literacy.[452][449][453]
In the 1980s the three-cueing system (the searchlights model in England) emerged. According to a 2010 survey, 75% of teachers in the United States teach the three-cueing system.[323] It teaches children to guess a word by using "meaning cues" (semantic, syntactic, and graphophonic). While the system does help students to "make better guesses", it does not help when words become more sophisticated, and it reduces the amount of practice time available to learn essential decoding skills. Consequently, present-day researchers such as cognitive neuroscientist Mark Seidenberg and professor Timothy Shanahan do not support the theory.[341][344][345] In England, synthetic phonics is intended to replace "the searchlights multi-cueing model".[346][347]
In the 1990s, balanced literacy arose. It is a theory of teaching reading and writing that is not clearly defined. It may include elements such as word study and phonics mini-lessons, differentiated learning, cueing, leveled reading, shared reading, guided reading, independent reading, and sight words.[372][373][374][375] For some, balanced literacy strikes a balance between whole language and phonics. Others say balanced literacy in practice usually means the whole language approach to reading.[454] According to a survey in 2010, 68% of K–2 teachers in the United States practice balanced literacy; however, only 52% of teachers included phonics in their definition of balanced literacy.[323]
In 1996 the California Department of Education took an increased interest in using phonics in schools.[455] In 1997 the department called for grade one teaching in concepts about print, phonemic awareness, decoding and word recognition, and vocabulary and concept development.[456]
By 1998 in the U.K., whole language instruction and the searchlights model were still the norm; however, there was some attention to teaching phonics in the early grades, as seen in the National Literacy Strategy.[457][458]
Beginning in 2000, several reading research reports were published.
For more on this, see the main article History of learning to read.
For more information on reading educational developments, see Phonics practices by country or region.
https://en.wikipedia.org/wiki/Reading
Reading comprehension is the ability to process written text, understand its meaning, and integrate it with what the reader already knows.[1][2][3][4] Reading comprehension relies on two abilities that are connected to each other: word reading and language comprehension.[5] Comprehension specifically is a "creative, multifaceted process" that is dependent upon four language skills: phonology, syntax, semantics, and pragmatics.[6] Reading comprehension is a part of literacy.
Efficient reading comprehension requires a number of fundamental skills.[7][8][9]
Some comprehension skills can be taught as well as applied to all reading situations.[10]
There are many reading strategies for improving reading comprehension and inference-making; these include improving one's vocabulary, critical text analysis (intertextuality, actual events vs. narration of events, etc.), and practising deep reading.[11] The ability to comprehend text is influenced by readers' skills and their ability to process information. If word recognition is difficult, students use too much of their processing capacity to read individual words, which interferes with their ability to comprehend what is read.
Some people learn comprehension skills through education or instruction and others learn through direct experiences.[12] Proficient reading depends on the ability to recognize words quickly and effortlessly.[13] It is also determined by an individual's cognitive development, which is "the construction of thought processes".
There are specific characteristics that determine how successfully an individual will comprehend text, including prior knowledge about the subject, well-developed language, and the ability to make inferences through methodical questioning and comprehension monitoring; questions such as "Why is this important?" and "Do I need to read the entire text?" are examples of passage questioning.[14]
Comprehension strategy instruction often begins by aiding the students through social and imitation learning, wherein teachers explain genre styles and model both top-down and bottom-up strategies, and familiarize students with the required complexity of text comprehension.[15] The second stage involves the gradual release of responsibility, wherein over time teachers give students individual responsibility for using the learned strategies independently, with remedial instruction as required; this helps with error management.
The final stage involves leading the students to a self-regulated learning state; with more and more practice and assessment, the learned skills become reflexive, or "second nature", through overlearning.[16] The teacher as reading instructor is a role model of a reader for students, demonstrating what it means to be an effective reader and the rewards of being one.[17]
Reading comprehension involves two levels of processing: shallow (low-level) processing and deep (high-level) processing.
Deep processing involves semantic processing, which happens when we encode the meaning of a word and relate it to similar words. Shallow processing involves structural and phonemic recognition, the processing of sentence and word structure, i.e. first-order logic, and their associated sounds. This theory was first identified by Fergus I. M. Craik and Robert S. Lockhart.[18]
Comprehension levels are observed through neuroimaging techniques like functional magnetic resonance imaging (fMRI). fMRI has been used to determine the specific neural pathways activated across two conditions: narrative-level comprehension and sentence-level comprehension. Images showed less brain-region activation during sentence-level comprehension, suggesting a shared reliance on comprehension pathways. The scans also showed enhanced temporal activation during narrative-level tests, indicating that this approach activates situation and spatial processing.[19]
In general, neuroimaging studies have found that reading involves three overlapping neural systems: networks active in visual, orthography–phonology (angular gyrus), and semantic functions (anterior temporal lobe with Broca's and Wernicke's areas). However, these neural networks are not discrete, meaning these areas have several other functions as well. Broca's area, involved in executive functions, helps the reader to vary the depth of reading comprehension and textual engagement in accordance with reading goals.[20][21]
Reading comprehension and vocabulary are inextricably linked. The ability to decode or identify and pronounce words is self-evidently important, but knowing what the words mean has a major and direct effect on knowing what any specific passage means while skimming a text. It has been shown that students with smaller vocabularies than other students comprehend less of what they read.[22] It has also been suggested that, to improve comprehension, it is good practice to work on word groups and complex vocabulary, such as homonyms or words with multiple meanings, and words with figurative meanings like idioms, similes, collocations, and metaphors.[23]
Andrew Biemiller argues that teachers should introduce topic-related words and phrases before reading a book to students; such teaching includes topic-related word groups, synonyms of words, and their meanings in context. He further says teachers should familiarize students with sentence structures in which these words commonly occur.[24] According to Biemiller, this intensive approach gives students opportunities to explore the topic beyond its discourse – freedom of conceptual expansion. However, there is no evidence to suggest the primacy of this approach.[25] Incidental morphemic analysis of words – prefixes, suffixes, and roots – has also been considered to improve understanding of vocabulary, though it has proved an unreliable strategy for improving comprehension and is no longer used to teach students.[26]
Vocabulary is important because it is what connects a reader to the text, while helping the reader develop background knowledge and their own ideas, communicate, and learn new concepts. Vocabulary has been described as "the glue that holds stories, ideas, and content together...making comprehension accessible",[27] a description that reflects the important role vocabulary plays. Especially when studying various pieces of literature, it is important to have this background vocabulary; otherwise, readers quickly become lost. Because of this, teachers devote a great deal of attention to vocabulary programs and implement them into their weekly lesson plans.
Initially, comprehension teaching assumed that imparting a selection of techniques for each genre would, taken together, produce strategic readers. However, from the 1930s onward, testing of various methods never seemed to win support in empirical research. One such strategy for improving reading comprehension is the technique called SQ3R, introduced by Francis Pleasant Robinson in his 1946 book Effective Study.[28]
Between 1969 and 2000, a number of "strategies" were devised for teaching students to employ self-guided methods for improving reading comprehension. In 1969, Anthony V. Manzo designed, and found empirical support for, ReQuest, or the Reciprocal Questioning Procedure, which departed from the traditional teacher-centered approach by sharing "cognitive secrets" with students. It was the first method to convert a fundamental theory such as social learning into teaching methods through the use of cognitive modeling between teachers and students.[29]
Since the turn of the 20th century, comprehension lessons have usually consisted of students answering teachers' questions or writing responses to questions of their own or to prompts from the teacher.[30] This detached, whole-group approach only helped students individually respond to portions of the text (content-area reading) and improve their writing skills.[citation needed] In the last quarter of the 20th century, evidence accumulated that academic reading-test methods were more successful at assessing comprehension than at imparting it or giving realistic insight. Instead of the prior response-registering method, research studies concluded that an effective way to teach comprehension is to teach novice readers a bank of "practical reading strategies", or tools to interpret and analyze various categories and styles of text.[31]
Common Core State Standards (CCSS) have been implemented in the hope that students' test scores would improve. Some of the goals of CCSS relate directly to students' reading comprehension skills: they concern students learning to notice key ideas and details, consider the structure of the text, look at how ideas are integrated, and read texts of varying difficulty and complexity.[9]
There are a variety of strategies used to teach reading, and strategies are key to helping with reading comprehension. They vary according to the challenges involved, such as new concepts, unfamiliar vocabulary, and long, complex sentences. Trying to deal with all of these challenges at the same time may be unrealistic; strategies should also fit the ability, aptitude, and age level of the learner. Some of the strategies teachers use are reading aloud, group work, and additional reading exercises.[citation needed]
In the 1980s, Annemarie Sullivan Palincsar and Ann L. Brown developed a technique called reciprocal teaching that taught students to predict, summarize, clarify, and ask questions for sections of a text. The use of strategies like summarizing after each paragraph has come to be seen as effective for building students' comprehension. The idea is that students will develop stronger reading comprehension skills on their own if the teacher gives them explicit mental tools for unpacking text.[31]
"Instructional conversations", or comprehension through discussion, create higher-level thinking opportunities for students by promotingcriticalandaesthetic thinkingabout the text. According toVivian Thayer, class discussions help students to generate ideas and new questions. (Goldenberg, p. 317).
Neil Postman said, "All our knowledge results from questions, which is another way of saying that question-asking is our most important intellectual tool"[32] (Response to Intervention). There are several types of questions that a teacher should focus on: remembering, testing understanding, application or solving, inviting synthesis or creating, and evaluating and judging. Teachers should model these types of questions through "think-alouds" before, during, and after reading a text. When a student can relate a passage to an experience, another book, or other facts about the world, they are "making a connection". Making connections helps students understand the author's purpose and the fiction or non-fiction story.[33]
There are factors that, once discerned, make it easier for the reader to understand the written text. One such factor is the genre, like folktales, historical fiction, biographies, or poetry. Each genre has its own characteristics of text structure that, once understood, help the reader comprehend it. A story is composed of a plot, characters, setting, point of view, and theme. Informational books provide real-world knowledge for students and have unique features such as headings, maps, vocabulary, and an index. Poems are written in different forms; the most commonly used are rhymed verse, haikus, free verse, and narratives. Poetry uses devices such as alliteration, repetition, rhyme, metaphors, and similes. "When children are familiar with genres, organizational patterns, and text features in books they're reading, they're better able to create those text factors in their own writing." Another factor is arranging the text per perceptual span and in a text display favorable to the age level of the reader.[34]
Non-verbal imagery refers to media that utilize schemata to make planned or unplanned connections, commonly used within a context such as a passage, an experience, or one's imagination. Some notable examples are emoticons, cropped and uncropped images, and, more recently, emojis – images used to elicit humor and aid comprehension.[35]
Visualization is a "mental image" created in a person's mind while reading text. This "brings words to life" and helps improve reading comprehension. Asking sensory questions will help students become better visualizers.[33]
Students can practice visualizing before seeing the picture of what they are reading by imagining what they "see, hear, smell, taste, or feel" when they are reading a page of a picture book aloud. They can share their visualizations, then check their level of detail against the illustrations.
Partner reading is a strategy created for reading pairs. The teacher chooses two appropriate books for the students to read. First, the pupils and their partners must read their own book. Once they have completed this, they are given the opportunity to write down their own comprehension questions for their partner. The students swap books, read them out loud to one another and ask one another questions about the book they have read.
There are different levels of this strategy: students at a very good level, for example, may be a few years ahead of the other students.[36]
There is a wide range of reading strategies suggested by reading programs and educators. Effective reading strategies may differ for second-language learners, as opposed to native speakers.[39][40][41] The National Reading Panel identified positive effects only for a subset, particularly summarizing, asking questions, answering questions, comprehension monitoring, graphic organizers, and cooperative learning. The Panel also emphasized that a combination of strategies, as used in reciprocal teaching, can be effective.[33] The use of effective comprehension strategies that provide specific instructions for developing and retaining comprehension skills, with intermittent feedback, has been found to improve reading comprehension across all ages, specifically those affected by mental disabilities.[42]
Reading different types of texts requires the use of different reading strategies and approaches. Making reading an active, observable process can be very beneficial to struggling readers. A good reader interacts with the text in order to develop an understanding of the information before them. Some good reader strategies are predicting, connecting, inferring, summarizing, analyzing and critiquing. There are many resources and activities educators and instructors of reading can use to help with reading strategies in specific content areas and disciplines. Some examples are graphic organizers, talking to the text, anticipation guides, double entry journals, interactive reading and note taking guides, chunking, and summarizing.[citation needed][7Habits 1]
The use of effective comprehension strategies is highly important when learning to improve reading comprehension. These strategies provide specific instructions for developing and retaining comprehension skills across all ages.[42] Applying methods to attain an overt phonemic awareness, with intermittent practice, has been found to improve reading at early ages, specifically for those affected by mental disabilities.
Researchers have consistently found that it is important for readers, and specifically students, to be interested in what they are reading. Students report that they are more likely to finish books they have chosen themselves.[43] They are also more likely to remember what they read when interested, as interest leads them to pay attention to the minute details.
There are various reading strategies that help readers recognize what they are learning, which allows them to further understand themselves as readers and to recognize what information they have comprehended. These strategies also activate the techniques that good readers use when reading and understanding a text.[9]
When reading a passage, it is good to vocalize what one is reading, as well as the mental processes occurring while reading. This can take many different forms, including asking oneself questions about the reading or the text, making connections with prior knowledge or previously read texts, noticing when one struggles, and rereading where needed.[9] These tasks help readers think about their reading and whether they have understood it fully, which helps them notice what changes or tactics might need to be considered.
Know, Want to know, and Learned (KWL) is often used by teachers and their students, but it is a useful tactic for all readers when considering their own knowledge. The reader first reviews the knowledge they already have, then thinks about what they want to know or the knowledge they want to gain, and finally, after reading, thinks about what they have learned. This allows readers to reflect on their prior knowledge and to recognize what knowledge they have gained and comprehended from their reading.[9]
Research studies on reading and comprehension have shown that highly proficient, effective readers utilize a number of different strategies to comprehend various types of texts, strategies that can also be used by less proficient readers in order to improve their comprehension.
There are informal and formal assessments to monitor an individual's comprehension ability and use of comprehension strategies.[45] Informal assessments are generally conducted through observation and the use of tools like story boards, word sorts, and interactive writing. Many teachers use formative assessments to determine whether a student has mastered the content of the lesson. Formative assessments can be verbal, as in a "Think-Pair-Share" or "Partner Share", or take forms such as a "ticket out the door" or digital summarizers. Formal assessments are district or state assessments that evaluate all students on important skills and concepts. Summative assessments are typically given at the end of a unit to measure a student's learning.
A popular assessment undertaken in numerous primary schools around the world is running records, a helpful tool with regard to reading comprehension.[47] The tool assists teachers in analyzing specific patterns in student behaviors and in planning appropriate instruction. By conducting running records, teachers gain an overview of students' reading abilities and learning over a period of time.
To conduct a running record properly, the teacher sits beside a student and makes the environment as relaxed as possible so the student does not feel pressured or intimidated. It is best if the running record assessment is conducted during reading, to avoid distractions. Another alternative is to have an education assistant conduct the running record in a separate room while the teacher supervises the class. The teacher quietly observes the student's reading and records it during this time, using a specific recording code that most teachers understand. Once the student has finished reading, the teacher asks them to retell the story as best as they can, and then asks comprehension questions to test their understanding of the book. At the end of the assessment, the teacher adds up the running record score, files the assessment sheet away, and plans strategies to improve the student's ability to read and understand the text.
An overview of the steps taken when conducting a running record assessment is given in the cited source.[48]
Some texts, such as those in philosophy, literature, or scientific research, may appear more difficult to read because of the prior knowledge they assume, the tradition from which they come, or their tone, such as criticizing or parodying.[citation needed] The philosopher Jacques Derrida explained his opinion about complicated text: "In order to unfold what is implicit in so many discourses, one would have each time to make a pedagogical outlay that is just not reasonable to expect from every book. Here the responsibility has to be shared out, mediated; the reading has to do its work and the work has to make its reader."[49] Other philosophers, however, believe that if one has something to say, one should be able to make the message readable to a wide audience.[50]
Embedded hyperlinks in documents or Internet pages have been found to make different demands on the reader than traditional text. Authors such as Nicholas Carr and psychologists such as Maryanne Wolf contend that the internet may have a negative impact on attention and reading comprehension.[51] Some studies report increased demands of reading hyperlinked text in terms of cognitive load, or the amount of information actively maintained in one's mind (also see working memory).[52] One study showed that going from about 5 hyperlinks per page to about 11 per page reduced college students' understanding (assessed by multiple-choice tests) of articles about alternative energy.[53] This can be attributed to the decision-making process (deciding whether to click on it) required by each hyperlink,[52] which may reduce comprehension of surrounding text.
On the other hand, other studies have shown that if a short summary of the link's content is provided when the mouse pointer hovers over it, then comprehension of the text is improved.[54] "Navigation hints" about which links are most relevant also improved comprehension.[55] Finally, the background knowledge of the reader can partially determine the effect hyperlinks have on comprehension. In a study of reading comprehension with subjects who were familiar or unfamiliar with art history, texts which were hyperlinked to one another hierarchically were easier for novices to understand than texts which were hyperlinked semantically. In contrast, those already familiar with the topic understood the content equally well with both types of organization.[52]
In interpreting these results, it may be useful to note that the studies mentioned were all performed in closed content environments, not on the internet; that is, the texts used only linked to a predetermined set of other texts that was offline. Furthermore, the participants were explicitly instructed to read on a certain topic in a limited amount of time. Reading text on the internet may not have these constraints.[citation needed]
The National Reading Panel noted that comprehension strategy instruction is difficult for many teachers as well as for students, particularly because they were not taught this way and because it is a demanding task. They suggested that professional development can increase teachers' and students' willingness to use reading strategies, but admitted that much remains to be done in this area.[citation needed]
The directed listening and thinking activity is a technique available to teachers to aid students in reading comprehension, though it can be difficult for students who are new to it. There is some debate about the relationship between reading fluency and reading comprehension; there is evidence of a direct correlation, with fluency and comprehension together leading to better understanding of written material, across all ages.[56] The National Assessment of Educational Progress assessed U.S. student reading performance at grade 12, from both the public and private school populations, and found that only 37 percent of students had proficient skills; 72 percent of students were at or above the basic skill level, and 28 percent were below the basic level.[57]
https://en.wikipedia.org/wiki/Reading_comprehension
Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics, and to cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.
The process of perceiving speech begins at the level of the sound signal and the process of audition. (For a complete description of the process of audition see Hearing.) After processing the initial auditory signal, speech sounds are further processed to extract acoustic cues and phonetic information. This speech information can then be used for higher-level language processes, such as word recognition.
Acoustic cues are sensory cues contained in the speech sound signal which are used in speech perception to differentiate speech sounds belonging to different phonetic categories. For example, one of the most studied cues in speech is voice onset time, or VOT. VOT is a primary cue signaling the difference between voiced and voiceless plosives, such as "b" and "p". Other cues differentiate sounds that are produced at different places of articulation or manners of articulation. The speech system must also combine these cues to determine the category of a specific speech sound. This is often thought of in terms of abstract representations of phonemes. These representations can then be combined for use in word recognition and other language processes.
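As an illustration of how a single acoustic cue could drive categorization, here is a minimal, hypothetical sketch; the function name and the 25 ms boundary are illustrative assumptions for the example, not values asserted by the sources cited here:

```python
# Toy categorization of a bilabial plosive as voiced /b/ or voiceless /p/
# from VOT alone. Real perception combines many cues; the boundary value
# below is an illustrative assumption.

def classify_plosive(vot_ms: float, boundary_ms: float = 25.0) -> str:
    """Label a plosive by its voice onset time (in milliseconds)."""
    return "/b/" if vot_ms < boundary_ms else "/p/"

for vot in (-60, 0, 15, 40, 80):
    print(f"VOT {vot:+4d} ms -> {classify_plosive(vot)}")
```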
It is not easy to identify what acoustic cues listeners are sensitive to when perceiving a particular speech sound:
At first glance, the solution to the problem of how we perceive speech seems deceptively simple. If one could identify stretches of the acoustic waveform that correspond to units of perception, then the path from sound to meaning would be clear. However, this correspondence or mapping has proven extremely difficult to find, even after some forty-five years of research on the problem.[1]
If a specific aspect of the acoustic waveform indicated one linguistic unit, a series of tests using speech synthesizers would be sufficient to determine such a cue or cues. However, there are two significant obstacles, described below: the lack of linearity in the speech signal and the lack of invariance.
Although listeners perceive speech as a stream of discrete units[citation needed] (phonemes, syllables, and words), this linearity is difficult to see in the physical speech signal (see Figure 2 for an example). Speech sounds do not strictly follow one another; rather, they overlap.[5] A speech sound is influenced by the ones that precede and follow it. This influence can even be exerted at a distance of two or more segments (and across syllable and word boundaries).[5]
Because the speech signal is not linear, there is a problem of segmentation. It is difficult to delimit a stretch of speech signal as belonging to a single perceptual unit. As an example, the acoustic properties of the phoneme /d/ will depend on the production of the following vowel (because of coarticulation).
The research and application of speech perception must deal with several problems which result from what has been termed the lack of invariance. Reliable constant relations between a phoneme of a language and its acoustic manifestation in speech are difficult to find. There are several reasons for this:
Phonetic environment affects the acoustic properties of speech sounds. For example, /u/ in English is fronted when surrounded by coronal consonants.[6] Likewise, the voice onset time marking the boundary between voiced and voiceless plosives differs for labial, alveolar, and velar plosives, and it shifts under stress or depending on the position within a syllable.[7]
One important factor that causes variation is differing speech rate. Many phonemic contrasts are constituted by temporal characteristics (short vs. long vowels or consonants, affricates vs. fricatives, plosives vs. glides, voiced vs. voiceless plosives, etc.), and they are certainly affected by changes in speaking tempo.[1] Another major source of variation is articulatory carefulness vs. sloppiness, which is typical of connected speech (articulatory "undershoot" is obviously reflected in the acoustic properties of the sounds produced).
The resulting acoustic structure of concrete speech productions depends on the physical and psychological properties of individual speakers. Men, women, and children generally produce voices having different pitch. Because speakers have vocal tracts of different sizes (due especially to sex and age), the resonant frequencies (formants), which are important for the recognition of speech sounds, vary in their absolute values across individuals[8] (see Figure 3 for an illustration of this). Research shows that infants at the age of 7.5 months cannot recognize information presented by speakers of different genders; however, by the age of 10.5 months, they can detect the similarities.[9] Dialect and foreign accent can also cause variation, as can the social characteristics of the speaker and listener.[10]
Despite the great variety of different speakers and different conditions, listeners perceive vowels and consonants as constant categories. It has been proposed that this is achieved by means of a perceptual normalization process in which listeners filter out the noise (i.e. variation) to arrive at the underlying category. Vocal-tract-size differences result in formant-frequency variation across speakers; therefore a listener has to adjust his/her perceptual system to the acoustic characteristics of a particular speaker. This may be accomplished by considering the ratios of formants rather than their absolute values.[11][12][13] This process has been called vocal tract normalization (see Figure 3 for an example). Similarly, listeners are believed to adjust the perception of duration to the current tempo of the speech they are listening to – this has been referred to as speech rate normalization.
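As a minimal sketch of the formant-ratio idea mentioned above (the speaker labels and frequency values below are illustrative assumptions chosen for this example, not measurements from the cited studies):

```python
# Absolute formant frequencies (Hz) for the "same" vowel differ widely
# across speakers, but the F2/F1 ratio is far more stable, which is the
# intuition behind ratio-based vocal tract normalization.

vowel_i = {  # illustrative (F1, F2) pairs for the vowel /i/
    "adult_male": (270, 2290),
    "child": (370, 3200),
}

for speaker, (f1, f2) in vowel_i.items():
    print(f"{speaker:10s}: F1={f1:4d} Hz, F2={f2:4d} Hz, F2/F1={f2 / f1:.2f}")

# The raw F2 values differ by about 900 Hz, yet the ratios
# (roughly 8.5 vs. 8.6) nearly match across the two speakers.
```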
Whether or not normalization actually takes place, and what its exact nature is, is a matter of theoretical controversy (see theories below). Perceptual constancy is not a phenomenon specific to speech perception; it exists in other types of perception too.
Categorical perception is involved in processes of perceptual differentiation. People perceive speech sounds categorically; that is to say, they are more likely to notice the differences between categories (phonemes) than within categories. The perceptual space between categories is therefore warped, the centers of categories (or "prototypes") working like a sieve[14] or like magnets[15] for incoming speech sounds.
In an artificial continuum between a voiceless and a voiced bilabial plosive, each new step differs from the preceding one in the amount of VOT. The first sound is a pre-voiced [b], i.e. it has a negative VOT. Then, increasing the VOT, it reaches zero, i.e. the plosive is a plain unaspirated voiceless [p]. Gradually, adding the same amount of VOT at a time, the plosive eventually becomes a strongly aspirated voiceless bilabial [pʰ]. (Such a continuum was used in an experiment by Lisker and Abramson in 1970.[16] The sounds they used are available online.) In this continuum of, for example, seven sounds, native English listeners will identify the first three sounds as /b/ and the last three sounds as /p/, with a clear boundary between the two categories.[16] A two-alternative identification (or categorization) test will yield a discontinuous categorization function (see the red curve in Figure 4).
In tests of the ability to discriminate between two sounds with varying VOT values but having a constant VOT distance from each other (20 ms for instance), listeners are likely to perform at chance level if both sounds fall within the same category and at nearly 100% level if each sound falls in a different category (see the blue discrimination curve in Figure 4).
The conclusion to draw from both the identification and the discrimination tests is that listeners have different sensitivity to the same relative increase in VOT depending on whether or not the boundary between categories is crossed. Similar perceptual adjustment is attested for other acoustic cues as well.
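A small simulation can make the shapes of these two curves concrete. The sketch below is a hypothetical model rather than the Lisker and Abramson data: identification is modeled as a steep logistic around an assumed 25 ms boundary, and discrimination of pairs 20 ms apart is predicted from the identification probabilities using a common Haskins-style formula:

```python
import math

# Hypothetical categorical-perception model over a VOT continuum.
# The boundary (25 ms) and slope values are illustrative assumptions.

def p_voiceless(vot_ms: float, boundary: float = 25.0, slope: float = 0.4) -> float:
    """Probability of labeling a stimulus as voiceless /p/."""
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary)))

def p_discriminate(vot_a: float, vot_b: float) -> float:
    """Predicted two-alternative discrimination accuracy from labeling:
    chance (0.5) plus half the squared difference in identification."""
    pa, pb = p_voiceless(vot_a), p_voiceless(vot_b)
    return 0.5 + 0.5 * (pa - pb) ** 2

for vot in range(-10, 70, 20):  # pairs a constant 20 ms apart
    print(f"{vot:3d} ms: P(/p/)={p_voiceless(vot):.2f}, "
          f"discrimination vs. {vot + 20} ms: {p_discriminate(vot, vot + 20):.2f}")
# Predicted accuracy stays near 0.5 within a category
# and peaks for the pair that straddles the boundary.
```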
In a classic experiment, Richard M. Warren (1970) replaced one phoneme of a word with a cough-like sound. Perceptually, his subjects restored the missing speech sound without any difficulty and could not accurately identify which phoneme had been disturbed,[17] a phenomenon known as the phonemic restoration effect. Therefore, the process of speech perception is not necessarily uni-directional.
Another basic experiment compared recognition of naturally spoken words within a phrase versus the same words in isolation, finding that perception accuracy usually drops in the latter condition. To probe the influence of semantic knowledge on perception, Garnes and Bond (1976) similarly used carrier sentences where target words differed only in a single phoneme (bay/day/gay, for example) whose quality changed along a continuum. When put into different sentences that each naturally led to one interpretation, listeners tended to judge ambiguous words according to the meaning of the whole sentence.[18][19] That is, higher-level language processes connected with morphology, syntax, or semantics may interact with basic speech perception processes to aid in the recognition of speech sounds.
It may be the case that it is not necessary, and perhaps not even possible, for a listener to recognize phonemes before recognizing higher units, such as words. After obtaining at least a fundamental piece of information about the phonemic structure of the perceived entity from the acoustic signal, listeners can compensate for missing or noise-masked phonemes using their knowledge of the spoken language. Compensatory mechanisms might even operate at the sentence level, such as in learned songs, phrases, and verses, an effect backed up by neural coding patterns consistent with the missed continuous speech fragments,[20] despite the lack of all relevant bottom-up sensory input.
The first hypothesis of speech perception was developed with patients who had acquired an auditory comprehension deficit, also known as receptive aphasia. Since then, many disabilities have been classified, which resulted in a true definition of "speech perception".[21] The term "speech perception" describes the process of interest that employs sublexical contexts to the probe process. It consists of many different language and grammatical functions, such as: features, segments (phonemes), syllabic structure (unit of pronunciation), phonological word forms (how sounds are grouped together), grammatical features, morphemic features (prefixes and suffixes), and semantic information (the meaning of the words).
In the early years of this research, investigators were more interested in the acoustics of speech – for instance, the differences between /ba/ and /da/ – but research has since been directed to the brain's response to the stimuli. In recent years, a model has been developed to create a sense of how speech perception works; this model is known as the dual stream model, and it has drastically changed how psychologists look at perception. The first section of the dual stream model is the ventral pathway, which incorporates the middle temporal gyrus, the inferior temporal sulcus, and perhaps the inferior temporal gyrus. The ventral pathway maps phonological representations onto lexical or conceptual representations, that is, the meaning of the words. The second section of the dual stream model is the dorsal pathway, which includes the sylvian parietotemporal area, the inferior frontal gyrus, the anterior insula, and the premotor cortex. Its primary function is to take sensory or phonological stimuli and transform them into an articulatory-motor representation (formation of speech).[22]
Aphasia is an impairment of language processing caused by damage to the brain. Different parts of language processing are affected depending on the area of the brain that is damaged, and aphasia is further classified based on the location of injury or constellation of symptoms. Damage to Broca's area of the brain often results in expressive aphasia, which manifests as impairment in speech production. Damage to Wernicke's area often results in receptive aphasia, in which speech processing is impaired.[23]
Aphasia with impaired speech perception typically shows lesions or damage located in the left temporal or parietal lobes. Lexical and semantic difficulties are common, and comprehension may be affected.[23]
Agnosia is "the loss or diminution of the ability to recognize familiar objects or stimuli usually as a result of brain damage".[24] There are several different kinds of agnosia that affect every one of our senses, but the two most common related to speech are speech agnosia and phonagnosia.
Speech agnosia: Pure word deafness, or speech agnosia, is an impairment in which a person maintains the ability to hear, produce speech, and even read speech, yet is unable to understand or properly perceive speech. These patients seem to have all of the skills necessary to properly process speech, yet they appear to have no experience associated with speech stimuli. Patients have reported, "I can hear you talking, but I can't translate it".[25] Even though they are physically receiving and processing the stimuli of speech, without the ability to determine the meaning of the speech they are essentially unable to perceive the speech at all. No treatments are known, but from case studies and experiments it is known that speech agnosia is related to lesions in the left hemisphere or both hemispheres, specifically right temporoparietal dysfunctions.[26]
Phonagnosia: Phonagnosia is associated with the inability to recognize any familiar voices. In these cases, speech stimuli can be heard and even understood, but the association of the speech with a certain voice is lost. This can be due to "abnormal processing of complex vocal properties (timbre, articulation, and prosody—elements that distinguish an individual voice)".[27]There is no known treatment; however, there is a case report of an epileptic woman who began to experience phonagnosia along with other impairments. Her EEG and MRI results showed "a right cortical parietal T2-hyperintense lesion without gadolinium enhancement and with discrete impairment of water molecule diffusion".[27]So although no treatment has been discovered, phonagnosia can be correlated with postictal parietal cortical dysfunction.
Infants begin the process of language acquisition by being able to detect very small differences between speech sounds. They can discriminate all possible speech contrasts (phonemes). Gradually, as they are exposed to their native language, their perception becomes language-specific, i.e. they learn how to ignore the differences within phonemic categories of the language (differences that may well be contrastive in other languages – for example, English distinguishes two voicing categories of plosives, whereas Thai has three categories; infants must learn which differences their native language treats as distinctive, and which it does not). As infants learn how to sort incoming speech sounds into categories, ignoring irrelevant differences and reinforcing the contrastive ones, their perception becomes categorical. Infants learn to contrast different vowel phonemes of their native language by approximately 6 months of age. The native consonantal contrasts are acquired by 11 or 12 months of age.[28]Some researchers have proposed that infants may be able to learn the sound categories of their native language through passive listening, using a process called statistical learning. Others even claim that certain sound categories are innate, that is, genetically specified (see discussion about innate vs. acquired categorical distinctiveness).
If day-old babies are presented with their mother's voice speaking normally, their mother's voice speaking abnormally (in monotone), and a stranger's voice, they react only to their mother's voice speaking normally. When a human and a non-human sound are played, babies turn their head only to the source of the human sound. It has been suggested that auditory learning begins as early as the prenatal period.[29]
One of the techniques used to examine how infants perceive speech, besides the head-turn procedure mentioned above, is measuring their sucking rate. In such an experiment, a baby sucks a special nipple while being presented with sounds. First, the baby's normal sucking rate is established. Then a stimulus is played repeatedly. When the baby hears the stimulus for the first time the sucking rate increases, but as the baby becomes habituated to the stimulation the sucking rate decreases and levels off. Then, a new stimulus is played to the baby. If the baby perceives the newly introduced stimulus as different from the background stimulus the sucking rate will show an increase.[29]The sucking-rate and the head-turn methods are among the more traditional, behavioral methods for studying speech perception. Among the newer methods (see Research methods below) that help us to study speech perception, near-infrared spectroscopy is widely used in infants.[28]
It has also been discovered that even though infants' ability to distinguish between the phonetic properties of various languages begins to decline around the age of nine months, it is possible to reverse this process with sufficient exposure to a new language. In a research study by Patricia K. Kuhl, Feng-Ming Tsao, and Huei-Mei Liu, it was discovered that if infants are spoken to and interacted with by a native speaker of Mandarin Chinese, they can be conditioned to retain their ability to distinguish speech sounds within Mandarin that are very different from speech sounds found within English. This suggests that, given the right conditions, it is possible to prevent infants' loss of the ability to distinguish speech sounds in languages other than their native language.[30]
A large amount of research has studied how users of a language perceive foreign speech (referred to as cross-language speech perception) or second-language speech (second-language speech perception). The latter falls within the domain of second language acquisition.
Languages differ in their phonemic inventories. Naturally, this creates difficulties when a foreign language is encountered. For example, if two foreign-language sounds are assimilated to a single mother-tongue category the difference between them will be very difficult to discern. A classic example of this situation is the observation that Japanese learners of English will have problems with identifying or distinguishing the English liquid consonants /l/ and /r/ (see Perception of English /r/ and /l/ by Japanese speakers).[31]
Best (1995) proposed a Perceptual Assimilation Model which describes possible cross-language category assimilation patterns and predicts their consequences.[32]Flege (1995) formulated a Speech Learning Model which combines several hypotheses about second-language (L2) speech acquisition and which predicts, put simply, that an L2 sound that is not too similar to a native-language (L1) sound will be easier to acquire than an L2 sound that is relatively similar to an L1 sound (because it will be perceived as more obviously "different" by the learner).[33]
Research in how people with language or hearing impairment perceive speech is not only intended to discover possible treatments. It can provide insight into the principles underlying non-impaired speech perception.[34]Two areas of research can serve as an example:
Aphasia affects both the expression and reception of language. The two most common types, expressive aphasia and receptive aphasia, affect speech perception to some extent. Expressive aphasia causes moderate difficulties for language understanding; the effect of receptive aphasia on understanding is much more severe. It is generally agreed that people with aphasia suffer from perceptual deficits: they usually cannot fully distinguish place of articulation and voicing.[35]As for other features, the difficulties vary. It has not yet been proven whether low-level speech-perception skills are affected in aphasia sufferers or whether their difficulties are caused by higher-level impairment alone.[35]
Cochlear implantation restores access to the acoustic signal in individuals with sensorineural hearing loss. The acoustic information conveyed by an implant is usually sufficient for implant users to properly recognize the speech of people they know even without visual clues.[36]It is more difficult for cochlear implant users to understand unknown speakers and sounds. The perceptual abilities of children who received an implant after the age of two are significantly better than those of people who were implanted in adulthood. A number of factors have been shown to influence perceptual performance, specifically: duration of deafness prior to implantation, age of onset of deafness, age at implantation (such age effects may be related to the critical period hypothesis) and the duration of implant use. There are differences between children with congenital and acquired deafness: postlingually deaf children have better results than the prelingually deaf and adapt to a cochlear implant faster.[36]In both children with cochlear implants and children with normal hearing, sensitivity to vowels and to voice onset time develops before the ability to discriminate place of articulation. Several months following implantation, children with cochlear implants can normalize speech perception.
One of the fundamental problems in the study of speech is how to deal with noise. This is illustrated by the difficulty that computer speech recognition systems have with recognizing human speech. While they can do well at recognizing speech if trained on a specific speaker's voice and under quiet conditions, these systems often do poorly in more realistic listening situations where humans would understand speech with relative ease. To emulate the processing patterns that would be held in the brain under normal conditions, prior knowledge is a key neural factor, since a robust learning history may to an extent override the extreme masking effects involved in the complete absence of continuous speech signals.[20]
Research into the relationship between music and cognition is an emerging field related to the study of speech perception. Originally it was theorized that the neural signals for music were processed in a specialized "module" in the right hemisphere of the brain, while the neural signals for language were processed by a similar "module" in the left hemisphere.[37]However, using technologies such as fMRI machines, research has shown that two regions of the brain traditionally considered to process speech exclusively, Broca's and Wernicke's areas, also become active during musical activities such as listening to a sequence of musical chords.[37]Other studies, such as one performed by Marques et al. in 2006, showed that 8-year-olds who were given six months of musical training showed an increase in both their pitch-detection performance and their electrophysiological measures when made to listen to an unknown foreign language.[38]
Conversely, some research has revealed that, rather than music affecting our perception of speech, our native speech can affect our perception of music. One example is the tritone paradox, in which a listener is presented with two computer-generated tones (such as C and F-sharp) that are half an octave (a tritone) apart and is then asked to determine whether the pitch of the sequence is descending or ascending. One such study, performed by Diana Deutsch, found that the listener's interpretation of ascending or descending pitch was influenced by the listener's language or dialect, showing variation between those raised in the south of England and those in California, and between those from Vietnam and those in California whose native language was English.[37]A second study, performed in 2006 on a group of English speakers and 3 groups of East Asian students at the University of Southern California, discovered that English speakers who had begun musical training at or before age 5 had an 8% chance of having perfect pitch.[37]
Casey O'Callaghan, in his article Experiencing Speech, analyzes whether "the perceptual experience of listening to speech differs in phenomenal character"[39]with regard to understanding the language being heard. He argues that an individual's experience when hearing a language they comprehend, as opposed to their experience when hearing a language they have no knowledge of, displays a difference in phenomenal features, which he defines as "aspects of what an experience is like"[39]for an individual.
If a subject who is a monolingual native English speaker is presented with a stimulus of speech in German, the string of phonemes will appear as mere sounds and will produce a very different experience than if exactly the same stimulus were presented to a subject who speaks German.
He also examines how speech perception changes when one is learning a language. If a subject with no knowledge of the Japanese language were presented with a stimulus of Japanese speech, and then were given the exact same stimuli after being taught Japanese, this same individual would have an extremely different experience.
The methods used in speech perception research can be roughly divided into three groups: behavioral, computational, and, more recently, neurophysiological methods.
Behavioral experiments are based on an active role of a participant, i.e. subjects are presented with stimuli and asked to make conscious decisions about them. This can take the form of an identification test, a discrimination test, similarity rating, etc. These types of experiments help to provide a basic description of how listeners perceive and categorize speech sounds.
Speech perception has also been analyzed through sinewave speech, a form of synthetic speech where the human voice is replaced by sine waves that mimic the frequencies and amplitudes present in the original speech. When subjects are first presented with this speech, the sinewave speech is interpreted as random noises. But when the subjects are informed that the stimuli are actually speech and are told what is being said, "a distinctive, nearly immediate shift occurs"[39]in how the sinewave speech is perceived.
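To make the technique concrete, here is a minimal sketch (an illustration, not any published stimulus-generation tool) of how sinewave speech replicas can be synthesized in Python with NumPy, assuming formant frequency and amplitude tracks have already been extracted from a recording:

import numpy as np

def sinewave_speech(freq_tracks, amp_tracks, frame_rate=100, sr=16000):
    # freq_tracks, amp_tracks: arrays of shape (n_formants, n_frames)
    # holding per-frame formant frequencies (Hz) and linear amplitudes.
    n_formants, n_frames = freq_tracks.shape
    spf = sr // frame_rate                      # samples per analysis frame
    out = np.zeros(n_frames * spf)
    for k in range(n_formants):
        f = np.repeat(freq_tracks[k], spf)      # upsample track to audio rate
        a = np.repeat(amp_tracks[k], spf)
        phase = 2 * np.pi * np.cumsum(f) / sr   # integrate frequency -> phase
        out += a * np.sin(phase)                # one sine wave per formant
    return out / max(n_formants, 1)

Each time-varying sine wave replaces one formant, so the replica preserves the gross spectro-temporal pattern of the utterance while discarding the fine structure of the voice.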
Computational modeling has also been used to simulate how speech may be processed by the brain to produce behaviors that are observed. Computer models have been used to address several questions in speech perception, including how the sound signal itself is processed to extract the acoustic cues used in speech, and how speech information is used for higher-level processes, such as word recognition.[40]
Neurophysiological methods rely on information stemming from more direct and not necessarily conscious (pre-attentive) processes. Subjects are presented with speech stimuli in different types of tasks and the responses of the brain are measured. The brain itself can be more sensitive than it appears to be through behavioral responses. For example, the subject may not show sensitivity to the difference between two speech sounds in a discrimination test, but brain responses may reveal sensitivity to these differences.[28]Methods used to measure neural responses to speech include event-related potentials, magnetoencephalography, and near-infrared spectroscopy. One important response used with event-related potentials is the mismatch negativity, which occurs when speech stimuli are acoustically different from a stimulus that the subject heard previously.
Neurophysiological methods were introduced into speech perception research for several reasons:
Behavioral responses may reflect late, conscious processes and be affected by other systems such as orthography, and thus they may mask a listener's ability to recognize sounds based on lower-level acoustic distributions.[41]
Without the necessity of taking an active part in the test, even infants can be tested; this feature is crucial in research into acquisition processes. The possibility to observe low-level auditory processes independently from the higher-level ones makes it possible to address long-standing theoretical issues such as whether or not humans possess a specialized module for perceiving speech,[42][43]or whether or not some complex acoustic invariance (see lack of invariance above) underlies the recognition of a speech sound.[44]
Some of the earliest work in the study of how humans perceive speech sounds was conducted by Alvin Liberman and his colleagues at Haskins Laboratories.[45]Using a speech synthesizer, they constructed speech sounds that varied in place of articulation along a continuum from /bɑ/ to /dɑ/ to /ɡɑ/. Listeners were asked to identify which sound they heard and to discriminate between two different sounds. The results of the experiment showed that listeners grouped sounds into discrete categories, even though the sounds they were hearing were varying continuously. Based on these results, they proposed the notion of categorical perception as a mechanism by which humans can identify speech sounds.
More recent research using different tasks and methods suggests that listeners are highly sensitive to acoustic differences within a single phonetic category, contrary to a strict categorical account of speech perception.
To provide a theoretical account of the categorical perception data, Liberman and colleagues[46]worked out the motor theory of speech perception, where "the complicated articulatory encoding was assumed to be decoded in the perception of speech by the same processes that are involved in production"[1](this is referred to as analysis-by-synthesis). For instance, the English consonant /d/ may vary in its acoustic details across different phonetic contexts (see above), yet all /d/'s as perceived by a listener fall within one category (voiced alveolar plosive), and that is because "linguistic representations are abstract, canonical, phonetic segments or the gestures that underlie these segments".[1]When describing units of perception, Liberman later abandoned articulatory movements and proceeded to the neural commands to the articulators,[47]and even later to intended articulatory gestures,[48]thus "the neural representation of the utterance that determines the speaker's production is the distal object the listener perceives".[48]The theory is closely related to the modularity hypothesis, which proposes the existence of a special-purpose module, which is supposed to be innate and probably human-specific.
The theory has been criticized for not being able to "provide an account of just how acoustic signals are translated into intended gestures"[49]by listeners. Furthermore, it is unclear how indexical information (e.g. talker identity) is encoded/decoded along with linguistically relevant information.
Exemplar models of speech perception differ from the four theories mentioned above, which suppose that there is no connection between word recognition and talker recognition and that the variation across talkers is "noise" to be filtered out.
The exemplar-based approaches claim that listeners store information for both word recognition and talker recognition. According to this theory, particular instances of speech sounds are stored in the memory of a listener. In the process of speech perception, the remembered instances of, e.g., a syllable stored in the listener's memory are compared with the incoming stimulus so that the stimulus can be categorized. Similarly, when recognizing a talker, all the memory traces of utterances produced by that talker are activated and the talker's identity is determined. Supporting this theory are several experiments reported by Johnson[13]suggesting that signal identification is more accurate when we are familiar with the talker or when we have a visual representation of the talker's gender. When the talker is unpredictable or the sex is misidentified, the error rate in word identification is much higher.
The exemplar models face several objections, two of which are (1) insufficient memory capacity to store every utterance ever heard and (2), concerning the ability to produce what was heard, whether the talker's own articulatory gestures are also stored or computed when producing utterances that would sound like the auditory memories.[13][49]
Kenneth N. Stevens proposed acoustic landmarks and distinctive features as a relation between phonological features and auditory properties. According to this view, listeners inspect the incoming signal for so-called acoustic landmarks, which are particular events in the spectrum carrying information about the gestures which produced them. Since these gestures are limited by the capacities of humans' articulators and listeners are sensitive to their auditory correlates, the lack of invariance simply does not exist in this model. The acoustic properties of the landmarks constitute the basis for establishing the distinctive features. Bundles of them uniquely specify phonetic segments (phonemes, syllables, words).[50]
In this model, the incoming acoustic signal is believed to be first processed to determine the so-called landmarks, which are special spectral events in the signal; for example, vowels are typically marked by a higher frequency of the first formant, while consonants can be specified as discontinuities in the signal and have lower amplitudes in the lower and middle regions of the spectrum. These acoustic features result from articulation. In fact, secondary articulatory movements may be used when enhancement of the landmarks is needed due to external conditions such as noise. Stevens claims that coarticulation causes only limited, and moreover systematic and thus predictable, variation in the signal which the listener is able to deal with. Within this model, therefore, what is called the lack of invariance is simply claimed not to exist.
Landmarks are analyzed to determine certain articulatory events (gestures) which are connected with them. In the next stage, acoustic cues are extracted from the signal in the vicinity of the landmarks by means of mental measuring of certain parameters such as frequencies of spectral peaks, amplitudes in low-frequency region, or timing.
The next processing stage comprises acoustic-cue consolidation and derivation of distinctive features. These are binary categories related to articulation (for example [+/- high], [+/- back], [+/- round lips] for vowels; [+/- sonorant], [+/- lateral], or [+/- nasal] for consonants).
Bundles of these features uniquely identify speech segments (phonemes, syllables, words). These segments are part of the lexicon stored in the listener's memory. Its units are activated in the process of lexical access and mapped onto the original signal to find out whether they match. If not, another attempt with a different candidate pattern is made. In this iterative fashion, listeners thus reconstruct the articulatory events which were necessary to produce the perceived speech signal. This can therefore be described as analysis-by-synthesis.
This theory thus posits that the distal objects of speech perception are the articulatory gestures underlying speech. Listeners make sense of the speech signal by referring to them. The model belongs to those referred to as analysis-by-synthesis.
The fuzzy logical theory of speech perception developed by Dominic Massaro[51]proposes that people remember speech sounds in a probabilistic, or graded, way. It suggests that people remember descriptions of the perceptual units of language, called prototypes. Within each prototype various features may combine. However, features are not just binary (true or false); there is a fuzzy value corresponding to how likely it is that a sound belongs to a particular speech category. Thus, when perceiving a speech signal our decision about what we actually hear is based on the relative goodness of the match between the stimulus information and the values of particular prototypes. The final decision is based on multiple features or sources of information, even visual information (this explains the McGurk effect).[49]Computer models of the fuzzy logical theory have been used to demonstrate that the theory's predictions of how speech sounds are categorized correspond to the behavior of human listeners.[52]
The speech mode hypothesis is the idea that the perception of speech requires the use of specialized mental processing.[53][54]The speech mode hypothesis is an offshoot of Fodor's modularity theory (see modularity of mind). It utilizes a vertical processing mechanism where limited stimuli are processed by special-purpose, stimulus-specific areas of the brain.[54]
There are two versions of the speech mode hypothesis:[53]
Three important experimental paradigms have evolved in the search for evidence for the speech mode hypothesis. These are dichotic listening, categorical perception, and duplex perception.[53]Through the research in these categories it has been found that there may not be a specific speech mode but instead one for auditory codes that require complicated auditory processing. It also seems that modularity is learned in perceptual systems.[53]Despite this, the evidence and counter-evidence for the speech mode hypothesis is still unclear and needs further research.
The direct realist theory of speech perception (mostly associated with Carol Fowler) is a part of the more general theory of direct realism, which postulates that perception allows us to have direct awareness of the world because it involves direct recovery of the distal source of the event that is perceived. For speech perception, the theory asserts that the objects of perception are actual vocal tract movements, or gestures, and not abstract phonemes or (as in the Motor Theory) events that are causally antecedent to these movements, i.e. intended gestures. Listeners perceive gestures not by means of a specialized decoder (as in the Motor Theory) but because information in the acoustic signal specifies the gestures that form it.[55]By claiming that the actual articulatory gestures that produce different speech sounds are themselves the units of speech perception, the theory bypasses the problem of lack of invariance.
|
https://en.wikipedia.org/wiki/Speech_perception
|
In computer programming, feature-oriented programming (FOP) or feature-oriented software development (FOSD) is a programming paradigm for program generation in software product lines (SPLs) and for incremental development of programs.
FOSD arose out of layer-based designs and levels of abstraction in network protocols and extensible database systems in the late-1980s.[1]A program was a stack of layers. Each layer added functionality to previously composed layers and different compositions of layers produced different programs. Not surprisingly, there was a need for a compact language to express such designs. Elementary algebra fit the bill: each layer was a function (aprogram transformation) that added new code to an existing program to produce a new program, and a program's design was modeled by an expression, i.e., a composition of transformations (layers). The figure to the left illustrates the stacking of layers i, j, and h (where h is on the bottom and i is on the top). The algebraic notations i(j(h)), i•j•h, and i+j+h have been used to express these designs.
Over time, layers were equated to features, where a feature is an increment in program functionality. The paradigm for program design and generation was recognized to be an outgrowth of relational query optimization, where query evaluation programs were defined as relational algebra expressions, and query optimization was expression optimization.[2]A software product line is a family of programs where each program is defined by a unique composition of features. FOSD has since evolved into the study of feature modularity, tools, analyses, and design techniques to support feature-based program generation.
The second generation of FOSD research was on feature interactions, which originated in telecommunications. Later, the term feature-oriented programming was coined;[3]this work exposed interactions between layers. Interactions require features to be adapted when composed with other features.
A third generation of research focussed on the fact that every program has multiple representations (e.g., source, makefiles, documentation, etc.) and that adding a feature to a program should elaborate each of its representations so that all remain consistent. Additionally, some of the representations could be generated (or derived) from others. In the sections below, the mathematics of the three most recent generations of FOSD, namely GenVoca,[1]AHEAD,[4]and FOMDD,[5][6]are described, and links to product lines that have been developed using FOSD tools are provided. Also, additional results that apply to all generations of FOSD include: FOSD metamodels, FOSD program cubes, and FOSD feature interactions.
GenVoca (a portmanteau of the names Genesis and Avoca)[1]is a compositional paradigm for defining programs of product lines. Base programs are 0-ary functions or transformations called values, such as base programs f and h. Features are unary functions/transformations that elaborate (modify, extend, refine) a program, where + denotes function composition: j + f is program f extended with feature j. The design of a program is a named expression, e.g., p2 = j + f (program p2 adds feature j to base program f) or p3 = i + j + h.
A GenVoca model of a domain or software product line is a collection of base programs and features (see MetaModels and Program Cubes).
The set of programs (expressions) that can be created defines a product line. Expression optimization is program design optimization, and expression evaluation is program generation.
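A minimal sketch of this algebra (names and artifacts invented for illustration) models values as base programs and features as unary functions, so that evaluating the expression p3 = i + j + h is just function composition:

# Values are base programs; here a program is simply a list of code lines.
h = ["core routines of base program h"]

# Features are unary functions that elaborate (extend) a program.
def i(program):
    return program + ["code contributed by feature i"]

def j(program):
    return program + ["code contributed by feature j"]

# The design p3 = i + j + h is evaluated as the composition i(j(h)).
p3 = i(j(h))
print("\n".join(p3))

Different compositions of the same features yield different members of the product line, which is the sense in which expression evaluation is program generation.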
GenVoca features were originally implemented using C preprocessor (#ifdef feature ... #endif) techniques. A more advanced technique, called mixin layers, showed the connection of features to object-oriented collaboration-based designs.
Algebraic Hierarchical Equations for Application Design (AHEAD)[4]generalized GenVoca in two ways. First, it revealed the internal structure of GenVoca values as tuples. Every program has multiple representations, such as source, documentation, bytecode, and makefiles. A GenVoca value is a tuple of program representations. In a product line of parsers, for example, a base parser f is defined by its grammar gf, Java source sf, and documentation df. Parser f is modeled by the tuple f = [gf, sf, df]. Each program representation may have subrepresentations, and they too may have subrepresentations, recursively. In general, a GenVoca value is a tuple of nested tuples that define a hierarchy of representations for a particular program.
Example: Suppose terminal representations are files. In AHEAD, grammar gf corresponds to a single BNF file, source sf corresponds to a tuple of Java files [c1…cn], and documentation df is a tuple of HTML files [h1…hk]. A GenVoca value (nested tuples) can be depicted as a directed graph: the graph for parser f is shown in the figure to the right. Arrows denote projections, i.e., mappings from a tuple to one of its components. AHEAD implements tuples as file directories, so f is a directory containing file gf and subdirectories sf and df. Similarly, directory sf contains files c1…cn, and directory df contains files h1…hk.
Second, AHEAD expresses features as nested tuples of unary functions called deltas. Deltas can be program refinements (semantics-preserving transformations), extensions (semantics-extending transformations), or interactions (semantics-altering transformations). We use the neutral term "delta" to represent all of these possibilities, as each occurs in FOSD.
To illustrate, suppose feature j extends a grammar by Δgj (new rules and tokens are added), extends source code by Δsj (new classes and members are added and existing methods are modified), and extends documentation by Δdj. The tuple of deltas for feature j is modeled by j = [Δgj, Δsj, Δdj], which we call a delta tuple. Elements of delta tuples can themselves be delta tuples. Example: Δsj represents the changes that are made to each class in sf by feature j, i.e., Δsj = [Δc1…Δcn].
The representations of a program are computed recursively by nested vector addition. The representations for parser p2 (whose GenVoca expression is j + f) are p2 = j + f = [Δgj + gf, Δsj + sf, Δdj + df]. That is, the grammar of p2 is the base grammar composed with its extension (Δgj + gf), the source of p2 is the base source composed with its extension (Δsj + sf), and so on. As elements of delta tuples can themselves be delta tuples, composition recurses, e.g., Δsj + sf = [Δc1…Δcn] + [c1…cn] = [Δc1 + c1…Δcn + cn].
Summarizing, GenVoca values are nested tuples of program artifacts, and features are nested delta tuples, where + recursively composes them by vector addition. This is the essence of AHEAD.
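A toy rendering of this idea (a sketch under invented artifact names, not the AHEAD Tool Suite itself) models values and delta tuples as nested dictionaries and + as recursive element-wise composition:

def compose(delta, value):
    # AHEAD's +: recurse through nested tuples (dicts), composing at leaves.
    if isinstance(value, dict):
        return {k: compose(delta[k], value[k]) for k in value}
    return value + " extended by " + delta   # leaf artifact composition

# Base parser f = [gf, sf, df] and feature j = [Dgj, Dsj, Ddj].
f = {"g": "gf", "s": {"c1": "c1", "c2": "c2"}, "d": "df"}
j = {"g": "Dgj", "s": {"c1": "Dc1", "c2": "Dc2"}, "d": "Ddj"}

p2 = compose(j, f)        # p2 = j + f
print(p2["s"]["c1"])      # -> "c1 extended by Dc1"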
The ideas presented above concretely expose two FOSD principles. The Principle of Uniformity states that all program artifacts are treated and modified in the same way (this is evidenced by the deltas for different artifact types above). The Principle of Scalability states that all levels of abstraction are treated uniformly (this gives rise to the hierarchical nesting of tuples above).
The original implementation of AHEAD is the AHEAD Tool Suite and Jak language, which exhibit both the Principle of Uniformity and the Principle of Scalability. Next-generation tools include CIDE[10]and FeatureHouse.[11]
Feature-Oriented Model-Driven Design (FOMDD)[5][6]combines the ideas of AHEAD with Model-Driven Design (MDD) (a.k.a. Model-Driven Architecture (MDA)). AHEAD functions capture the lockstep update of program artifacts when a feature is added to a program. But there are other functional relationships among program artifacts that express derivations. For example, the relationship between a grammar gf and its parser source sf is defined by a compiler-compiler tool, e.g., javacc. Similarly, the relationship between Java source sf and its bytecode bf is defined by the javac compiler. A commuting diagram expresses these relationships. Objects are program representations, downward arrows are derivations, and horizontal arrows are deltas. The figure to the right shows the commuting diagram for program p3 = i + j + h = [g3, s3, b3].
A fundamental property of a commuting diagram is that all paths between two objects are equivalent. For example, one way to derive the bytecode b3 of parser p3 (lower right object in the figure to the right) from grammar gh of parser h (upper left object) is to derive the bytecode bh and refine it to b3, while another way refines gh to g3 and then derives b3, where + represents delta composition and () is function or tool application.
There are C(4,2) = 6 possible paths to derive the bytecode b3 of parser p3 from the grammar gh of parser h. Each path represents a metaprogram whose execution generates the target object (b3) from the starting object (gh).
There is a potential optimization: traversing each arrow of a commuting diagram has a cost. The cheapest (i.e., shortest) path between two objects in a commuting diagram is a geodesic, which represents the most efficient metaprogram that produces the target object from a given object.
Commuting diagrams are important for at least two reasons: (1) there is the possibility of optimizing the generation of artifacts (e.g., geodesics) and (2) they specify different ways of constructing a target object from a starting object.[5][12]A path through a diagram corresponds to a tool chain: for an FOMDD model to be consistent, it should be proven (or demonstrated through testing) that all tool chains that map one object to another in fact yield equivalent results. If this is not the case, then either there is a bug in one or more of the tools or the FOMDD model is wrong.
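Finding a geodesic is an ordinary shortest-path problem once the commuting diagram is treated as a weighted graph. The sketch below (with invented arrow costs) uses Dijkstra's algorithm to pick the cheapest metaprogram from grammar gh to bytecode b3:

import heapq

def geodesic(arrows, start, goal):
    # arrows: node -> list of (next_node, cost, tool) edges.
    frontier = [(0, start, [])]
    visited = set()
    while frontier:
        cost, node, tools = heapq.heappop(frontier)
        if node == goal:
            return cost, tools
        if node in visited:
            continue
        visited.add(node)
        for nxt, c, tool in arrows.get(node, []):
            heapq.heappush(frontier, (cost + c, nxt, tools + [tool]))
    return None

arrows = {  # horizontal arrows are deltas, vertical arrows are derivations
    "gh": [("g3", 2, "+grammar deltas"), ("sh", 5, "javacc")],
    "g3": [("s3", 5, "javacc")],
    "sh": [("s3", 2, "+source deltas"), ("bh", 4, "javac")],
    "s3": [("b3", 4, "javac")],
    "bh": [("b3", 1, "+bytecode deltas")],
}
print(geodesic(arrows, "gh", "b3"))
# -> (10, ['javacc', 'javac', '+bytecode deltas'])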
|
https://en.wikipedia.org/wiki/Feature-oriented_programming
|
GitHub Copilot is a code completion and automatic programming tool developed by GitHub and OpenAI that assists users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) by autocompleting code.[1]Currently available by subscription to individual developers and to businesses, the generative artificial intelligence software was first announced by GitHub on 29 June 2021.[2]Users can choose the large language model used for generation.[3]
On June 29, 2021, GitHub announced GitHub Copilot for technical preview in the Visual Studio Code development environment.[1][4]GitHub Copilot was released as a plugin on the JetBrains marketplace on October 29, 2021.[5]On October 27, 2021, GitHub released the GitHub Copilot Neovim plugin as a public repository.[6]GitHub announced Copilot's availability for the Visual Studio 2022 IDE on March 29, 2022.[7]On June 21, 2022, GitHub announced that Copilot was out of "technical preview" and available as a subscription-based service for individual developers.[8]
GitHub Copilot is the evolution of the "Bing Code Search" plugin for Visual Studio 2013, which was a Microsoft Research project released in February 2014.[9]This plugin integrated with various sources, including MSDN and Stack Overflow, to provide high-quality contextually relevant code snippets in response to natural language queries.[10]
When provided with a programming problem in natural language, Copilot is capable of generating solution code.[11]It is also able to describe input code in English and translate code between programming languages.[11]
According to its website, GitHub Copilot includes assistive features for programmers, such as the conversion of code comments to runnable code, and autocomplete for chunks of code, repetitive sections of code, and entire methods and/or functions.[2][12]GitHub reports that Copilot's autocomplete feature is accurate roughly half of the time; with some Python function header code, for example, Copilot correctly autocompleted the rest of the function body code 43% of the time on the first try and 57% of the time after ten attempts.[2]
GitHub states that Copilot's features allow programmers to navigate unfamiliar coding frameworks and languages by reducing the amount of time users spend reading documentation.[2]
GitHub Copilot was initially powered by the OpenAI Codex,[13]a modified, production version of GPT-3.[14]The Codex model is additionally trained on gigabytes of source code in a dozen programming languages. Copilot's OpenAI Codex was trained on a selection of the English language, public GitHub repositories, and other publicly available source code.[2]This includes a filtered dataset of 159 gigabytes of Python code sourced from 54 million public GitHub repositories.[15]OpenAI's GPT-3 is licensed exclusively to Microsoft, GitHub's parent company.[16]
In November 2023, Copilot Chat was updated to use OpenAI's GPT-4 model.[17]In 2024, Copilot began allowing users to choose between different large language models, such as GPT-4o or Claude 3.5.[3]
Since Copilot's release, there have been concerns with its security and educational impact, as well as licensing controversy surrounding the code it produces.[18][11][19]
While GitHub CEO Nat Friedman stated in June 2021 that "training ML systems on public data is fair use",[20]a class-action lawsuit filed in November 2022 called this "pure speculation", asserting that "no Court has considered the question of whether 'training ML systems on public data is fair use.'"[21]The lawsuit from Joseph Saveri Law Firm, LLP challenges the legality of Copilot on several claims, ranging from breach of contract with GitHub's users to breach of privacy under the CCPA for sharing PII.[22][21]
GitHub admits that a small proportion of the tool's output may be copied verbatim, which has led to fears that the output code is insufficiently transformative to be classified as fair use and may infringe on the copyright of the original owner.[18]In June 2022, the Software Freedom Conservancy announced it would end all uses of GitHub in its own projects,[23]accusing Copilot of ignoring code licenses used in training data.[24]In a customer-support message, GitHub stated that "training machine learning models on publicly available data is considered fair use across the machine learning community",[21]but the class-action lawsuit called this "false" and additionally noted that "regardless of this concept's level of acceptance in 'the machine learning community,' under Federal law, it is illegal".[21]
On July 28, 2021, the Free Software Foundation (FSF) published a funded call for white papers on philosophical and legal questions around Copilot.[25]Donald Robertson, the Licensing and Compliance Manager of the FSF, stated that "Copilot raises many [...] questions which require deeper examination."[25]On February 24, 2022, the FSF announced that it had received 22 papers on the subject and, using an anonymous review process, chose 5 papers to highlight.[26]
The Copilot service is cloud-based and requires continuous communication with the GitHub Copilot servers.[27]This opaque architecture has fueled concerns over telemetry and the data mining of individual keystrokes.[28][29]
In late 2022, GitHub Copilot was accused of emitting Quake game source code with no author attribution or license.[30]
|
https://en.wikipedia.org/wiki/GitHub_Copilot
|
Language-oriented programming (LOP)[1]is a software-development paradigm where "language" is a software building block with the same status as objects, modules and components.[2]Rather than solving problems in general-purpose programming languages, the programmer creates one or more domain-specific languages (DSLs) for the problem first, and solves the problem in those languages. Language-oriented programming was first described in detail in Martin Ward's 1994 paper Language Oriented Programming.[1]
The concept of language-oriented programming takes the approach of capturing requirements in the user's terms, and then trying to create an implementation language as isomorphic as possible to the user's descriptions, so that the mapping between requirements and implementation is as direct as possible. A measure of the closeness of this isomorphism is the "redundancy" of the language, defined as the number of editing operations needed to implement a stand-alone change in requirements. It is not assumed a priori which language is best for implementing the new language. Rather, the developer can choose among options created by analysis of the information flows — what information is acquired, what its structure is, when it is acquired, from whom, and what is done with it.[3]
The Racket programming language and RascalMPL were designed to support language-oriented programming from the ground up.[2]Other language workbench[4]tools such as JetBrains MPS, Kermeta, or Xtext provide the tools to design and implement DSLs and support language-oriented programming.[5]
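As a small illustration of the idea (a hypothetical DSL invented for this example, not one taken from the cited systems), the Python sketch below interprets a tiny validation language whose statements map nearly one-to-one onto user-level requirements, keeping the "redundancy" close to one edit per requirement change:

import re

RULES = """
field age    must be integer between 0 and 130
field email  must match .+@.+
"""

def check(record, rules=RULES):
    errors = []
    for line in rules.strip().splitlines():
        # Toy grammar: assumes every rule line is well formed.
        m = re.match(r"field (\w+)\s+must (?:be integer between (\d+) and (\d+)|match (\S+))", line)
        name, lo, hi, pattern = m.group(1), m.group(2), m.group(3), m.group(4)
        value = record.get(name)
        if lo is not None:   # integer-range rule
            if not (isinstance(value, int) and int(lo) <= value <= int(hi)):
                errors.append(name)
        else:                # regular-expression rule
            if not (isinstance(value, str) and re.fullmatch(pattern, value)):
                errors.append(name)
    return errors

print(check({"age": 42, "email": "ada@example.org"}))  # -> []
print(check({"age": 200, "email": "nope"}))            # -> ['age', 'email']

A change in requirements ("allow ages up to 140") is then a single edit to the DSL text rather than scattered changes in general-purpose code.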
|
https://en.wikipedia.org/wiki/Language-oriented_programming
|
Semantic translation is the process of using semantic information to aid in the translation of data in one representation or data model to another representation or data model.[1]Semantic translation takes advantage of semantics that associate meaning with individual data elements in one dictionary to create an equivalent meaning in a second system.
An example of semantic translation is the conversion of XML data from one data model to a second data model using formal ontologies for each system, such as the Web Ontology Language (OWL). This is frequently required by intelligent agents that wish to perform searches on remote computer systems that use different data models to store their data elements. The process of allowing a single user to search multiple systems with a single search request is also known as federated search.
Semantic translation should be differentiated from data mapping tools that do simple one-to-one translation of data from one system to another without actually associating meaning with each data element.
Semantic translation requires that data elements in the source and destination systems have "semantic mappings" to a central registry or registries of data elements. The simplest mapping is, of course, equivalence.
There are three types of semantic equivalence:
Semantic translation is very difficult if the terms in a particular data model do not have direct one-to-one mappings to data elements in a foreign data model. In that situation, an alternative approach must be used to find mappings from the original data to the foreign data elements. This problem can be alleviated by centralized metadata registries that use the ISO/IEC 11179 standards, such as the National Information Exchange Model (NIEM).
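A minimal sketch of registry-mediated translation (system names, element names, and registry URIs are all invented for illustration): each system maps its local data elements to concepts in a central registry, and records are translated by passing through those shared concepts:

REGISTRY_MAP = {  # local element name -> shared registry concept
    "hospital_a": {"pt_surname": "urn:reg:PersonSurname",
                   "pt_dob":     "urn:reg:PersonBirthDate"},
    "clinic_b":   {"lastName":   "urn:reg:PersonSurname",
                   "birthDate":  "urn:reg:PersonBirthDate"},
}

def translate(record, source, target):
    to_concept = REGISTRY_MAP[source]
    from_concept = {c: n for n, c in REGISTRY_MAP[target].items()}
    out = {}
    for name, value in record.items():
        concept = to_concept.get(name)
        if concept in from_concept:   # simple semantic equivalence
            out[from_concept[concept]] = value
        # Elements without a shared concept need richer, non-1:1 mappings.
    return out

print(translate({"pt_surname": "Hopper", "pt_dob": "1906-12-09"},
                "hospital_a", "clinic_b"))
# -> {'lastName': 'Hopper', 'birthDate': '1906-12-09'}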
|
https://en.wikipedia.org/wiki/Semantic_translation
|
A fourth-generation programming language (4GL) is a high-level computer programming language that belongs to a class of languages envisioned as an advancement upon third-generation programming languages (3GL). Each of the programming language generations aims to provide a higher level of abstraction of the internal computer hardware details, making the language more programmer-friendly, powerful, and versatile. While the definition of 4GL has changed over time, it can be typified by operating more with large collections of information at once rather than focusing on just bits and bytes. Languages claimed to be 4GL may include support for database management, report generation, mathematical optimization, graphical user interface (GUI) development, or web development. Some researchers state that 4GLs are a subset of domain-specific languages.[1][2]
The concept of 4GL was developed from the 1970s through the 1990s, overlapping most of the development of 3GL, with 4GLs identified as "non-procedural" or "program-generating" languages, contrasted with 3GLs being algorithmic or procedural languages. While 3GLs like C, C++, C#, Java, and JavaScript remain popular for a wide variety of uses, 4GLs as originally defined found uses focused on databases, reports, and websites.[3]Some advanced 3GLs like Python, Ruby, and Perl combine some 4GL abilities within a general-purpose 3GL environment,[4]and libraries with 4GL-like features have been developed as add-ons for most popular 3GLs, producing languages that are a mix of 3GL and 4GL, blurring the distinction.[5]
In the 1980s and 1990s, there were efforts to develop fifth-generation programming languages (5GL).
Though used earlier in papers and discussions, the term 4GL was first used formally by James Martin in his 1981 book Application Development Without Programmers[6]to refer to non-procedural, high-level specification languages. In some primitive way, early 4GLs were included in the Informatics MARK-IV (1967) product and Sperry's MAPPER (1969 internal use, 1979 release).
The motivations for the 4GL's inception and continued interest are several. The term can apply to a large set of software products. It can also apply to an approach that looks for greater semantic properties and implementation power. Just as the 3GL offered greater power to the programmer, so too did the 4GL open up the development environment to a wider population.
The early input scheme for the 4GL supported entry of data within the 72-character limit of the punched card (8 bytes used for sequencing), where a card's tag would identify the type or function. With judicious use of a few cards, the 4GL deck could offer a wide variety of processing and reporting capability, whereas the equivalent functionality coded in a 3GL could subsume, perhaps, a whole box or more of cards.[7]
The 72-character format continued for a while as hardware progressed to larger memory and terminal interfaces. Even with its limitations, this approach supported highly sophisticated applications.
As interfaces improved and allowed longer statement lengths and grammar-driven input handling, greater power ensued. An example of this is described on the Nomad page.
The development of the 4GL was influenced by several factors, with hardware and operating system constraints having a large weight. When the 4GL was first introduced, a disparate mix of hardware and operating systems mandated custom application development support that was specific to the system in order to ensure sales. One example is the MAPPER system developed by Sperry. Though it has roots back to the beginning, the system has proven successful in many applications and has been ported to modern platforms. The latest variant is embedded in the BIS[8]offering of Unisys. MARK-IV is now known as VISION:BUILDER and is offered by Computer Associates.
The Santa Fe railroad used MAPPER to develop a system in a project that was an early example of 4GL, rapid prototyping, and programming by users.[9]The idea was that it was easier to teach railroad experts to use MAPPER than to teach programmers the "intricacies of railroad operations".[10]
One of the early (and portable) languages that had 4GL properties was RAMIS, developed by Gerald C. Cohen at Mathematica, a mathematical software company. Cohen left Mathematica and founded Information Builders to create a similar reporting-oriented 4GL, called FOCUS.
Later 4GL types are tied to a database system and are far different from the earlier types in their use of techniques and resources that have resulted from the general improvement of computing with time.
An interesting twist to the 4GL scene is the realization that graphical interfaces and the related reasoning done by the user form a 'language' that is poorly understood.
A number of different types of 4GLs exist:
Some 4GLs have integrated tools that allow for the easy specification of all the required information:
In the twenty-first century, 4GL systems have emerged as "low code" environments or platforms for rapid application development in short periods of time. Vendors often provide sample systems such as CRM, contract management, and bug tracking, from which development can occur with little programming.[11]
Report generator tools extract data from files or databases to create reports in a wide range of formats.
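The contrast with a 3GL is that the report is described declaratively rather than coded as loops. A toy sketch of this style in Python (the specification format is invented for this example, not taken from any particular 4GL product):

import csv, io

SPEC = {"group_by": "region", "sum": "sales"}   # one line of "what", not "how"

def report(rows, spec):
    totals = {}
    for row in rows:                             # the tool supplies the "how"
        key = row[spec["group_by"]]
        totals[key] = totals.get(key, 0.0) + float(row[spec["sum"]])
    lines = [f"{spec['group_by']}\ttotal"]
    lines += [f"{k}\t{v:g}" for k, v in sorted(totals.items())]
    return "\n".join(lines)

data = csv.DictReader(io.StringIO("region,sales\nnorth,120\nsouth,80\nnorth,40\n"))
print(report(data, SPEC))
# region  total
# north   160
# south   80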
|
https://en.wikipedia.org/wiki/Fourth-generation_programming_language
|
A low-code development platform (LCDP) provides a development environment used to create application software, generally through a graphical user interface (as opposed to only writing code, though some coding is possible and may be required). A low-code platform may produce entirely operational applications, or require additional coding for specific situations. Low-code development platforms typically operate at a high abstraction level and can reduce the amount of traditional development time spent, enabling accelerated delivery of business applications. A common benefit is that a wider range of people can contribute to the application's development, not only those with coding skills, though good governance is needed to adhere to common rules and regulations. LCDPs can also lower the initial cost of setup, training, deployment, and maintenance.[1]
Low-code development platforms trace their roots back to fourth-generation programming languages and the rapid application development tools of the 1990s and early 2000s. Like these predecessor development environments, LCDPs are based on the principles of model-driven architecture, automatic code generation, and visual programming.[2]The concept of end-user development also existed previously, although LCDPs brought some new ways of approaching it. The low-code development platform market traces its origins back to 2011.[3]The specific name "low-code" was not put forward until 9 June 2014,[1]when it was used by the industry analyst firm Forrester Research. Along with no-code development platforms, low-code was described as "extraordinarily disruptive" in Forbes magazine in 2017.[4]
As a result of the microcomputer revolution, businesses have deployed computers widely across their employee bases, enabling widespread automation of business processes using software.[5]The need for software automation and new applications for business processes places demands on software developers to create custom applications in volume, tailoring them to organizations' unique needs.[6]Low-code development platforms have been developed as a means to allow for quick creation and use of working applications that can address the specific process and data needs of the organization.[7]
Research firm Forrester estimated in 2016 that the total market for low-code development platforms would grow to $15.5 billion by 2020.[8]Segments in the market include database, request handling, mobile, process, and general-purpose low-code platforms.[9]
Low-code development's market growth can be attributed to its flexibility and ease.[10]Low-code development platforms are shifting their focus toward general purpose of applications, with the ability to add in custom code when needed or desired.[3]
Mobile accessibility is one of the driving factors of using low-code development platforms.[6]Instead of developers having to spend time creating multi-device software, low-code packages typically come with that feature as standard.[6]
Because they require less coding knowledge, nearly anyone in a software development environment can learn to use a low-code development platform.[11]Features like drag-and-drop interfaces help users visualize and build the application.[8]
Concerns over low-code development platform security and compliance are growing, especially for apps that use consumer data. There can be concerns over the security of apps built so quickly, and a possible lack of due governance can lead to compliance issues.[10]However, low-code apps also fuel security innovations. With continuous app development in mind, it becomes easier to create secure data workflows.
Some IT professionals question whether low-code development platforms are suitable for large-scale and mission-critical enterprise applications.[12]Others have questioned whether these platforms actually make development cheaper or easier.[13]Additionally, some CIOs have expressed concern that adopting low-code development platforms internally could lead to an increase in unsupported applications built by shadow IT.[14]
|
https://en.wikipedia.org/wiki/Low-code_development_platform
|
Emergent Coding is a decentralized software development paradigm employing a type of software component that cannot be copied or reused, with the objective of achieving both workable developer specialization and a practical software components market.[1][2]
Emergent Coding is a decentralized software development paradigm employing a new type of software component that cannot be copied or reused.[1]The method ensures developers can safely list their software components for public sale without endangering prospects for repeat business,[3]a feature essential for both workable developer specialization and realizing Douglas McIlroy's 1968 vision of a software components market.[2]
The change is a reversal of integration responsibility: instead of fetching a component in the traditional sense, a developer provides a project construction-site to the supplier, and that supplier integrates their component into the project. The reversal switches the view of components from a library-of-routines to a catalogue-of-design-services.[1]
The reversal permits this new component type to scale properly, as the construction-site can be readily partitioned to engage sub-contractors, allowing components to be fielded as an assemblage of smaller ones which do likewise. Scaling down allows small components to absorb the role of the traditional compiler, removing its centralism from software development, while scaling up results in domain-specific components for expressing project requirements.[1]
Douglas McIlroy, at a NATO conference in 1968, observed: "The Software Industry is Not Industrialized"[2]and proposed a software components market with component "distribution by communication link", whereby "even very small components might be profitably marketed". McIlroy imagined a "Sears-Roebuck" style catalogue "to have for [his] own were [he] purchasing components." McIlroy's proposal did not address how viable developer specialization might come about if we are to turn our "crofters" into "industrialists". Specifically, while it is easy for a developer to specialize, it is virtually impossible for them to build a viable business as a specialist.
In late 1994, Noel Lovisa proposed reversing the integration responsibility as a means of shielding supplier intellectual property, thereby preserving prospects for repeat business and establishing a workable basis for developer specialization. Lovisa founded Code Valley Corp Pty Ltd[4]in May 2000 to create and field a practical software components market based on the principle, releasing a white paper in 2016[1]and conducting trials of a centralized software components market that same year. In June 2018, Lovisa delivered a keynote address at ICSE 2018 in Gothenburg, Sweden,[5][6]which, being the 50th anniversary of the 1968 NATO Software Engineering Conference,[7][8][2]was attended by McIlroy and other industry leaders. In September 2023, McIlroy extended an invitation to Lovisa to present Emergent Coding at Dartmouth College, New Hampshire.[9]
In late 2023, Code Valley began trials of a decentralized and fully non-custodial software components market featuring a custom Integrated Development Environment (IDE), over 5000 software components occupying 4 levels of abstraction (Behaviour, Systems, Data, Byte), a Distributed Fault Tracing System (DFT), a peer-to-peer electronic cash payment system, and an interactive catalogue of component prices, data sheets, contract specifications, and reference designs.
The implementation, itself built with emergent coding, is expected to publicly launch in 2025.
Component-based software development begins with drafting an expression containing a series of contract statements for engaging the desired component suppliers, assisted by the contract specifications published in the component catalogue. As all components in the catalogue have a listed price, the total cost of the project can be reliably determined from the expression before committing to construction. When the expression is deemed in order and the costs acceptable, the project can be built. During the build process, the IDE parses the expression and engages the contractors by forwarding contracts and payments to each. These contractors receive a portion of their requirements directly via their contract terms, with the balance determined via collaboration between contractors as authorized by the contract terms. Each component contractor concludes their contract by returning a fragment of code and data that, when concatenated, forms the resultant project binary.[1]
Each contractor engaged verifies payment and allocates a job against the contract, being sure to return the job number to the client so the client may forward communication authorizations for the collaborations between contractors as detailed in the expression. These contractors receive a portion of their requirements directly via their contract terms, with the balance determined via collaboration with peers as authorized by the contract terms. Once in possession of project requirements, each sub-contracts smaller components as directed by their special knowledge, which do likewise. Each component subcontractor returns a fragment of code and data that, when concatenated, form single code and data fragments for receipt by the client, concluding the contract.[1]
Leaf contractors in the project contracting tree similarly verify payment and allocate a job against the contract, being sure to return the job number to the client so the client may forward communication authorizations for the collaborations between contractors as detailed by their special knowledge. As before, these contractors receive a portion of their requirements directly via their contract terms, with the balance determined via collaboration with peers as authorized by the contract terms; however, having gained sufficient understanding of their design-time context, they render their code and data fragments directly for receipt by the client, concluding the contract.[1]
|
https://en.wikipedia.org/wiki/Emergent_Coding
|
In a multitasking computer system, processes may occupy a variety of states. These distinct states may not be recognized as such by the operating system kernel. However, they are a useful abstraction for the understanding of processes.
The following typical process states are possible on computer systems of all kinds. In most of these states, processes are "stored" on main memory.
When a process is first created, it occupies the "created" or "new" state. In this state, the process awaits admission to the "ready" state. Admission will be approved or delayed by a long-term, or admission, scheduler. Typically in most desktop computer systems, this admission will be approved automatically. However, for real-time operating systems this admission may be delayed. In a realtime system, admitting too many processes to the "ready" state may lead to oversaturation and overcontention of the system's resources, leading to an inability to meet process deadlines.
A "ready" or "waiting" process has been loaded intomain memoryand is awaiting execution on aCPU(to be context switched onto the CPU by the dispatcher, or short-term scheduler). There may be many "ready" processes at any one point of the system's execution—for example, in a one-processor system, only one process can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution.
A ready queue or run queue is used in computer scheduling. Modern computers are capable of running many different programs or processes at the same time. However, the CPU is only capable of handling one process at a time. Processes that are ready for the CPU are kept in a queue for "ready" processes. Other processes that are waiting for an event to occur, such as loading information from a hard drive or waiting on an internet connection, are not in the ready queue.
A process moves into the running state when it is chosen for execution. The process's instructions are executed by one of the CPUs (or cores) of the system. There is at most one running process per CPU or core. A process can run in either of two modes, namely kernel mode or user mode.[1][2]
A process transitions to a blocked state when it cannot carry on without an external change in state or event occurring. For example, a process may block on a call to an I/O device such as a printer, if the printer is not available. Processes also commonly block when they require user input, or require access to a critical section which must be executed atomically. Such critical sections are protected using a synchronization object such as a semaphore or mutex.
A process may be terminated, either from the "running" state by completing its execution or by explicitly being killed. In either of these cases, the process moves to the "terminated" state. The underlying program is no longer executing, but the process remains in the process table as a zombie process until its parent process calls the wait system call to read its exit status, at which point the process is removed from the process table, finally ending the process's lifetime. If the parent fails to call wait, this continues to consume the process table entry (concretely the process identifier or PID), and causes a resource leak.
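On POSIX systems, the reaping just described can be observed directly. The following minimal sketch (POSIX-only; Python 3.9+ for os.waitstatus_to_exitcode) forks a child that exits immediately, leaving a zombie until the parent's waitpid call releases its process table entry:

    import os, time

    pid = os.fork()
    if pid == 0:
        os._exit(7)                      # child terminates and becomes a zombie
    else:
        time.sleep(0.1)                  # child sits in the process table as a zombie
        _, status = os.waitpid(pid, 0)   # reap: the table entry and PID are released
        print("child exit status:", os.waitstatus_to_exitcode(status))  # -> 7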
Two additional states are available for processes in systems that support virtual memory. In both of these states, processes are "stored" on secondary memory (typically a hard disk).
(Also called suspended and waiting.) In systems that support virtual memory, a process may be swapped out, that is, removed from main memory and placed on external storage by the scheduler. From here the process may be swapped back into the waiting state.
(Also called suspended and blocked.) Processes that are blocked may also be swapped out. In this event the process is both swapped out and blocked, and may be swapped back in again under the same circumstances as a swapped out and waiting process (although in this case, the process will move to the blocked state, and may still be waiting for a resource to become available).
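The lifecycle described above amounts to a small state machine. The following Python sketch is an illustrative summary of the states and legal transitions named in this article, not any particular kernel's implementation:

    from enum import Enum, auto

    class ProcState(Enum):
        NEW = auto(); READY = auto(); RUNNING = auto(); BLOCKED = auto()
        SWAPPED_WAITING = auto(); SWAPPED_BLOCKED = auto(); TERMINATED = auto()

    # Legal transitions, keyed by (current state, event).
    TRANSITIONS = {
        (ProcState.NEW, "admit"): ProcState.READY,            # long-term scheduler
        (ProcState.READY, "dispatch"): ProcState.RUNNING,     # short-term scheduler
        (ProcState.RUNNING, "preempt"): ProcState.READY,      # time slice expired
        (ProcState.RUNNING, "block"): ProcState.BLOCKED,      # e.g. waiting on I/O
        (ProcState.BLOCKED, "event"): ProcState.READY,        # awaited event occurred
        (ProcState.RUNNING, "exit"): ProcState.TERMINATED,
        (ProcState.READY, "swap_out"): ProcState.SWAPPED_WAITING,
        (ProcState.SWAPPED_WAITING, "swap_in"): ProcState.READY,
        (ProcState.BLOCKED, "swap_out"): ProcState.SWAPPED_BLOCKED,
        (ProcState.SWAPPED_BLOCKED, "swap_in"): ProcState.BLOCKED,
        (ProcState.SWAPPED_BLOCKED, "event"): ProcState.SWAPPED_WAITING,
    }

    def step(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"illegal transition {event!r} from {state}")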
|
https://en.wikipedia.org/wiki/Process_state
|
In computing, a context switch is the process of storing the state of a processor thread, so that it can be restored and resume execution at a later point, and then restoring a different, previously saved, state.[1] This allows multiple processes to share a single central processing unit (CPU), and is an essential feature of a multiprogramming or multitasking operating system. In a traditional CPU, each process – a program in execution – uses the various CPU registers to store data and hold the current state of the running process. However, in a multitasking operating system, the operating system switches between processes or threads to allow the execution of multiple processes simultaneously.[2] For every switch, the operating system must save the state of the currently running process, followed by loading the next process state, which will run on the CPU. This sequence of operations that stores the state of the running process and loads the following running process is called a context switch.
The precise meaning of the phrase "context switch" varies. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed. A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance.[3]: 28
Context switches are usually computationally intensive, and much of the design of operating systems is aimed at optimizing the use of context switches. Switching from one process to another requires a certain amount of time for doing the administration – saving and loading registers and memory maps, updating various tables and lists, etc. What is actually involved in a context switch depends on the architecture, the operating system, and the number of resources shared (threads that belong to the same process share many resources compared to unrelated non-cooperating processes).
For example, in the Linux kernel, context switching involves loading the corresponding process control block (PCB) stored in the PCB table in the kernel stack to retrieve information about the state of the new process. CPU state information including the registers, stack pointer, and program counter, as well as memory management information like segmentation tables and page tables (unless the old process shares the memory with the new), are loaded from the PCB for the new process. To avoid incorrect address translation in the case of the previous and current processes using different memory, the translation lookaside buffer (TLB) must be flushed. This negatively affects performance because every memory reference to the TLB will be a miss because it is empty after most context switches.[4][5]
Furthermore, analogous context switching happens between user threads, notably green threads, and is often very lightweight, saving and restoring minimal context. In extreme cases, such as switching between goroutines in Go, a context switch is equivalent to a coroutine yield, which is only marginally more expensive than a subroutine call.
There are three potential triggers for a context switch:
Most commonly, within some scheduling scheme, one process must be switched out of the CPU so another process can run. This context switch can be triggered by the process making itself unrunnable, such as by waiting for an I/O or synchronization operation to complete. On a pre-emptive multitasking system, the scheduler may also switch out processes that are still runnable. To prevent other processes from being starved of CPU time, pre-emptive schedulers often configure a timer interrupt to fire when a process exceeds its time slice. This interrupt ensures that the scheduler will gain control to perform a context switch.
Modern architectures are interrupt driven. This means that if the CPU requests data from a disk, for example, it does not need to busy-wait until the read is over; it can issue the request (to the I/O device) and continue with some other task. When the read is over, the CPU can be interrupted (by hardware in this case, which sends an interrupt request to the PIC) and presented with the read. For interrupts, a program called an interrupt handler is installed, and it is the interrupt handler that handles the interrupt from the disk.
When an interrupt occurs, the hardware automatically switches a part of the context (at least enough to allow the handler to return to the interrupted code). The handler may save additional context, depending on details of the particular hardware and software designs. Often only a minimal part of the context is changed in order to minimize the amount of time spent handling the interrupt. The kernel does not spawn or schedule a special process to handle interrupts, but instead the handler executes in the (often partial) context established at the beginning of interrupt handling. Once interrupt servicing is complete, the context in effect before the interrupt occurred is restored so that the interrupted process can resume execution in its proper state.
When the system transitions between user mode and kernel mode, a context switch is not necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context switch may also take place at this time.
The state of the currently executing process must be saved so it can be restored when rescheduled for execution.
The process state includes all the registers that the process may be using, especially the program counter, plus any other operating system specific data that may be necessary. This is usually stored in a data structure called a process control block (PCB) or switchframe.
The PCB might be stored on a per-process stack in kernel memory (as opposed to the user-mode call stack), or there may be some specific operating system-defined data structure for this information. A handle to the PCB is added to a queue of processes that are ready to run, often called the ready queue.
Since the operating system has effectively suspended the execution of one process, it can then switch context by choosing a process from the ready queue and restoring its PCB. In doing so, the program counter from the PCB is loaded, and thus execution can continue in the chosen process. Process and thread priority can influence which process is chosen from the ready queue (i.e., it may be a priority queue).
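The following Python sketch illustrates this dispatch step with a priority-ordered ready queue of PCBs (a toy model; field names like program_counter are illustrative, not a real kernel structure):

    import heapq

    class PCB:
        def __init__(self, pid, priority, program_counter=0):
            self.pid = pid
            self.priority = priority          # lower number = higher priority
            self.program_counter = program_counter

    ready_queue = []                          # heap of (priority, pid, pcb)

    def make_ready(pcb):
        heapq.heappush(ready_queue, (pcb.priority, pcb.pid, pcb))

    def dispatch():
        # Pick the highest-priority ready process; "restoring" its PCB
        # means execution resumes at pcb.program_counter.
        _, _, pcb = heapq.heappop(ready_queue)
        return pcb

    make_ready(PCB(pid=1, priority=5))
    make_ready(PCB(pid=2, priority=1))
    print(dispatch().pid)                     # -> 2, the higher-priority process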
The details vary depending on the architecture and operating system, but these are common scenarios.
Consider a general arithmetic addition operation A = B + 1. The instruction is stored in the instruction register and the program counter is incremented. A and B are read from memory and stored in registers R1 and R2 respectively. B + 1 is calculated and written to R1 as the final answer. Because this operation consists only of sequential reads and writes, with no waits for function calls, no context switch or wait takes place in this case.
Suppose a process A is running and a timer interrupt occurs. The user registers — program counter, stack pointer, and status register — of process A are then implicitly saved by the CPU onto the kernel stack of A. Then, the hardware switches to kernel mode and jumps into the interrupt handler for the operating system to take over. The operating system then calls the switch() routine to first save the general-purpose user registers of A onto A's kernel stack, then it saves A's current kernel register values into the PCB of A, restores kernel registers from the PCB of process B, and switches context, that is, changes the kernel stack pointer to point to the kernel stack of process B. The operating system then returns from the interrupt. The hardware then loads user registers from B's kernel stack, switches to user mode, and starts running process B from B's program counter.[6]
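The bookkeeping in this sequence can be sketched as a toy simulation. The Python below only mirrors the order of the saves and restores described above; real kernels do this in assembly and C, and names such as timer_interrupt and kernel_stack are illustrative, not an actual kernel API:

    class CPU:
        def __init__(self):
            self.user_regs = {"pc": 0, "sp": 0, "status": 0}
            self.kernel_regs = {"ksp": 0}

    class Process:
        def __init__(self, name):
            self.name = name
            self.kernel_stack = []
            self.pcb = {"kernel_regs": {}}

    def timer_interrupt(cpu, current, nxt):
        current.kernel_stack.append(dict(cpu.user_regs))    # 1. hardware saves user regs
        current.pcb["kernel_regs"] = dict(cpu.kernel_regs)  # 2. switch(): save kernel regs to PCB
        cpu.kernel_regs = dict(nxt.pcb["kernel_regs"])      # 3. restore next PCB (new kernel stack)
        cpu.user_regs = nxt.kernel_stack.pop()              # 4. return from interrupt into next

    cpu, a, b = CPU(), Process("A"), Process("B")
    b.pcb["kernel_regs"] = {"ksp": 0xB000}                  # pretend B was switched out earlier
    b.kernel_stack.append({"pc": 100, "sp": 200, "status": 0})
    timer_interrupt(cpu, a, b)
    print(cpu.user_regs)                                    # -> B's saved user registers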
Context switching itself has a cost in performance, due to running the task scheduler, TLB flushes, and indirectly due to sharing the CPU cache between multiple tasks.[7] Switching between threads of a single process can be faster than between two separate processes because threads share the same virtual memory maps, so a TLB flush is not necessary.[8]
The time to switch between two separate processes is called the process switching latency. The time to switch between two threads of the same process is called the thread switching latency. The time from when a hardware interrupt is generated to when the interrupt is serviced is called the interrupt latency.
Switching between two processes in a single address space operating system can be faster than switching between two processes in an operating system with private per-process address spaces.[9]
Context switching can be performed primarily by software or hardware. Some processors, like the Intel 80386 and its successors,[10] have hardware support for context switches, by making use of a special data segment designated the task state segment (TSS). A task switch can be explicitly triggered with a CALL or JMP instruction targeted at a TSS descriptor in the global descriptor table. It can occur implicitly when an interrupt or exception is triggered if there is a task gate in the interrupt descriptor table (IDT). When a task switch occurs, the CPU can automatically load the new state from the TSS.
As with other tasks performed in hardware, one would expect this to be rather fast; however, mainstream operating systems, including Windows and Linux,[11] do not use this feature, mainly because a hardware task switch saves almost the entire CPU state whether or not it is needed, whereas a software-managed switch can save only the state that actually must be preserved, and because software switching is portable across processor architectures.
|
https://en.wikipedia.org/wiki/Context_switch
|
In artificial intelligence, action description language (ADL) is an automated planning and scheduling system, in particular for robots. It is considered an advancement of STRIPS. Edwin Pednault (a specialist in the field of data abstraction and modelling who has been an IBM Research Staff Member in the Data Abstraction Research Group since 1996[1]) proposed this language in 1987. It is an example of an action language.
Pednault observed that the expressive power of STRIPS was susceptible to being improved by allowing the effects of an operator to be conditional. This is the main idea of ADL-A, which is roughly the propositional fragment of the ADL proposed by Pednault,[2] with ADL-B an extension of -A. In the -B extension, actions can be described with indirect effects by the introduction of a new kind of propositions: "static laws". A third variation of ADL is ADL-C, which is similar to -B in the sense that its propositions can be classified into static and dynamic laws, but with some more particularities.[3]
The purpose of a planning language is to represent certain conditions in the environment and, based on these, automatically generate a chain of actions which lead to a desired goal. A goal is a certain partially specified condition. Before an action can be executed, its preconditions must be fulfilled; after the execution, the action yields effects by which the environment changes. The environment is described by means of certain predicates, which are either fulfilled or not.
Contrary to STRIPS, the principle of the open world applies with ADL: everything not occurring in the conditions is unknown (instead of being assumed false). In addition, whereas in STRIPS only positive literals and conjunctions are permitted, ADL allows negative literals and disjunctions as well.
An ADL schema consists of an action name, an optional parameter list and four optional groups of clauses labeled Precond, Add, Delete and Update.
The Precond group is a list of formulae that define the preconditions for the execution of an action. If the set is empty, the value "TRUE" is inserted into the group and the preconditions are always evaluated as holding conditions.
The Add and Delete conditions are specified by the Add and Delete groups, respectively. Each group consists of a set of clauses of the forms shown in the left-hand column of figure 1:
The Update groups are used to specify the update conditions to change the values of function symbols. An Update group consists of a set of clauses of the forms shown in the left column of figure 2:
The formal semantics of ADL is defined by four constraints.
⇒ Actions may not change the set of objects that exist in the world; this means that for every action α and every current-state/next-state pair (s, t) ∈ α, it must be the case that the domain of t is equal to the domain of s.
⇒ Actions in ADL must be deterministic. If (s, t1) and (s, t2) are current-state/next-state pairs of action α, then it must be the case that t1 = t2.
⇒ The functions introduced above must be representable as first-order formulas. For every n-ary relation symbol R, there must exist a formula Φ_R^α(x1, …, xn) with free variables x1, …, xn such that f_R^α(s) is given by: f_R^α(s) = {(d1, …, dn) : s ⊨ Φ_R^α(d1, …, dn)}.
Consequently, F(x1, …, xn) = y will be true after performing action α if and only if Φ_F^α(x1, …, xn, y) was true beforehand. Note that this representability requirement relies on the first constraint (the domain of t should be equal to the domain of s).
⇒ The set of states in which an action is executable must also be representable as a formula. For every action α that can be represented in ADL, there must exist a formula Π_α with the property that s ⊨ Π_α if and only if there is some state t for which (s, t) ∈ α (i.e., action α is executable in state s).
In terms of computational efficiency, ADL can be located between STRIPS and the Situation Calculus.[4] Any ADL problem can be translated into a STRIPS instance; however, existing compilation techniques are worst-case exponential.[5] This worst case cannot be improved if we are willing to preserve the length of plans polynomially,[6] and thus ADL is strictly more succinct than STRIPS.
ADL planning is still a PSPACE-complete problem. Most of the algorithms use polynomial space even if the preconditions and effects are complex formulae.[7]
Most of the top-performing approaches to classical planning internally utilize a STRIPS-like representation. In fact, most planners (FF, LPG, Fast Downward, SGPLAN5 and LAMA) first translate the ADL instance into one that is essentially a STRIPS instance (without conditional or quantified effects or goals).
The expressiveness of the STRIPS language is constrained by the types of transformations on sets of formulas that can be described in the language. Transformations on sets of formulas using STRIPS operators are accomplished by removing some formulas from the set to be transformed and adding new additional formulas. For a given STRIPS operator the formulas to be added and deleted are fixed for all sets of formulas to be transformed. Consequently, STRIPS operators cannot adequately model actions whose effects depend on the situations in which they are performed. Consider a rocket which is going to be fired for a certain amount of time. The trajectory may vary not only because of the burn duration but also because of the velocity, mass and orientation of the rocket. It cannot be modelled by means of a STRIPS operator because the formulas that would have to be added and deleted would depend on the set of formulas to be transformed.[8]
Although efficient reasoning is possible when the STRIPS language is being used, it is generally recognized that the expressiveness of STRIPS is not suitable for modeling actions in many real world applications. This inadequacy motivated the development of the ADL language.[9][10] ADL expressiveness and complexity lies between the STRIPS language and the situation calculus. Its expressive power is sufficient to allow the rocket example described above to be represented, yet, at the same time, it is restrictive enough to allow efficient reasoning algorithms to be developed.
As an example, in a more complex version of the blocks world, it could be that block A is twice as big as blocks B and C, so the action xMoveOnto(B, A) might only have the effect of negating Clear(A) if On(A, C) is already true, or might create a conditional effect depending on the size of the blocks. This kind of conditional effect would be hard to express in STRIPS notation without conditional effects.
Consider the problem of air freight transport, where certain goods must be transported from an airport to another airport by plane and where airplanes need to be loaded and unloaded.
The necessary actions would be loading, unloading and flying; over the descriptors one could express In(c, p) and At(x, A), whether a freight c is in an airplane p and whether an object x is at an airport A.
The actions could then be defined as follows:
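The original article's clause listing is not preserved here, but a plausible reconstruction using the Precond/Add/Delete groups introduced above (with the predicates In and At from the text) would be:

    Load(c : Freight, p : Airplane, A : Airport)
      Precond: At(c, A) ∧ At(p, A)
      Add:     In(c, p)
      Delete:  At(c, A)

    Unload(c : Freight, p : Airplane, A : Airport)
      Precond: In(c, p) ∧ At(p, A)
      Add:     At(c, A)
      Delete:  In(c, p)

    Fly(p : Airplane, from : Airport, to : Airport)
      Precond: At(p, from)
      Add:     At(p, to)
      Delete:  At(p, from)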
|
https://en.wikipedia.org/wiki/Action_description_language
|
The actor model in computer science is a mathematical model of concurrent computation that treats an actor as the basic building block of concurrent computation. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging (removing the need for lock-based synchronization).
The actor model originated in 1973.[1] It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi.
According to Carl Hewitt, unlike previous models of computation, the actor model was inspired by physics, including general relativity and quantum mechanics. It was also influenced by the programming languages Lisp, Simula, early versions of Smalltalk, capability-based systems, and packet switching.
Its development was "motivated by the prospect of highly parallel computing machines consisting of dozens, hundreds, or even thousands of independent microprocessors, each with its own local memory and communications processor, communicating via a high-performance communications network."[2] Since that time, the advent of massive concurrency through multi-core and manycore computer architectures has revived interest in the actor model.
Following Hewitt, Bishop, and Steiger's 1973 publication, Irene Greif developed an operational semantics for the actor model as part of her doctoral research.[3] Two years later, Henry Baker and Hewitt published a set of axiomatic laws for actor systems.[4][5] Other major milestones include William Clinger's 1981 dissertation introducing a denotational semantics based on power domains[2] and Gul Agha's 1985 dissertation which further developed a transition-based semantic model complementary to Clinger's.[6] This resulted in the full development of actor model theory.
Major software implementation work was done by Russ Atkinson, Giuseppe Attardi, Henry Baker, Gerry Barber, Peter Bishop, Peter de Jong, Ken Kahn, Henry Lieberman, Carl Manning, Tom Reinhardt, Richard Steiger and Dan Theriault in the Message Passing Semantics Group at Massachusetts Institute of Technology (MIT). Research groups led by Chuck Seitz at California Institute of Technology (Caltech) and Bill Dally at MIT constructed computer architectures that further developed the message passing in the model. See Actor model implementation.
Research on the actor model has been carried out at California Institute of Technology, Kyoto University Tokoro Laboratory, Microelectronics and Computer Technology Corporation (MCC), MIT Artificial Intelligence Laboratory, SRI, Stanford University, University of Illinois at Urbana–Champaign,[7] Pierre and Marie Curie University (University of Paris 6), University of Pisa, University of Tokyo Yonezawa Laboratory, Centrum Wiskunde & Informatica (CWI) and elsewhere.
The actor model adopts the philosophy that everything is an actor. This is similar to the everything is an object philosophy used by some object-oriented programming languages.
An actor is a computational entity that, in response to a message it receives, can concurrently: send a finite number of messages to other actors; create a finite number of new actors; and designate the behavior to be used for the next message it receives.
There is no assumed sequence to the above actions and they could be carried out in parallel.
Decoupling the sender from communications sent was a fundamental advance of the actor model, enabling asynchronous communication and control structures as patterns of passing messages.[8]
Recipients of messages are identified by address, sometimes called "mailing address". Thus an actor can only communicate with actors whose addresses it has. It can obtain an address from a message it receives, or know it already because the actor in question is one it has itself created.
The actor model is characterized by inherent concurrency of computation within and among actors, dynamic creation of actors, inclusion of actor addresses in messages, and interaction only through direct asynchronous message passing with no restriction on message arrival order.
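A minimal sketch of these characteristics in Python (illustrative only, not any particular actor library's API): each actor owns a mailbox drained by its own thread, send is asynchronous, and addresses travel inside messages.

    import queue, threading

    class Actor:
        def __init__(self, behavior):
            self._mailbox = queue.Queue()
            self._behavior = behavior                 # function (actor, message) -> None
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, message):                      # asynchronous: returns immediately
            self._mailbox.put(message)

        def _run(self):
            while True:
                self._behavior(self, self._mailbox.get())

    def printer(actor, message):
        print("got:", message)

    def echo(actor, message):
        reply_to, payload = message                   # an address carried in the message
        reply_to.send(("echo", payload))

    sink = Actor(printer)                             # actors can create more actors
    Actor(echo).send((sink, "hello"))                 # fire-and-forget message send
    import time; time.sleep(0.1)                      # let the daemon threads drain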
Over the years, several different formal systems have been developed which permit reasoning about systems in the actor model. These include:
There are also formalisms that are not fully faithful to the actor model in that they do not formalize the guaranteed delivery of messages, including the following (see Attempts to relate actor semantics to algebra and linear logic):
The actor model can be used as a framework for modeling, understanding, and reasoning about a wide range of concurrent systems.[15] For example:
The actor model is about the semantics of message passing.
Arguably, the first concurrent programs were interrupt handlers. During the course of its normal operation a computer needed to be able to receive information from outside (characters from a keyboard, packets from a network, etc.). So when the information arrived, the execution of the computer was interrupted and special code (called an interrupt handler) was called to put the information in a data buffer where it could be subsequently retrieved.
In the early 1960s, interrupts began to be used to simulate the concurrent execution of several programs on one processor.[17] Having concurrency with shared memory gave rise to the problem of concurrency control. Originally, this problem was conceived as being one of mutual exclusion on a single computer. Edsger Dijkstra developed semaphores and later, between 1971 and 1973,[18] Tony Hoare[19] and Per Brinch Hansen[20] developed monitors to solve the mutual exclusion problem. However, neither of these solutions provided a programming language construct that encapsulated access to shared resources. This encapsulation was later accomplished by the serializer construct ([Hewitt and Atkinson 1977, 1979] and [Atkinson 1980]).
The first models of computation (e.g., Turing machines, Post productions, the lambda calculus, etc.) were based on mathematics and made use of a global state to represent a computational step (later generalized in [McCarthy and Hayes 1969] and [Dijkstra 1976]; see Event orderings versus global state). Each computational step was from one global state of the computation to the next global state. The global state approach was continued in automata theory for finite-state machines and push down stack machines, including their nondeterministic versions. Such nondeterministic automata have the property of bounded nondeterminism; that is, if a machine always halts when started in its initial state, then there is a bound on the number of states in which it halts.
Edsger Dijkstra further developed the nondeterministic global state approach. Dijkstra's model gave rise to a controversy concerning unbounded nondeterminism (also called unbounded indeterminacy), a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources while still guaranteeing that the request will eventually be serviced. Hewitt argued that the actor model should provide the guarantee of service. In Dijkstra's model, although there could be an unbounded amount of time between the execution of sequential instructions on a computer, a (parallel) program that started out in a well defined state could terminate in only a bounded number of states [Dijkstra 1976]. Consequently, his model could not provide the guarantee of service. Dijkstra argued that it was impossible to implement unbounded nondeterminism.
Hewitt argued otherwise: there is no bound that can be placed on how long it takes a computational circuit called an arbiter to settle (see metastability (electronics)).[21] Arbiters are used in computers to deal with the circumstance that computer clocks operate asynchronously with respect to input from outside, e.g., keyboard input, disk access, network input, etc. So it could take an unbounded time for a message sent to a computer to be received, and in the meantime the computer could traverse an unbounded number of states.
The actor model features unbounded nondeterminism, which was captured in a mathematical model by Will Clinger using domain theory.[2] In the actor model, there is no global state.
Messages in the actor model are not necessarily buffered. This was a sharp break with previous approaches to models of concurrent computation. The lack of buffering caused a great deal of misunderstanding at the time of the development of the actor model and is still a controversial issue. Some researchers argued that the messages are buffered in the "ether" or the "environment". Also, messages in the actor model are simply sent (like packets in IP); there is no requirement for a synchronous handshake with the recipient.
A natural development of the actor model was to allow addresses in messages. Influenced by packet switched networks [1961 and 1964], Hewitt proposed the development of a new model of concurrent computation in which communications would not have any required fields at all: they could be empty. Of course, if the sender of a communication desired a recipient to have access to addresses which the recipient did not already have, the address would have to be sent in the communication.
For example, an actor might need to send a message to a recipient actor from which it later expects to receive a response, but the response will actually be handled by a third actor component that has been configured to receive and handle the response (for example, a different actor implementing the observer pattern). The original actor could accomplish this by sending a communication that includes the message it wishes to send, along with the address of the third actor that will handle the response. This third actor that will handle the response is called the resumption (sometimes also called a continuation or stack frame). When the recipient actor is ready to send a response, it sends the response message to the resumption actor address that was included in the original communication.
So, the ability of actors to create new actors with which they can exchange communications, along with the ability to include the addresses of other actors in messages, gives actors the ability to create and participate in arbitrarily variable topological relationships with one another, much as the objects in Simula and other object-oriented languages may also be relationally composed into variable topologies of message-exchanging objects.
As opposed to the previous approach based on composing sequential processes, the actor model was developed as an inherently concurrent model. In the actor model sequentiality was a special case that derived from concurrent computation as explained inactor model theory.
Hewitt argued against adding the requirement that messages must arrive in the order in which they are sent to the actor. If output message ordering is desired, then it can be modeled by a queue actor that provides this functionality. Such a queue actor would queue the messages that arrived so that they could be retrieved in FIFO order. So if an actor X sent a message M1 to an actor Y, and later X sent another message M2 to Y, there is no requirement that M1 arrives at Y before M2.
In this respect the actor model mirrors packet switching systems, which do not guarantee that packets must be received in the order sent. Not providing the order of delivery guarantee allows packet switching to buffer packets, use multiple paths to send packets, resend damaged packets, and provide other optimizations.
As another example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing. Whether a message is pipelined is an engineering tradeoff. How would an external observer know whether the processing of a message by an actor has been pipelined? There is no ambiguity in the definition of an actor created by the possibility of pipelining. Of course, it is possible to perform the pipeline optimization incorrectly in some implementations, in which case unexpected behavior may occur.
Another important characteristic of the actor model is locality.
Locality means that in processing a message, an actor can send messages only to addresses that it receives in the message, addresses that it already had before it received the message, and addresses for actors that it creates while processing the message. (But see Synthesizing addresses of actors.)
Locality also means that there is no simultaneous change in multiple locations. In this way it differs from some other models of concurrency, e.g., the Petri net model, in which tokens are simultaneously removed from multiple locations and placed in other locations.
The idea of composing actor systems into larger ones is an important aspect of modularity that was developed in Gul Agha's doctoral dissertation,[6] developed later by Gul Agha, Ian Mason, Scott Smith, and Carolyn Talcott.[9]
A key innovation was the introduction of behavior specified as a mathematical function to express what an actor does when it processes a message, including specifying a new behavior to process the next message that arrives. Behaviors provided a mechanism to mathematically model the sharing in concurrency.
Behaviors also freed the actor model from implementation details, e.g., the Smalltalk-72 token stream interpreter. However, the efficient implementation of systems described by the actor model requires extensive optimization. See Actor model implementation for details.
Other concurrency systems (e.g., process calculi) can be modeled in the actor model using a two-phase commit protocol.[22]
There is a Computational Representation Theorem in the actor model for systems which are closed in the sense that they do not receive communications from outside. The mathematical denotation of a closed system S is constructed from an initial behavior ⊥_S and a behavior-approximating function progression_S. These obtain increasingly better approximations and construct a denotation (meaning) for S as follows [Hewitt 2008; Clinger 1981]: Denote_S ≡ ⊔_{i∈ω} progression_S^i(⊥_S).
In this way, S can be mathematically characterized in terms of all its possible behaviors (including those involving unbounded nondeterminism). Although Denote_S is not an implementation of S, it can be used to prove a generalization of the Church-Turing-Rosser-Kleene thesis [Kleene 1943]:
A consequence of the above theorem is that a finite actor can nondeterministically respond with an uncountable number of different outputs.
One of the key motivations for the development of the actor model was to understand and deal with the control structure issues that arose in development of the Planner programming language. Once the actor model was initially defined, an important challenge was to understand the power of the model relative to Robert Kowalski's thesis that "computation can be subsumed by deduction". Hewitt argued that Kowalski's thesis turned out to be false for the concurrent computation in the actor model (see Indeterminacy in concurrent computation).
Nevertheless, attempts were made to extend logic programming to concurrent computation. However, Hewitt and Agha [1991] claimed that the resulting systems were not deductive in the following sense: computational steps of the concurrent logic programming systems do not follow deductively from previous steps (see Indeterminacy in concurrent computation). Recently, logic programming has been integrated into the actor model in a way that maintains logical semantics.[21]
Migration in the actor model is the ability of actors to change locations. E.g., in his dissertation, Aki Yonezawa modeled a post office that customer actors could enter, change locations within while operating, and exit. An actor that can migrate can be modeled by having a location actor that changes when the actor migrates. However, the faithfulness of this modeling is controversial and the subject of research.
The security of actors can be protected in the following ways:
A delicate point in the actor model is the ability to synthesize the address of an actor. In some cases security can be used to prevent the synthesis of addresses (see Security). However, if an actor address is simply a bit string then clearly it can be synthesized, although it may be difficult or even infeasible to guess the address of an actor if the bit strings are long enough. SOAP uses a URL for the address of an endpoint where an actor can be reached. Since a URL is a character string, it can clearly be synthesized, although encryption can make it virtually impossible to guess.
Synthesizing the addresses of actors is usually modeled using mapping. The idea is to use an actor system to perform the mapping to the actual actor addresses. For example, on a computer the memory structure of the computer can be modeled as an actor system that does the mapping. In the case of SOAP addresses, it is modeling the DNS and the rest of the URL mapping.
Robin Milner's initial published work on concurrency[23] was also notable in that it was not based on composing sequential processes. His work differed from the actor model because it was based on a fixed number of processes of fixed topology communicating numbers and strings using synchronous communication. The original communicating sequential processes (CSP) model[24] published by Tony Hoare differed from the actor model because it was based on the parallel composition of a fixed number of sequential processes connected in a fixed topology, and communicating using synchronous message-passing based on process names (see Actor model and process calculi history). Later versions of CSP abandoned communication based on process names in favor of anonymous communication via channels, an approach also used in Milner's work on the calculus of communicating systems (CCS) and the π-calculus.
These early models by Milner and Hoare both had the property of bounded nondeterminism. Modern, theoretical CSP ([Hoare 1985] and [Roscoe 2005]) explicitly provides unbounded nondeterminism.
Petri nets and their extensions (e.g., coloured Petri nets) are like actors in that they are based on asynchronous message passing and unbounded nondeterminism, while they are like early CSP in that they define fixed topologies of elementary processing steps (transitions) and message repositories (places).
The actor model has been influential on both theory development and practical software development.
The actor model has influenced the development of the π-calculus and subsequent process calculi. In his Turing lecture, Robin Milner wrote:[25]
Now, the pure lambda-calculus is built with just two kinds of thing: terms and variables. Can we achieve the same economy for a process calculus? Carl Hewitt, with his actors model, responded to this challenge long ago; he declared that a value, an operator on values, and a process should all be the same kind of thing: an actor.
This goal impressed me, because it implies the homogeneity and completeness of expression ... But it was long before I could see how to attain the goal in terms of an algebraic calculus...
So, in the spirit of Hewitt, our first step is to demand that all things denoted by terms or accessed by names—values, registers, operators, processes, objects—are all of the same kind of thing; they should all be processes.
The actor model has had extensive influence on commercial practice. For example, Twitter has used actors for scalability.[26]Also, Microsoft has used the actor model in the development of its Asynchronous Agents Library.[27]There are multiple other actor libraries listed in the actor libraries and frameworks section below.
According to Hewitt [2006], the actor model addresses issues in computer and communications architecture, concurrent programming languages, and Web services, including the following:
Many of the ideas introduced in the actor model are now also finding application in multi-agent systems for these same reasons [Hewitt 2006b 2007b]. The key difference is that agent systems (in most definitions) impose extra constraints upon the actors, typically requiring that they make use of commitments and goals.
A number of different programming languages employ the actor model or some variation of it. These languages include:
Actor libraries or frameworks have also been implemented to permit actor-style programming in languages that don't have actors built-in. Some of these frameworks are:
|
https://en.wikipedia.org/wiki/Actor_model
|
The International Conference on Automated Planning and Scheduling (ICAPS) is a leading international academic conference in automated planning and scheduling held annually for researchers and practitioners in planning and scheduling.[2][3][4] ICAPS is supported by the National Science Foundation, the journal Artificial Intelligence, and other supporters.[5]
ICAPS conducts the International Planning Competition (IPC), a competition scheduled every few years that empirically evaluates state-of-the-art planning systems on a collection of benchmark problems.[6] The Planning Domain Definition Language (PDDL) was developed mainly to make the 1998/2000 International Planning Competition possible, and then evolved with each competition. PDDL is an attempt to standardize Artificial Intelligence (AI) planning languages.[7][8] PDDL was first developed by Drew McDermott and his colleagues in 1998, inspired by STRIPS, ADL, and other sources.
The ICAPS conferences began in 2003 as a merger of two biennial conferences, the International Conference on Artificial Intelligence Planning and Scheduling (AIPS) and the European Conference on Planning (ECP).[1]
|
https://en.wikipedia.org/wiki/International_Conference_on_Automated_Planning_and_Scheduling
|
In artificial intelligence, reactive planning denotes a group of techniques for action selection by autonomous agents. These techniques differ from classical planning in two aspects. First, they operate in a timely fashion and hence can cope with highly dynamic and unpredictable environments. Second, they compute just one next action in every instant, based on the current context. Reactive planners often (but not always) exploit reactive plans, which are stored structures describing the agent's priorities and behaviour. The term reactive planning goes back to at least 1988, and is synonymous with the more modern term dynamic planning.
There are several ways to represent a reactive plan. All require a basic representational unit and a means to compose these units into plans.
A condition-action rule, or if-then rule, is a rule in the form: if condition then action. These rules are called productions. The meaning of the rule is as follows: if the condition holds, perform the action. The action can be either external (e.g., pick something up and move it), or internal (e.g., write a fact into the internal memory, or evaluate a new set of rules). Conditions are normally boolean and the action either can be performed, or not.
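A rule set of this kind can be evaluated in a few lines. The Python sketch below is a toy illustration (the context keys and actions are invented for the example); rules are tried in priority order, which also stands in for the conflict-resolution mechanism discussed below:

    # Each production is (condition, action); both take the current context.
    rules = [
        (lambda ctx: ctx["enemy_visible"], lambda ctx: "flee"),
        (lambda ctx: ctx["hungry"] and ctx["food_nearby"], lambda ctx: "eat"),
        (lambda ctx: True, lambda ctx: "wander"),     # default rule
    ]

    def select_action(ctx):
        # First rule whose condition holds wins (priority-order conflict resolution).
        for condition, action in rules:
            if condition(ctx):
                return action(ctx)

    print(select_action({"enemy_visible": False, "hungry": True, "food_nearby": True}))
    # -> eat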
Production rules may be organized in relatively flat structures, but more often are organized into a hierarchy of some kind. For example, subsumption architecture consists of layers of interconnected behaviors, each actually a finite-state machine which acts in response to an appropriate input. These layers are then organized into a simple stack, with higher layers subsuming the goals of the lower ones. Other systems may use trees, or may include special mechanisms for changing which goal / rule subset is currently most important. Flat structures are relatively easy to build, but allow only for description of simple behavior, or require immensely complicated conditions to compensate for the lack of structure.
An important part of any distributed action selection algorithm is a conflict resolution mechanism. This is a mechanism for resolving conflicts between actions proposed when the conditions of more than one rule hold in a given instant. The conflict can be solved, for example, by
Expert systems often use other simpler heuristics, such as recency, for selecting rules, but it is difficult to guarantee good behavior in a large system with simple approaches.
Conflict resolution is only necessary for rules that want to take mutually exclusive actions (cf. Blumberg 1996).
Some limitations of this kind of reactive planning can be found in Brom (2005).
A finite-state machine (FSM) is a model of the behaviour of a system. FSMs are used widely in computer science; modeling the behaviour of agents is only one of their possible applications.
A typical FSM, when used for describing the behaviour of an agent, consists of a set of states and transitions between these states. The transitions are actually condition-action rules. In every instant, just one state of the FSM is active, and its transitions are evaluated. If a transition is taken, it activates another state. That means that, in general, transitions are rules in the following form: if condition then activate-new-state. But transitions can also connect to the 'self' state in some systems, to allow execution of transition actions without actually changing the state.
There are two ways to produce behaviour with an FSM, depending on what the designer associates with the states: they can be either 'acts' or scripts. An 'act' is an atomic action that should be performed by the agent if its FSM is in the given state; this action is then performed in every time step. The latter case is more common, however. Here, every state is associated with a script, which describes a sequence of actions that the agent has to perform if its FSM is in the given state. If a transition activates a new state, the former script is simply interrupted and the new one is started.
If a script is more complicated, it can be broken down into several scripts and a hierarchical FSM can be exploited. In such an automaton, every state can contain substates. Only the states at the atomic level are associated with a script (which is not complicated) or an atomic action.
Computationally, hierarchical FSMs are equivalent to FSMs; that means each hierarchical FSM can be converted to a classical FSM. However, the hierarchical approach makes designs easier to work with.
See the paper of Damian Isla (2005) for an example of an action selection mechanism for computer game bots which uses hierarchical FSMs.
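A behaviour FSM of the kind described can be written directly as a transition table. In this Python sketch (the states, conditions and world model are invented for the example), each state stands for a script and the transitions are condition-action rules that activate a new state:

    # state -> list of (condition, next_state); each state runs its own script.
    fsm = {
        "patrol": [(lambda w: w["enemy_seen"], "attack")],
        "attack": [(lambda w: w["low_health"], "flee"),
                   (lambda w: not w["enemy_seen"], "patrol")],
        "flee":   [(lambda w: w["safe"], "patrol")],
    }

    def tick(state, world):
        # Evaluate the active state's transitions; the first that fires wins.
        for condition, next_state in fsm[state]:
            if condition(world):
                return next_state          # the previous script is interrupted
        return state                       # otherwise keep running the current script

    state = tick("patrol", {"enemy_seen": True, "low_health": False, "safe": False})
    print(state)                           # -> attack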
Both if-then rules and FSMs can be combined with fuzzy logic. The conditions, states and actions are then no longer boolean or "yes/no", but are instead approximate and smooth. Consequently, the resulting behaviour transitions more smoothly, especially in the case of transitions between two tasks. However, evaluation of fuzzy conditions is much slower than evaluation of their crisp counterparts.
See the architecture of Alex Champandard.
Reactive plans can also be expressed by connectionist networks like artificial neural networks or free-flow hierarchies. The basic representational unit is a unit with several input links that feed the unit with "an abstract activity" and output links that propagate the activity to following units. Each unit itself works as an activity transducer. Typically, the units are connected in a layered structure.
The positives of connectionist networks are, first, that the resulting behaviour is smoother than behaviour produced by crisp if-then rules and FSMs; second, the networks are often adaptive; and third, a mechanism of inhibition can be used, and hence behaviour can also be described proscriptively (by means of rules, one can describe behaviour only prescriptively). However, the methods also have several flaws. First, for a designer, it is much more complicated to describe behaviour by a network compared with if-then rules. Second, only relatively simple behaviour can be described, especially if the adaptive feature is to be exploited.
A typical reactive planning algorithm just evaluates if-then rules or computes the state of a connectionist network. However, some algorithms have special features.
Steering is a special reactive technique used in the navigation of agents. The simplest form of reactive steering is employed in Braitenberg vehicles, which map sensor inputs directly to effector outputs, and can follow or avoid a stimulus. More complex systems are based on a superposition of attractive or repulsive forces that act on the agent. This kind of steering is based on the original work on boids by Craig Reynolds.
By means of steering, one can achieve a simple form of:
The advantage of steering is that it is computationally very efficient. In computer games, hundreds of NPCs can be driven by this technique. In cases of more complicated terrain (e.g. a building), however, steering must be combined with path-finding (as e.g. in Milani[1]), which is a form of planning.
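The superposition of forces can be sketched in a few lines of Python (a toy seek-plus-avoid rule in the spirit of Reynolds' work; the weights and 2D vector helpers are illustrative):

    def sub(a, b): return (a[0] - b[0], a[1] - b[1])
    def add(a, b): return (a[0] + b[0], a[1] + b[1])
    def scale(v, s): return (v[0] * s, v[1] * s)
    def norm(v):
        mag = (v[0] ** 2 + v[1] ** 2) ** 0.5 or 1.0
        return (v[0] / mag, v[1] / mag)

    def steer(agent, target, obstacle):
        attract = norm(sub(target, agent))       # pull toward the goal
        repulse = norm(sub(agent, obstacle))     # push away from the obstacle
        # Superpose the attractive and repulsive forces; weights tune behaviour.
        return add(scale(attract, 1.0), scale(repulse, 0.5))

    print(steer((0, 0), (10, 0), (5, 1)))        # the steering force for this instant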
|
https://en.wikipedia.org/wiki/Reactive_planning
|
In game theory, a move, action, or play is any one of the options which a player can choose in a setting where the optimal outcome depends not only on their own actions but also on the actions of others.[1] The discipline mainly concerns the action of a player in a game affecting the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, monopoly, diplomacy and battleship.[2]
The term strategy is typically used to mean a complete algorithm for playing a game, telling a player what to do for every possible situation. A player's strategy determines the action the player will take at any stage of the game. However, the idea of a strategy is often confused or conflated with that of a move or action, because of the correspondence between moves and pure strategies in most games: for any move X, "always play move X" is an example of a valid strategy, and as a result every move can also be considered to be a strategy. Other authors treat strategies as being a different type of thing from actions, and therefore distinct.
It is helpful to think about a "strategy" as a list of directions, and a "move" as a single turn on that list. The strategy is based on the payoff or outcome of each action, and the goal of each agent is to consider their payoff given each possible action of a competitor. For example, competitor A can assume competitor B enters the market; from there, competitor A compares the payoffs received by entering and not entering. The next step is to assume competitor B does not enter, and then consider which payoff is better depending on whether competitor A chooses to enter or not. This technique can identify dominant strategies, where a player can identify an action to take regardless of what the competitor does, in order to maximize the payoff.
A strategy profile (sometimes called a strategy combination) is a set of strategies for all players which fully specifies all actions in a game. A strategy profile must include one and only one strategy for every player.
A player's strategy set defines what strategies are available for them to play.
A player has a finite strategy set if they have a number of discrete strategies available to them. For instance, a game of rock paper scissors comprises a single move by each player—and each player's move is made without knowledge of the other's, not as a response—so each player has the finite strategy set {rock, paper, scissors}.
A strategy set is infinite otherwise. For instance the cake cutting game has a bounded continuum of strategies in the strategy set {Cut anywhere between zero percent and 100 percent of the cake}.
In a dynamic game, a game that is played over a series of time periods, the strategy set consists of the possible rules a player could give to a robot or agent on how to play the game. For instance, in the ultimatum game, the strategy set for the second player would consist of every possible rule for which offers to accept and which to reject.
In a Bayesian game, a game in which players have incomplete information about one another, the strategy set is similar to that in a dynamic game. It consists of rules for what action to take for any possible private information.
In applied game theory, the definition of the strategy sets is an important part of the art of making a game simultaneously solvable and meaningful. The game theorist can use knowledge of the overall problem, that is, the friction between two or more players, to limit the strategy spaces and ease the solution.
For instance, strictly speaking in the Ultimatum game a player can have strategies such as: Reject offers of ($1, $3, $5, ..., $19), accept offers of ($0, $2, $4, ..., $20). Including all such strategies makes for a very large strategy space and a somewhat difficult problem. A game theorist might instead believe they can limit the strategy set to: {Reject any offer ≤ x, accept any offer > x; for x in ($0, $1, $2, ..., $20)}.
A pure strategy provides a complete and deterministic plan for how a player will act in every possible situation in a game. It specifies exactly what action the player will take at each decision point, given any information they may have. A player's strategy set consists of all the pure strategies available to them.
A mixed strategy is a probability distribution over the set of pure strategies. Rather than committing to a single course of action, the player randomizes among pure strategies according to specified probabilities. Mixed strategies are particularly useful in games where no pure strategy constitutes a best response, allowing players to avoid being predictable. Since the outcomes depend on probabilities, we refer to the resulting payoffs as expected payoffs.
A pure strategy can be viewed as a special case of a mixed strategy—one in which a single pure strategy is chosen with probability 1, and all others with probability 0.
A totally mixed strategy is a mixed strategy in which every pure strategy in the player's strategy set is assigned a strictly positive probability—that is, no pure strategy is excluded or played with zero probability. This means the player randomizes across all of their options, never fully ruling any one out. Totally mixed strategies are important in some advanced game theory concepts like trembling hand perfect equilibrium, where the idea is to model players as occasionally making small mistakes. In that context, assigning positive probability to every strategy—even suboptimal ones—helps capture how players might still end up choosing them due to small "trembles" in decision-making.
In a soccer penalty kick, the kicker must choose whether to kick to the right or left side of the goal, and simultaneously the goalie must decide which way to block it. Also, the kicker has a direction they are best at shooting, which is left if they are right-footed. The matrix for the soccer game illustrates this situation, a simplified form of the game studied by Chiappori, Levitt, and Groseclose (2002).[3] It assumes that if the goalie guesses correctly, the kick is blocked, which is set to the base payoff of 0 for both players. If the goalie guesses wrong, the kick is more likely to go in if it is to the left (payoffs of +2 for the kicker and -2 for the goalie) than if it is to the right (the lower payoff of +1 to the kicker and -1 to the goalie). With the kicker as the row player and payoffs written (kicker, goalie), the matrix is:

                  Lean Left    Lean Right
    Kick Left      (0, 0)       (+2, -2)
    Kick Right     (+1, -1)     (0, 0)
This game has no pure-strategy equilibrium, because one player or the other would deviate from any profile of strategies—for example, (Left, Left) is not an equilibrium because the Kicker would deviate to Right and increase his payoff from 0 to 1.
The kicker's mixed-strategy equilibrium is found from the fact that they will deviate from randomizing unless their payoffs from Left Kick and Right Kick are exactly equal. If the goalie leans left with probability g, the kicker's expected payoff from Kick Left is g(0) + (1-g)(2), and from Kick Right is g(1) + (1-g)(0). Equating these yields g = 2/3. Similarly, the goalie is willing to randomize only if the kicker chooses mixed strategy probability k such that Lean Left's payoff of k(0) + (1-k)(-1) equals Lean Right's payoff of k(-2) + (1-k)(0), so k = 1/3. Thus, the mixed-strategy equilibrium is (Prob(Kick Left) = 1/3, Prob(Lean Left) = 2/3).
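A short numerical check of these indifference conditions, as a sketch; the payoffs are those of the matrix above.

    #include <iostream>

    // Verify that at g = 2/3 and k = 1/3 each player is indifferent
    // between their two pure strategies, so neither gains by deviating.
    int main() {
        const double g = 2.0 / 3.0;  // Prob(goalie leans left)
        const double k = 1.0 / 3.0;  // Prob(kicker kicks left)

        // Kicker's expected payoffs against the goalie's mix.
        double kickLeft  = g * 0 + (1 - g) * 2;     // = 2/3
        double kickRight = g * 1 + (1 - g) * 0;     // = 2/3

        // Goalie's expected payoffs against the kicker's mix.
        double leanLeft  = k * 0 + (1 - k) * (-1);  // = -2/3
        double leanRight = k * (-2) + (1 - k) * 0;  // = -2/3

        std::cout << "kicker: " << kickLeft << " vs " << kickRight << "\n"
                  << "goalie: " << leanLeft << " vs " << leanRight << "\n";
    }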
In equilibrium, the kicker kicks to their best side only 1/3 of the time, because the goalie is guarding that side more. Also, in equilibrium the kicker is indifferent as to which way they kick, but for it to be an equilibrium they must kick left with exactly 1/3 probability.
Chiappori, Levitt, and Groseclose try to measure how important it is for the kicker to kick to their favored side, add center kicks, etc., and look at how professional players actually behave. They find that they do randomize, and that kickers kick to their favored side 45% of the time and goalies lean to that side 57% of the time. Their article is well-known as an example of how people in real life use mixed strategies.
In his famous paper,John Forbes Nashproved that there is anequilibriumfor every finite game. One can divide Nash equilibria into two types.Pure strategy Nash equilibriaare Nash equilibria where all players are playing pure strategies.Mixed strategy Nash equilibriaare equilibria where at least one player is playing a mixed strategy. While Nash proved that every finite game has a Nash equilibrium, not all have pure strategy Nash equilibria. For an example of a game that does not have a Nash equilibrium in pure strategies, seeMatching pennies. However, many games do have pure strategy Nash equilibria (e.g. theCoordination game, thePrisoner's dilemma, theStag hunt). Further, games can have both pure strategy and mixed strategy equilibria. An easy example is the pure coordination game, where in addition to the pure strategies (A,A) and (B,B) a mixed equilibrium exists in which both players play either strategy with probability 1/2.
During the 1980s, the concept of mixed strategies came under heavy fire for being "intuitively problematic", since they are weak Nash equilibria, and a player is indifferent about whether to follow their equilibrium strategy probability or deviate to some other probability.[4][5]Game theoristAriel Rubinsteindescribes alternative ways of understanding the concept. The first, due to Harsanyi (1973),[6]is calledpurification, and supposes that the mixed strategies interpretation merely reflects our lack of knowledge of the players' information and decision-making process. Apparently random choices are then seen as consequences of non-specified, payoff-irrelevant exogenous factors.[5]A second interpretation imagines the game players standing for a large population of agents. Each of the agents chooses a pure strategy, and the payoff depends on the fraction of agents choosing each strategy. The mixed strategy hence represents the distribution of pure strategies chosen by each population. However, this does not provide any justification for the case when players are individual agents.
Later, Aumann and Brandenburger (1995),[7]re-interpreted Nash equilibrium as an equilibrium inbeliefs, rather than actions. For instance, inrock paper scissorsan equilibrium in beliefs would have each playerbelievingthe other was equally likely to play each strategy. This interpretation weakens the descriptive power of Nash equilibrium, however, since it is possible in such an equilibrium for each player toactuallyplay a pure strategy of Rock in each play of the game, even though over time the probabilities are those of the mixed strategy.
While a mixed strategy assigns a probability distribution over pure strategies, abehavior strategyassigns at each information set a probability distribution over the set of possible actions. While the two concepts are very closely related in the context of normal form games, they have very different implications for extensive form games. Roughly, a mixed strategy randomly chooses a deterministic path through thegame tree, while a behavior strategy can be seen as a stochastic path.
The relationship between mixed and behavior strategies is the subject ofKuhn's theorem, a behavioral outlook on traditional game-theoretic hypotheses. The result establishes that in any finite extensive-form game with perfect recall, for any player and any mixed strategy, there exists a behavior strategy that, against all profiles of strategies (of other players), induces the same distribution over terminal nodes as the mixed strategy does. The converse is also true.
A famous example of why perfect recall is required for the equivalence is given by Piccione and Rubinstein (1997)[full citation needed]with theirAbsent-Minded Drivergame.
Outcome equivalence relates the mixed and the behavior strategy of player i to the pure strategies of player i's opponents. A mixed strategy U_i and a behavior strategy β_i of player i are outcome-equivalent if, for every profile of pure strategies s_{-i} of the opponents, they induce the same distribution over terminal nodes z: Q^{U_i, s_{-i}}(z) = Q^{β_i, s_{-i}}(z), where Q^{σ} denotes the probability distribution over terminal nodes induced by the profile σ.[8]
Perfect recall is defined as the ability of every player in the game to remember and recall all past actions within the game. Perfect recall is required for equivalence because, in finite games with imperfect recall, there exist mixed strategies of player i for which no equivalent behavior strategy exists. This is fully described in the Absent-Minded Driver game formulated by Piccione and Rubinstein. In short, the game is based on the decision-making of a driver with imperfect recall, who needs to take the second exit off the highway to reach home but does not remember which intersection they are at when they reach it.
Without perfect information (i.e., under imperfect information), players make a choice at each decision node without knowledge of the decisions that preceded it. Therefore, a player's mixed strategy can produce outcomes that their behavior strategy cannot, and vice versa. This is demonstrated in the Absent-Minded Driver game. With perfect recall and information, the driver has a single pure strategy, [continue, exit], since the driver knows which intersection (decision node) they are at on arrival. At the planning stage alone, by contrast, the maximum expected payoff is achieved by continuing at each intersection with probability p = 2/3. This simple one-player game demonstrates the importance of perfect recall for outcome equivalence, and its impact on normal and extensive form games.[9]
https://en.wikipedia.org/wiki/Strategy_(game_theory)
Arduino(/ɑːrˈdwiːnoʊ/) is an Italianopen-source hardwareandsoftwarecompany, project, and user community that designs and manufacturessingle-board microcontrollersandmicrocontrollerkits for building digital devices. Its hardware products are licensed under aCC BY-SA license, while the software is licensed under theGNU Lesser General Public License(LGPL) or theGNU General Public License(GPL),[1]permitting themanufactureof Arduino boards and software distribution by anyone. Arduino boards are available commercially from the officialwebsiteor through authorized distributors.[2]
Arduino board designs use a variety ofmicroprocessorsand controllers. The boards are equipped with sets of digital and analoginput/output(I/O) pins that may be interfaced to various expansion boards ('shields') orbreadboards(for prototyping) and other circuits. The boards feature serial communications interfaces, includingUniversal Serial Bus(USB) on some models, which are also used for loading programs. The microcontrollers can be programmed using theCandC++programming languages(Embedded C), using a standard API which is also known as theArduino Programming Language, inspired by theProcessing languageand used with a modified version of the Processing IDE. In addition to using traditionalcompilertoolchains, the Arduino project provides anintegrated development environment(IDE) and a command line tool developed inGo.
The Arduino project began in 2005 as a tool for students at theInteraction Design Institute Ivrea, Italy,[3]aiming to provide a low-cost and easy way for novices and professionals to create devices that interact with their environment usingsensorsandactuators. Common examples of such devices intended for beginner hobbyists include simplerobots,thermostats, andmotion detectors.
The nameArduinocomes from a café inIvrea, Italy, where some of the project's founders used to meet. The bar was named afterArduin of Ivrea, who was themargraveof theMarch of IvreaandKing of Italyfrom 1002 to 1014.[4]
The Arduino project was started at theInteraction Design Institute Ivrea(IDII) inIvrea, Italy.[3]At that time, the students used aBASIC Stampmicrocontrollerat a cost of $50. In 2004,Hernando Barragáncreated the development platformWiringas a Master's thesis project at IDII, under the supervision of Massimo Banzi andCasey Reas. Casey Reas is known for co-creating, with Ben Fry, theProcessingdevelopment platform. The project goal was to create simple, low cost tools for creating digital projects by non-engineers. The Wiring platform consisted of aprinted circuit board(PCB) with anATmega128 microcontroller, an IDE based on Processing and library functions to easily program the microcontroller.[5]In 2005, Massimo Banzi, with David Mellis, another IDII student, and David Cuartielles, extended Wiring by adding support for the cheaper ATmega8 microcontroller. The new project, forked from Wiring, was calledArduino.[5]
The initial Arduino core team consisted of Massimo Banzi, David Cuartielles, Tom Igoe, Gianluca Martino, and David Mellis.[3]
Following the completion of the platform, lighter and less expensive versions were distributed in the open-source community. It was estimated in mid-2011 that over 300,000 official Arduinos had been commercially produced,[6]and in 2013 that 700,000 official boards were in users' hands.[7]
In early 2008, the five co-founders of the Arduino project created a company, Arduino LLC,[8]to hold the trademarks associated with Arduino. The manufacture and sale of the boards were to be done by external companies, and Arduino LLC would get a royalty from them. The founding bylaws of Arduino LLC specified that each of the five founders transfer ownership of the Arduino brand to the newly formed company.[citation needed]
At the end of 2008, Gianluca Martino's company, Smart Projects, registered the Arduino trademark in Italy and kept this a secret from the other co-founders for about two years. This was revealed when the Arduino company tried to register the trademark in other areas of the world (they originally registered only in the US), and discovered that it was already registered in Italy. Negotiations with Martino and his firm to bring the trademark under the control of the original Arduino company failed. In 2014, Smart Projects began refusing to pay royalties. They then appointed a new CEO, Federico Musto, who renamed the companyArduino SRLand created the websitearduino.org, copying the graphics and layout of the originalarduino.cc. This resulted in a rift in the Arduino development team.[9][10][11]
In January 2015, Arduino LLC filed a lawsuit against Arduino SRL.[12]
In May 2015, Arduino LLC created the worldwide trademarkGenuino, used as brand name outside the United States.[13]
At the WorldMaker Fairein New York on 1 October 2016, Arduino LLC co-founder and CEO Massimo Banzi and Arduino SRL CEO Federico Musto announced the merger of the two companies, forming Arduino AG.[14]Around that same time, Massimo Banzi announced that in addition to the company a new Arduino Foundation would be launched as "a new beginning for Arduino", but this decision was withdrawn later.[15][16][17]
In April 2017,Wiredreported that Musto had "fabricated his academic record... On his company's website, personal LinkedIn accounts, and even on Italian business documents, Musto was, until recently, listed as holding a Ph.D. from the Massachusetts Institute of Technology. In some cases, his biography also claimed an MBA from New York University." Wired reported that neither university had any record of Musto's attendance, and Musto later admitted in an interview with Wired that he had never earned those degrees.[18]The controversy surrounding Musto continued when, in July 2017, he reportedly pulled manyopen sourcelicenses, schematics, and code from the Arduino website, prompting scrutiny and outcry.[19]
By 2017, Arduino AG owned many Arduino trademarks. In July 2017, BCMI, founded by Massimo Banzi, David Cuartielles, David Mellis and Tom Igoe, acquired Arduino AG and all the Arduino trademarks. Fabio Violante became the new CEO, replacing Federico Musto, who no longer works for Arduino AG.[20][21]
In October 2017, Arduino announced its partnership withArm Holdings(ARM). The announcement said, in part, "ARM recognized independence as a core value of Arduino ... without any lock-in with theARM architecture". Arduino intends to continue to work with all technology vendors and architectures.[22]Under Violante's guidance, the company started growing again and releasing new designs. The Genuino trademark was dismissed and all products were branded again with the Arduino name.
In August 2018, Arduino announced its new open source command line tool (arduino-cli), which can be used as a replacement of the IDE to program the boards from a shell.[23]
In February 2019, Arduino announced its IoT Cloud service as an extension of the Create online environment.[24]
As of February 2020, the Arduino community included about 30 million active users based on the IDE downloads.[25]
Arduino isopen-source hardware. The hardware reference designs are distributed under aCreative CommonsAttribution Share-Alike 2.5 license and are available on the Arduino website. Layout and production files for some versions of the hardware are also available.
Although the hardware and software designs are freely available undercopyleftlicenses, the developers have requested the nameArduinoto beexclusive to the official productand not be used for derived works without permission. The official policy document on the use of the Arduino name emphasizes that the project is open to incorporating work by others into the official product.[26]Several Arduino-compatible products commercially released have avoided the project name by using various names ending in-duino.[27]
Most Arduino boards consist of anAtmel8-bitAVR microcontroller(ATmega8,[29]ATmega168,ATmega328, ATmega1280, or ATmega2560) with varying amounts of flash memory, pins, and features.[30]The 32-bitArduino Due, based on the AtmelSAM3X8Ewas introduced in 2012.[31]The boards use single or double-row pins or female headers that facilitate connections for programming and incorporation into other circuits. These may connect with add-on modules termedshields. Multiple and possibly stacked shields may be individually addressable via anI²Cserial bus. Most boards include a 5 Vlinear regulatorand a 16 MHzcrystal oscillatororceramic resonator. Some designs, such as the LilyPad,[32]run at 8 MHz and dispense with the onboard voltage regulator due to specificform factorrestrictions.
Arduino microcontrollers are pre-programmed with abootloaderthat simplifies the uploading of programs to the on-chipflash memory. The default bootloader of the Arduino Uno is the Optiboot bootloader.[33]Boards are loaded with program code via a serial connection to another computer. Some serial Arduino boards contain alevel shiftercircuit to convert betweenRS-232logic levels andtransistor–transistor logic(TTL serial) level signals. Current Arduino boards are programmed viaUniversal Serial Bus(USB), implemented using USB-to-serial adapter chips such as theFTDIFT232. Some boards, such as later-model Uno boards, substitute theFTDIchip with a separate AVR chip containing USB-to-serial firmware, which is reprogrammable via its ownICSPheader. Other variants, such as the Arduino Mini and the unofficial Boarduino, use a detachable USB-to-serial adapter board or cable,Bluetoothor other methods. When used with traditional microcontroller tools, instead of the Arduino IDE, standard AVRin-system programming(ISP) programming is used.
The Arduino board exposes most of the microcontroller's I/O pins for use by other circuits. TheDiecimila,[a]Duemilanove,[b]and currentUno[c]provide 14 digital I/O pins, six of which can producepulse-width modulatedsignals, and six analog inputs, which can also be used as six digital I/O pins. These pins are on the top of the board, via female 0.1-inch (2.54 mm) headers. Several plug-in application shields are also commercially available. The Arduino Nano and Arduino-compatible Bare Bones Board[34]and Boarduino[35]boards may provide male header pins on the underside of the board that can plug into solderlessbreadboards.
Many Arduino-compatible and Arduino-derived boards exist. Some are functionally equivalent to an Arduino and can be used interchangeably. Many enhance the basic Arduino by adding output drivers, often for use in school-level education,[36]to simplify making buggies and small robots. Others are electrically equivalent, but change the form factor, sometimes retaining compatibility with shields, sometimes not. Some variants use different processors, of varying compatibility.
In addition to hardware variations,open sourcelibraries have been developed to support Arduino hardware inEDAtools. One such project providesKiCadschematic symbols andPCBfootprints for Arduino modules, expansion boards, and connectors, making it easier for engineers to integrate Arduino into their designs.[37]
The original Arduino hardware was manufactured by the Italian company Smart Projects.[38]Some Arduino-branded boards have been designed by the American companiesSparkFun ElectronicsandAdafruit Industries.[39]As of 2016[update], 17 versions of the Arduino hardware have been commercially produced.
Arduino and Arduino-compatible boards use printed circuit expansion boards calledshields, which plug into the normally supplied Arduino pin headers.[56]Shields can provide motor controls for3D printingand other applications,GNSS(satellite navigation), Ethernet,liquid crystal display(LCD), or breadboarding (prototyping). Several shields can also be madedo it yourself(DIY).[57][58][59]
A program for Arduino hardware may be written in anyprogramming languagewith compilers that produce binary machine code for the target processor. Atmel provides a development environment for their 8-bitAVRand 32-bitARM Cortex-Mbased microcontrollers: AVR Studio (older) and Atmel Studio (newer).[60][61][62]
The Arduino integrated development environment (IDE) is a cross-platform application (for Microsoft Windows, macOS, and Linux) that is based on the Processing IDE and written in Java. It uses the Wiring API as its programming style and hardware abstraction layer (HAL). It includes a code editor with features such as text cutting and pasting, searching and replacing text, automatic indenting, brace matching, and syntax highlighting, and provides simple one-click mechanisms to compile and upload programs to an Arduino board. It also contains a message area, a text console, a toolbar with buttons for common functions, and a hierarchy of operation menus. The source code for the IDE is released under the GNU General Public License, version 2.[64]
The Arduino IDE supports the languages C and C++ using special rules of code structuring. The Arduino IDE supplies a software library from the Wiring project, which provides many common input and output procedures. User-written code only requires two basic functions, for starting the sketch and the main program loop, that are compiled and linked with a program stub main() into an executable cyclic executive program with the GNU toolchain, also included with the IDE distribution. The Arduino IDE employs the program avrdude to convert the executable code into a text file in hexadecimal encoding that is loaded into the Arduino board by a loader program in the board's firmware. Traditionally, the Arduino IDE was used to program Arduino's official boards based on Atmel AVR microcontrollers, but as Arduino's popularity grew and open-source compilers became available, support was added for many more platforms, including PIC, STM32, TI MSP430, and ESP32.[65]
An initial alpha preview of a new Arduino IDE was released on October 18, 2019, as the Arduino Pro IDE. The beta preview was released on March 1, 2021, renamed IDE 2.0. On September 14, 2022, the Arduino IDE 2.0 was officially released as stable.[67]
The system still uses the Arduino CLI (command-line interface), but improvements include a more professional development environment and autocompletion support.[68] The application frontend is based on the Eclipse Theia open-source IDE.[69]
One important feature that Arduino IDE 2.0 provides is debugging.[70] It allows users to single-step, insert breakpoints, or view memory. Debugging requires a target chip with a debug port and a debug probe. The official Arduino Zero board can be debugged out of the box. Other official Arduino SAMD21 boards require a separate SEGGER J-Link or Atmel-ICE.
Third-party boards can also be debugged in Arduino IDE 2.0, provided the board supports GDB and OpenOCD and has a debug probe. The community has contributed debugging support for ATmega328P-based Arduinos[71] and CH32 RISC-V boards,[72] among others.
Asketchis a program written with the Arduino IDE.[73]Sketches are saved on the development computer as text files with the file extension.ino. Arduino Software (IDE) pre-1.0 saved sketches with the extension.pde.
A minimal Arduino C/C++ program consists of only two functions:[74]
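Those two functions are setup(), run once at startup, and loop(), called repeatedly thereafter; the empty skeleton looks like this:

    // setup() runs once after reset or power-up; it initializes pin
    // modes, serial ports, libraries, and variables.
    void setup() {
    }

    // loop() is then called over and over for as long as the board is
    // powered; it is the body of the cyclic executive program.
    void loop() {
    }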
Most Arduino boards contain alight-emitting diode(LED) and a current-limiting resistor connected between pin 13 and ground, which is a convenient feature for many tests and program functions.[77]A typical program used by beginners, akin toHello, World!, is "blink", which repeatedly blinks the on-board LED integrated into the Arduino board. This program uses the functionspinMode(),digitalWrite(), anddelay(), which are provided by the internal libraries included in the IDE environment.[78][79][80]This program is usually loaded into a new Arduino board by the manufacturer.
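A version of the blink program (pin 13 is also available as the constant LED_BUILTIN on most boards):

    // Blink: repeatedly toggle the on-board LED, one second on, one off.
    void setup() {
        pinMode(LED_BUILTIN, OUTPUT);     // configure the LED pin as an output
    }

    void loop() {
        digitalWrite(LED_BUILTIN, HIGH);  // turn the LED on
        delay(1000);                      // wait 1000 ms
        digitalWrite(LED_BUILTIN, LOW);   // turn the LED off
        delay(1000);                      // wait 1000 ms
    }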
Sweep example: sweeping a servo with an Arduino means moving it back and forth across a specified range of motion. This is commonly done using the Servo library. To sweep a servo with an Arduino, connect the servo's VCC (red wire) to 5 V, GND (black/brown) to GND, and signal (yellow/white) to a PWM-capable pin (e.g., pin 9), then use the Servo library to control movement. The code below gradually moves the servo from 0° to 180° and back in a loop.
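A sketch of the standard sweep, assuming the wiring above with the signal wire on pin 9:

    #include <Servo.h>

    Servo myServo;             // Servo object that drives the motor

    void setup() {
        myServo.attach(9);     // signal wire on PWM-capable pin 9
    }

    void loop() {
        // Sweep from 0 to 180 degrees...
        for (int angle = 0; angle <= 180; ++angle) {
            myServo.write(angle);  // command the servo to this angle
            delay(15);             // give the servo time to get there
        }
        // ...and back from 180 to 0 degrees.
        for (int angle = 180; angle >= 0; --angle) {
            myServo.write(angle);
            delay(15);
        }
    }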
The open-source nature of the Arduino project has facilitated the publication of many free software libraries that other developers use to augment their projects.
There is a Xinu OS port for the ATmega328P (Arduino Uno and others with the same chip), which includes most of the basic features.[81] The source code of this version is freely available.[82]
There is also a threading tool named Protothreads. Protothreads are described as "extremely lightweight stackless threads designed for severely memory constrained systems, such as small embedded systems or wireless sensor network nodes".[83]
There is a port of FreeRTOS for the Arduino.[84]This is available from the Arduino Library Manager. It is compatible with a number of boards, including the Uno.
The Arduino project received an honorary mention in the Digital Communities category at the 2006Prix Ars Electronica.[89]
The Arduino Engineering Kit won the Bett Award for "Higher Education or Further Education Digital Services" in 2020.[90]
https://en.wikipedia.org/wiki/Arduino
Incomputer science, theevent loop(also known asmessage dispatcher,message loop,message pump, orrun loop) is a programming construct ordesign patternthat waits for and dispatcheseventsormessagesin aprogram. The event loop works by making a request to some internal or external "event provider" (that generallyblocksthe request until an event has arrived), then calls the relevantevent handler("dispatches the event").
It is also commonly implemented in servers such asweb servers.
The event loop may be used in conjunction with a reactor, if the event provider follows the file interface, which can be selected or 'polled' (the Unix system call, not actual polling). The event loop almost always operates asynchronously with the message originator.
When the event loop forms the centralcontrol flowconstruct of a program, as it often does, it may be termed themain loopormain event loop. This title is appropriate, because such an event loop is at the highest level of control within the program.
Message pumps are said to 'pump' messages from the program'smessage queue(assigned and usually owned by the underlying operating system) into the program for processing. In the strictest sense, an event loop is one of the methods for implementinginter-process communication. In fact, message processing exists in many systems, including akernel-levelcomponent of theMach operating system. The event loop is a specific implementation technique of systems that usemessage passing.
This approach is in contrast to a number of other alternatives:
Due to the predominance ofgraphical user interfaces, most modern applications feature a main loop. Theget_next_message()routine is typically provided by the operating system, andblocksuntil a message is available. Thus, the loop is only entered when there is something to process.
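A minimal sketch of such a main loop; the Message type, the in-process queue standing in for the operating system's, and the quit convention are all illustrative assumptions:

    #include <deque>

    struct Message { int type = 0; };  // payload fields elided
    enum { MSG_QUIT = -1 };            // illustrative quit convention

    // Stand-in mailbox; a real get_next_message() would block in the OS
    // until a message arrives rather than popping an in-process queue.
    std::deque<Message> pending{ {1}, {2}, {MSG_QUIT} };

    Message get_next_message() {
        Message m = pending.front();
        pending.pop_front();
        return m;
    }

    void dispatch(const Message& m) { /* call the handler for m.type */ }

    int main() {
        for (;;) {
            Message msg = get_next_message();  // runs only when work arrives
            if (msg.type == MSG_QUIT) break;   // a quit request ends the loop
            dispatch(msg);
        }
    }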
UnderUnix, the "everything is a file" paradigm naturally leads to a file-based event loop. Reading from and writing to files, inter-process communication, network communication, and device control are all achieved using file I/O, with the target identified by afile descriptor. Theselectandpollsystem calls allow a set of file descriptors to be monitored for a change of state, e.g. when data becomes available to be read.
For example, consider a program that reads from a continuously updated file and displays its contents in theX Window System, which communicates with clients over a socket (eitherUnix domainorBerkeley):
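A sketch of that loop using select(), assuming fd_file and fd_x are already-open descriptors for the watched file and the X server socket; the two handlers are stubs.

    #include <sys/select.h>
    #include <algorithm>

    void handle_file_input(int fd) { /* read the new data and redraw */ }
    void handle_x_event(int fd)    { /* process one X protocol message */ }

    void event_loop(int fd_file, int fd_x) {
        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);          // start with an empty descriptor set
            FD_SET(fd_file, &readable);  // watch the continuously updated file
            FD_SET(fd_x, &readable);     // watch the X server connection

            // Block until at least one descriptor is ready for reading.
            if (select(std::max(fd_file, fd_x) + 1, &readable,
                       nullptr, nullptr, nullptr) < 0)
                continue;                // e.g. interrupted by a signal (EINTR)

            if (FD_ISSET(fd_file, &readable)) handle_file_input(fd_file);
            if (FD_ISSET(fd_x, &readable))    handle_x_event(fd_x);
        }
    }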
One of the few things in Unix that does not conform to the file interface is asynchronous events (signals). Signals are received in signal handlers, small, limited pieces of code that run while the rest of the task is suspended; if a signal is received and handled while the task is blocking in select(), select will return early with EINTR; if a signal is received while the task is CPU bound, the task will be suspended between instructions until the signal handler returns.
Thus an obvious way to handle signals is for signal handlers to set a global flag and have the event loop check for the flag immediately before and after theselect()call; if it is set, handle the signal in the same manner as with events on file descriptors. Unfortunately, this gives rise to arace condition: if a signal arrives immediately between checking the flag and callingselect(), it will not be handled untilselect()returns for some other reason (for example, being interrupted by a frustrated user).
The solution arrived at byPOSIXis thepselect()call, which is similar toselect()but takes an additionalsigmaskparameter, which describes asignal mask. This allows an application to mask signals in the main task, then remove the mask for the duration of theselect()call such that signal handlers are only called while the application isI/O bound. However, implementations ofpselect()have not always been reliable; versions of Linux prior to 2.6.16 do not have apselect()system call,[1]forcingglibcto emulate it via a method prone to the very same race conditionpselect()is intended to avoid.
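A sketch of the pselect() pattern, assuming SIGINT is the signal of interest and fd is an already-open descriptor:

    #include <signal.h>
    #include <sys/select.h>

    static volatile sig_atomic_t got_signal = 0;
    static void on_signal(int) { got_signal = 1; }

    void event_loop_with_signals(int fd) {
        // Keep SIGINT blocked in the main task; remember the original mask.
        sigset_t blocked, original;
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGINT);
        sigprocmask(SIG_BLOCK, &blocked, &original);
        signal(SIGINT, on_signal);

        for (;;) {
            if (got_signal) { got_signal = 0; /* handle the signal as an event */ }

            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(fd, &readable);
            // The original mask is restored atomically only while pselect()
            // waits, so the signal cannot slip in between the flag check
            // above and the call itself.
            int n = pselect(fd + 1, &readable, nullptr, nullptr, nullptr, &original);
            if (n > 0 && FD_ISSET(fd, &readable)) {
                /* handle file-descriptor event */
            }
        }
    }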
An alternative, more portable solution, is to convert asynchronous events to file-based events using theself-pipe trick,[2]where "a signal handler writes a byte to a pipe whose other end is monitored byselect()in the main program".[3]InLinux kernelversion 2.6.22, a new system callsignalfd()was added, which allows receiving signals via a special file descriptor.
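A sketch of the self-pipe trick, again using SIGINT as the example signal:

    #include <signal.h>
    #include <sys/select.h>
    #include <unistd.h>

    static int sigpipe_fds[2];  // [0] = read end, [1] = write end

    static void on_signal(int) {
        char byte = 0;
        (void)write(sigpipe_fds[1], &byte, 1);  // write() is async-signal-safe
    }

    int main() {
        pipe(sigpipe_fds);
        signal(SIGINT, on_signal);

        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(sigpipe_fds[0], &readable);  // other descriptors would be added here

            if (select(sigpipe_fds[0] + 1, &readable, nullptr, nullptr, nullptr) < 0)
                continue;  // EINTR: interrupted by the signal itself

            if (FD_ISSET(sigpipe_fds[0], &readable)) {
                char byte;
                (void)read(sigpipe_fds[0], &byte, 1);  // drain the notification
                /* handle the signal as a normal event */
            }
        }
    }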
A web page and its JavaScript typically run in a single-threadedweb browserprocess. The browser process deals withmessagesfrom aqueueone at a time. A JavaScriptfunctionor another browser event might be associated with a given message. When the browser process has finished with a message, it proceeds to the next message in the queue.
On theMicrosoft Windowsoperating system, a process that interacts with the usermustaccept and react to incoming messages, which is almost inevitably done by amessage loopin that process. In Windows, a message is equated to an event created and imposed upon the operating system. An event can be user interaction, network traffic, system processing, timer activity, inter-process communication, among others. For non-interactive, I/O only events, Windows hasI/O completion ports. I/O completion port loops run separately from the Message loop, and do not interact with the Message loop out of the box.
The "heart" of mostWin32applicationsis theWinMain()function, which callsGetMessage()in a loop. GetMessage() blocks until a message, or "event", is received (with functionPeekMessage()as a non-blocking alternative). After some optional processing, it will callDispatchMessage(), which dispatches the message to the relevant handler, also known asWindowProc. Normally, messages that have no specialWindowProc()are dispatched toDefWindowProc, the default one. DispatchMessage() calls the WindowProc of theHWNDhandleof the message (registered with theRegisterClass()function).
More recent versions of Microsoft Windows guarantee to the programmer that messages will be delivered to an application's message loop in the order that they were perceived by the system and its peripherals. This guarantee is essential when considering the design consequences ofmultithreadedapplications.
However, some messages have different rules, such as messages that are always received last, or messages with a different documented priority.[4]
Xapplications usingXlibdirectly are built around theXNextEventfamily of functions;XNextEventblocks until an event appears on the event queue, whereupon the application processes it appropriately. The Xlib event loop only handles window system events; applications that need to be able to wait on other files and devices could construct their own event loop from primitives such asConnectionNumber, but in practice tend to usemultithreading.
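A minimal Xlib loop built on XNextEvent(), shown here creating a bare window and exiting on the first key press:

    #include <X11/Xlib.h>

    int main() {
        Display* display = XOpenDisplay(nullptr);  // connect to the X server
        if (!display) return 1;

        Window win = XCreateSimpleWindow(display, DefaultRootWindow(display),
                                         0, 0, 200, 100, 0, 0, 0);
        XSelectInput(display, win, ExposureMask | KeyPressMask);
        XMapWindow(display, win);

        for (;;) {
            XEvent event;
            XNextEvent(display, &event);   // blocks until the next event
            if (event.type == Expose)   { /* redraw the window */ }
            if (event.type == KeyPress) break;  // quit on any key press
        }
        XCloseDisplay(display);
    }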
Very few programs use Xlib directly. In the more common case, GUI toolkits based on Xlib usually support adding events. For example, toolkits based onXt IntrinsicshaveXtAppAddInput()andXtAppAddTimeout().
It is not safe to call Xlib functions from a signal handler, because the X application may have been interrupted in an arbitrary state, e.g. withinXNextEvent. See[1]for a solution for X11R5, X11R6 and Xt.
TheGLibevent loop was originally created for use inGTKbut is now used in non-GUI applications as well, such asD-Bus. The resource polled is the collection offile descriptorsthe application is interested in; the polling block will be interrupted if asignalarrives or atimeoutexpires (e.g. if the application has specified a timeout or idle task). While GLib has built-in support for file descriptor and child termination events, it is possible to add an event source for any event that can be handled in a prepare-check-dispatch model.[2]
Application libraries that are built on the GLib event loop includeGStreamerand theasynchronous I/Omethods ofGnomeVFS, butGTKremains the most visible client library. Events from thewindowing system(inX, read off the Xsocket) are translated byGDKinto GTK events and emitted as GLib signals on the application's widget objects.
In Apple's Core Foundation framework, exactly one CFRunLoop is allowed per thread, and arbitrarily many sources and observers can be attached. Sources then communicate with observers through the run loop, with it organising the queueing and dispatch of messages.
The CFRunLoop is abstracted inCocoaas an NSRunLoop, which allows any message (equivalent to a function call in non-reflectiveruntimes) to be queued for dispatch to any object.
https://en.wikipedia.org/wiki/Event_loop
Incomputing,preemptionis the act performed by an externalscheduler— without assistance or cooperation from the task — of temporarilyinterruptinganexecutingtask, with the intention of resuming it at a later time.[1]: 153This preemptive scheduler usually runs in the most privilegedprotection ring, meaning that interruption and then resumption are considered highly secure actions. Such changes to the currently executing task of aprocessorare known ascontext switching.
In any given system design, some operations performed by the system may not be preemptable. This usually applies tokernelfunctions and serviceinterruptswhich, if not permitted torun to completion, would tend to producerace conditionsresulting indeadlock. Barring the scheduler from preempting tasks while they are processing kernel functions simplifies the kernel design at the expense ofsystem responsiveness. The distinction betweenuser modeandkernel mode, which determines privilege level within the system, may also be used to distinguish whether a task is currently preemptable.
Most modern operating systems havepreemptive kernels, which are designed to permit tasks to be preempted even when in kernel mode. Examples of such operating systems areSolaris2.0/SunOS 5.0,[2]Windows NT,Linux kernel(2.5.4 and newer),[3]AIXand someBSDsystems (NetBSD, since version 5).
The termpreemptive multitaskingis used to distinguish amultitasking operating system, which permits preemption of tasks, from acooperative multitaskingsystem wherein processes or tasks must be explicitly programmed toyieldwhen they do not need system resources.
In simple terms: preemptive multitasking involves the use of an interrupt mechanism which suspends the currently executing process and invokes a scheduler to determine which process should execute next. Therefore, all processes will get some amount of CPU time over any given period.
In preemptive multitasking, the operating system kernel can also initiate a context switch to satisfy the scheduling policy's priority constraint, thus preempting the active task. In general, preemption means "prior seizure of"; when a higher-priority task seizes the processor from the currently running task, this is known as preemptive scheduling.
The term "preemptive multitasking" is sometimes mistakenly used when the intended meaning is more specific, referring instead to the class of scheduling policies known astime-shared scheduling, ortime-sharing.
Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process.
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In early systems, processes would often "poll" or "busy-wait" while waiting for requested input (such as disk, keyboard or network input). During this time, the process was not performing useful work, but still maintained complete control of the CPU. With the advent of interrupts and preemptive multitasking, these I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.
Although multitasking techniques were originally developed to allow multiple users to share a single machine, it became apparent that multitasking was useful regardless of the number of users. Many operating systems, from mainframes down to single-user personal computers and no-usercontrol systems(like those inrobotic spacecraft), have recognized the usefulness of multitasking support for a variety of reasons. Multitasking makes it possible for a single user to run multiple applications at the same time, or to run "background" processes while retaining control of the computer.
The period of time for which a process is allowed to run in a preemptive multitasking system is generally called the time slice or quantum.[1]: 158 The scheduler is run once every time slice to choose the next process to run. The length of each time slice can be critical to balancing system performance against process responsiveness: if the time slice is too short, the scheduler itself will consume too much processing time, but if the time slice is too long, processes will take longer to respond to input.
Aninterruptis scheduled to allow theoperating systemkernelto switch between processes when their time slices expire, effectively allowing the processor's time to be shared among a number of tasks, giving the illusion that it is dealing with these tasks in parallel (simultaneously). The operating system which controls such a design is called a multi-tasking system.
Today, nearly all operating systems support preemptive multitasking, including the current versions ofWindows,macOS,Linux(includingAndroid),iOSandiPadOS.
An early microcomputer operating system providing preemptive multitasking wasMicroware'sOS-9, available for computers based on theMotorola 6809, including home computers such as theTRS-80 Color Computer 2when configured with disk drives,[4]with the operating system supplied by Tandy as an upgrade.[5]Sinclair QDOS[6]:18andAmigaOSon theAmigawere also microcomputer operating systems offering preemptive multitasking as a core feature. These both ran onMotorola 68000-familymicroprocessorswithout memory management. Amiga OS useddynamic loadingof relocatable code blocks ("hunks" in Amiga jargon) to multitask preemptively all processes in the same flat address space.
Early operating systems for IBM PC compatibles, such as MS-DOS and PC DOS, did not support multitasking at all; however, alternative operating systems such as MP/M-86 (1981) and Concurrent CP/M-86 did support preemptive multitasking. Other Unix-like systems, including MINIX and Coherent, provided preemptive multitasking on 1980s-era personal computers.
LaterMS-DOScompatible systems natively supporting preemptive multitasking/multithreading includeConcurrent DOS,Multiuser DOS,Novell DOS(later calledCaldera OpenDOSandDR-DOS7.02 and higher). SinceConcurrent DOS 386, they could also run multiple DOS programs concurrently invirtual DOS machines.
The earliest version of Windows to support a limited form of preemptive multitasking wasWindows/386 2.0, which used theIntel 80386'sVirtual 8086 modeto run DOS applications invirtual 8086 machines, commonly known as "DOS boxes", which could be preempted. InWindows 95, 98 and Me, 32-bit applications were made preemptive by running each one in a separate address space, but 16-bit applications remained cooperative for backward compatibility.[7]In Windows 3.1x (protected mode), the kernel and virtual device drivers ran preemptively, but all 16-bit applications were non-preemptive and shared the same address space.
Preemptive multitasking has always been supported byWindows NT(all versions),OS/2(native applications),UnixandUnix-likesystems (such asLinux,BSDandmacOS),VMS,OS/360, and many other operating systems designed for use in the academic and medium-to-large business markets.
Early versions of theclassic Mac OSdid not support multitasking at all, with cooperative multitasking becoming available viaMultiFinderinSystem Software 5and then standard inSystem 7. Although there were plans to upgrade the cooperative multitasking found in the classic Mac OS to a preemptive model (and a preemptive API did exist inMac OS 9, although in a limited sense[8]), these were abandoned in favor ofMac OS X (now called macOS)that, as a hybrid of the old Mac System style andNeXTSTEP, is an operating system based on theMachkernel and derived in part fromBSD, which had always provided Unix-like preemptive multitasking.
https://en.wikipedia.org/wiki/Preemption_(computing)
Earliest deadline first(EDF) orleast time to gois a dynamic priorityscheduling algorithmused inreal-time operating systemsto place processes in apriority queue. Whenever a scheduling event occurs (task finishes, new task released, etc.) the queue will be searched for the process closest to its deadline. This process is the next to be scheduled for execution.
EDF is anoptimalscheduling algorithm on preemptive uniprocessors, in the following sense: if a collection of independentjobs,each characterized by an arrival time, an execution requirement and a deadline, can be scheduled (by any algorithm) in a way that ensures all the jobs complete by their deadline, the EDF will schedule this collection of jobs so they all complete by their deadline.
With scheduling periodic processes that have deadlines equal to their periods, EDF has a utilization bound of 100%. Thus, the schedulability test[1] for EDF is:

{\displaystyle U=\sum _{i=1}^{n}{\frac {C_{i}}{T_{i}}}\leq 1,}

where the C_i are the worst-case computation times of the n processes and the T_i are their respective inter-arrival periods (assumed to be equal to the relative deadlines).[2]
That is, EDF can guarantee that all deadlines are met provided that the totalCPUutilization is not more than 100%. Compared to fixed-priority scheduling techniques likerate-monotonic scheduling, EDF can guarantee all the deadlines in the system at higher loading.
Note that the schedulability test above assumes each deadline equals the task's period. When a deadline is less than the period, things are different. Here is an example: four periodic tasks need scheduling, where each task is written TaskNo(computation time, relative deadline, period). They are T0(5,13,20), T1(3,7,11), T2(4,6,10) and T3(1,1,20). This task group has utilization no greater than 1.0, calculated as 5/20 + 3/11 + 4/10 + 1/20 = 0.97 (rounded to two digits), but it is still unschedulable; see the EDF Scheduling Failure figure for details.
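A quick check of that utilization sum, as a sketch; the task tuples are those of the example above.

    #include <iostream>
    #include <vector>

    // Sum C_i / T_i for the four example tasks. Note the relative
    // deadlines (13, 7, 6, 1) play no part in this particular test,
    // which is why U <= 1 alone cannot prove schedulability here.
    int main() {
        std::vector<std::pair<double, double>> tasks = {
            {5, 20}, {3, 11}, {4, 10}, {1, 20}   // T0..T3 as (C, T)
        };
        double u = 0;
        for (auto [c, t] : tasks)
            u += c / t;
        std::cout << "U = " << u << "\n";        // prints about 0.9727
    }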
EDF is also an optimal scheduling algorithm on non-preemptive uniprocessors, but only among the class of scheduling algorithms that do not allow inserted idle time. When scheduling periodic processes that have deadlines equal to their periods, a sufficient (but not necessary) schedulability test for EDF becomes:[3]

{\displaystyle \sum _{i=1}^{n}{\frac {C_{i}}{T_{i}}}+p\leq 1,}

where p represents the penalty for non-preemption, given by {\displaystyle p=\max \left\{C_{i}\right\}/\min \left\{T_{i}\right\}}. If this factor can be kept small, non-preemptive EDF can be beneficial as it has low implementation overhead.
However, when the system is overloaded, the set of processes that will miss deadlines is largely unpredictable (it will be a function of the exact deadlines and the time at which the overload occurs). This is a considerable disadvantage to a real-time systems designer. The algorithm is also difficult to implement in hardware, and there is a tricky issue of representing deadlines in different ranges (deadlines cannot be more precise than the granularity of the clock used for the scheduling). If modular arithmetic is used to calculate future deadlines relative to now, the field storing a future relative deadline must accommodate at least the value of (twice the duration of the longest expected time to completion) plus "now". Therefore, EDF is not commonly found in industrial real-time computer systems.
Instead, most real-time computer systems usefixed-priority scheduling(usuallyrate-monotonic scheduling). With fixed priorities, it is easy to predict that overload conditions will cause the low-priority processes to miss deadlines, while the highest-priority process will still meet its deadline.
There is a significant body of research dealing with EDF scheduling inreal-time computing; it is possible to calculate worst case response times of processes in EDF, to deal with other types of processes than periodic processes and to use servers to regulate overloads.
Consider 3 periodic processes scheduled on a preemptive uniprocessor. The execution times and periods are as shown in the following table:

    Process   Execution time   Period
    P1              1             8
    P2              2             5
    P3              4            10
In this example, the units of time may be considered to be schedulabletime slices. The deadlines are that each periodic process must complete within its period.
In the timing diagram, the columns represent time slices with time increasing to the right, and the processes all start their periods at time slice 0. The timing diagram's alternating blue and white shading indicates each process's periods, with deadlines at the color changes.
The first process scheduled by EDF is P2, because its period is shortest, and therefore it has the earliest deadline. Likewise, when P2 completes, P1 is scheduled, followed by P3.
At time slice 5, both P2 and P3 have the same deadline, needing to complete before time slice 10, so EDF may schedule either one.
The utilization will be:
{\displaystyle \left({\frac {1}{8}}+{\frac {2}{5}}+{\frac {4}{10}}\right)=\left({\frac {37}{40}}\right)=0.925={\mathbf {92.5\%}}}
Since theleast common multipleof the periods is 40, the scheduling pattern can repeat every 40 time slices. But, only 37 of those 40 time slices are used by P1, P2, or P3. Since the utilization, 92.5%, is not greater than 100%, the system is schedulable with EDF.
Undesirable deadline interchanges may occur with EDF scheduling. A process may use a shared resource inside acritical section, to prevent it from being pre-emptively descheduled in favour of another process with an earlier deadline. If so, it becomes important for the scheduler to assign the running process the earliest deadline from among the other processes waiting for the resource. Otherwise the processes with earlier deadlines might miss them.
This is especially important if the process running the critical section has a much longer time to complete and exit from its critical section, which will delay releasing the shared resource. But the process might still be pre-empted in favour of others that have earlier deadlines but do not share the critical resource. This hazard of deadline interchange is analogous topriority inversionwhen usingfixed-priority pre-emptive scheduling.
To speed up the deadline search within the ready queue, the queue entries should be sorted according to their deadlines. When a new process or a periodic process is given a new deadline, it is inserted before the first process with a later deadline. This way, the processes with the earliest deadlines are always at the beginning of the queue.
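A sketch of that insertion rule over a linked list of ready processes; the Process fields are illustrative.

    #include <list>

    struct Process {
        int id;
        long deadline;   // absolute deadline, in scheduler ticks
    };

    // Insert p before the first entry whose deadline is later, keeping
    // the ready queue sorted; the earliest deadline stays at the front.
    void insert_by_deadline(std::list<Process>& ready, const Process& p) {
        auto it = ready.begin();
        while (it != ready.end() && it->deadline <= p.deadline)
            ++it;
        ready.insert(it, p);
    }
    // The dispatcher then always runs ready.front(), the process with
    // the earliest deadline.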
In a heavy-traffic analysis of the behavior of a single-server queue under an earliest-deadline-first scheduling policy with reneging,[4]the processes have deadlines and are served only until their deadlines elapse. The fraction of "reneged work", defined as the residual work not serviced due to elapsed deadlines, is an important performance measure.
It is commonly accepted that an implementation of fixed-priority pre-emptive scheduling (FPS) is simpler than a dynamic priority scheduler like EDF. However, when comparing the maximum achievable utilization of an optimal fixed-priority assignment (the priority of each thread given by rate-monotonic scheduling), EDF can reach 100% while the theoretical maximum for rate-monotonic scheduling is around 69%. In addition, the worst-case overhead of an EDF implementation (fully preemptive or limited/non-preemptive) for periodic and/or sporadic tasks can be made proportional to the logarithm of the largest time representation required by a given system (to encode deadlines and periods) using digital search trees.[5] In practical cases, such as embedded systems using a fixed, 32-bit representation of time, scheduling decisions can be made using this implementation in a small fixed-constant time which is independent of the number of system tasks. In such situations, experiments have found little discernible difference in overhead between EDF and FPS, even for task sets of (comparatively) large cardinality.[5]
Note that EDF does not make any specific assumption on the periodicity of the tasks; hence, it can be used for scheduling periodic as well as aperiodic tasks.[2]
Earliest deadline first (EDF) scheduling finds its most significant applications in real-time systems where missing deadlines can lead to critical consequences; these domains typically require deterministic timing guarantees.
Although EDF implementations are not common in commercial real-time kernels, a number of open-source and research real-time kernels do implement EDF.
https://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling
Anoperating system(OS) issystem softwarethat managescomputer hardwareandsoftwareresources, and provides commonservicesforcomputer programs.
Time-sharingoperating systemsschedule tasksfor efficient use of the system and may also include accounting software for cost allocation ofprocessor time,mass storage, peripherals, and other resources.
For hardware functions such asinput and outputandmemory allocation, the operating system acts as an intermediary between programs and the computer hardware,[1][2]although the application code is usually executed directly by the hardware and frequently makessystem callsto an OS function or isinterruptedby it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles toweb serversandsupercomputers.
As of September 2024[update],Androidis the most popular operating system with a 46% market share, followed byMicrosoft Windowsat 26%,iOSandiPadOSat 18%,macOSat 5%, andLinuxat 1%. Android, iOS, and iPadOS are mobile operating systems, while Windows, macOS, and Linux are desktop operating systems.[3]Linux distributionsare dominant in the server and supercomputing sectors. Other specialized classes of operating systems (special-purpose operating systems),[4][5]such asembeddedand real-time systems, exist for many applications.Security-focused operating systemsalso exist. Some operating systems have low system requirements (e.g.light-weight Linux distribution). Others may have higher system requirements.
Some operating systems require installation or may come pre-installed with purchased computers (OEM-installation), whereas others may run directly from media (i.e.live CD) or flash memory (i.e. a LiveUSB from aUSBstick).
An operating system is difficult to define,[6]but has been called "thelayer of softwarethat manages a computer's resources for its users and theirapplications".[7]Operating systems include the software that is always running, called akernel—but can include other software as well.[6][8]The two other types of programs that can run on a computer aresystem programs—which are associated with the operating system, but may not be part of the kernel—and applications—all other software.[8]
There are three main purposes that an operating system fulfills:[9]
Withmultiprocessorsmultiple CPUs share memory. Amulticomputerorcluster computerhas multiple CPUs, each of whichhas its own memory. Multicomputers were developed because large multiprocessors are difficult to engineer and prohibitively expensive;[17]they are universal incloud computingbecause of the size of the machine needed.[18]The different CPUs often need to send and receive messages to each other;[19]to ensure good performance, the operating systems for these machines need to minimize this copying ofpackets.[20]Newer systems are oftenmultiqueue—separating groups of users into separatequeues—to reduce the need for packet copying and support more concurrent users.[21]Another technique isremote direct memory access, which enables each CPU to access memory belonging to other CPUs.[19]Multicomputer operating systems often supportremote procedure callswhere a CPU can call aprocedureon another CPU,[22]ordistributed shared memory, in which the operating system usesvirtualizationto generate shared memory that does not physically exist.[23]
Adistributed systemis a group of distinct,networkedcomputers—each of which might have their own operating system and file system. Unlike multicomputers, they may be dispersed anywhere in the world.[24]Middleware, an additional software layer between the operating system and applications, is often used to improve consistency. Although it functions similarly to an operating system, it is not a true operating system.[25]
Embedded operating systemsare designed to be used inembedded computer systems, whether they areinternet of thingsobjects or not connected to a network. Embedded systems include many household appliances. The distinguishing factor is that they do not load user-installed software. Consequently, they do not need protection between different applications, enabling simpler designs. Very small operating systems might run in less than 10kilobytes,[26]and the smallest are forsmart cards.[27]Examples includeEmbedded Linux,QNX,VxWorks, and the extra-small systemsRIOTandTinyOS.[28]
A real-time operating system is an operating system that guarantees to process events or data by or at a specific moment in time. Hard real-time systems require exact timing and are common in manufacturing, avionics, military, and other similar uses.[28] With soft real-time systems, the occasional missed event is acceptable; this category often includes audio or multimedia systems, as well as smartphones.[28] In order for hard real-time systems to be sufficiently exact in their timing, they are often just a library with no protection between applications, such as eCos.[28]
Ahypervisoris an operating system that runs avirtual machine. The virtual machine is unaware that it is an application and operates as if it had its own hardware.[14][29]Virtual machines can be paused, saved, and resumed, making them useful for operating systems research, development,[30]and debugging.[31]They also enhance portability by enabling applications to be run on a computer even if they are not compatible with the base operating system.[14]
Alibrary operating system(libOS) is one in which the services that a typical operating system provides, such as networking, are provided in the form oflibrariesand composed with a single application and configuration code to construct aunikernel:[32]a specialized (only the absolute necessary pieces of code are extracted from libraries and bound together[33]),single address space, machine image that can be deployed to cloud or embedded environments.
The operating system code and application code are not executed in separated protection domains (there is only a single application running, at least conceptually, so there is no need to prevent interference between applications) and OS services are accessed via simple library calls (potentially inlining them based on compiler thresholds), without the usual overhead of context switches,[34] in a way similar to embedded and real-time OSes. Note that this overhead is not negligible: to the direct cost of mode switching it is necessary to add the indirect pollution of important processor structures (like CPU caches, the instruction pipeline, and so on), which affects both user-mode and kernel-mode performance.[35]
The first computers in the late 1940s and 1950s were directly programmed either with plugboards or with machine code inputted on media such as punch cards, without programming languages or operating systems.[36] After the introduction of the transistor in the mid-1950s, mainframes began to be built. These still needed professional operators[36] who manually did what a modern operating system would do, such as scheduling programs to run,[37] but mainframes still had rudimentary operating systems such as Fortran Monitor System (FMS) and IBSYS.[38] In the 1960s, IBM introduced the first series of intercompatible computers (System/360). All of them ran the same operating system—OS/360—which consisted of millions of lines of assembly language that had thousands of bugs. OS/360 was also the first popular operating system to support multiprogramming, such that the CPU could be put to use on one job while another was waiting on input/output (I/O). Holding multiple jobs in memory necessitated memory partitioning and safeguards against one job accessing the memory allocated to a different one.[39]
Around the same time, teleprinters began to be used as terminals so multiple users could access the computer simultaneously. The operating system MULTICS was intended to allow hundreds of users to access a large computer. Despite its limited adoption, it can be considered the precursor to cloud computing. The UNIX operating system originated as a development of MULTICS for a single user.[40] Because UNIX's source code was available, it became the basis of other, incompatible operating systems, of which the most successful were AT&T's System V and the University of California's Berkeley Software Distribution (BSD).[41] To increase compatibility, the IEEE released the POSIX standard for operating system application programming interfaces (APIs), which is supported by most UNIX systems. MINIX was a stripped-down version of UNIX, developed in 1987 for educational uses, that inspired the commercially available, free software Linux. Since 2008, MINIX has been used in controllers of most Intel microchips, while Linux is widespread in data centers and Android smartphones.[42]
The invention of large scale integration enabled the production of personal computers (initially called microcomputers) from around 1980.[43] For around five years, CP/M (Control Program for Microcomputers) was the most popular operating system for microcomputers.[44] Later, IBM bought DOS (Disk Operating System) from Microsoft. After modifications requested by IBM, the resulting system was called MS-DOS (MicroSoft Disk Operating System) and was widely used on IBM microcomputers. Later versions increased their sophistication, in part by borrowing features from UNIX.[44]
Apple's Macintosh was the first popular computer to use a graphical user interface (GUI). The GUI proved much more user friendly than the text-only command-line interface earlier operating systems had used. Following the success of Macintosh, MS-DOS was updated with a GUI overlay called Windows. Windows was later rewritten as a stand-alone operating system that borrowed so many features from another operating system (VAX VMS) that a large legal settlement was paid.[45] In the twenty-first century, Windows continues to be popular on personal computers but has a smaller share of the server market. UNIX operating systems, especially Linux, are the most popular on enterprise systems and servers but are also used on mobile devices and many other computer systems.[46]
On mobile devices,Symbian OSwas dominant at first, being usurped byBlackBerry OS(introduced 2002) andiOSforiPhones(from 2007). Later on, the open-sourceAndroidoperating system (introduced 2008), with a Linux kernel and a C library (Bionic) partially based on BSD code, became most popular.[47]
The components of an operating system are designed to ensure that various parts of a computer function cohesively. With the de facto obsolescence of DOS, all user software must interact with the operating system to access hardware.
The kernel is the part of the operating system that providesprotectionbetween different applications and users. This protection is key to improving reliability by keeping errors isolated to one program, as well as security by limiting the power ofmalicious softwareand protecting private data, and ensuring that one program cannot monopolize the computer's resources.[48]Most operating systems have two modes of operation:[49]inuser mode, the hardware checks that the software is only executing legal instructions, whereas the kernel hasunrestricted powersand is not subject to these checks.[50]The kernel also managesmemoryfor other processes and controls access toinput/outputdevices.[51]
The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program typically involves the creation of aprocessby the operating systemkernel, which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program, which then interacts with the user and with hardware devices. However, in some systems an application can request that the operating system execute another application within the same process, either as a subroutine or in a separate thread, e.g., theLINKandATTACHfacilities ofOS/360 and successors.
Aninterrupt(also known as anabort,exception,fault,signal,[52]ortrap)[53]provides an efficient way for most operating systems to react to the environment. Interrupts cause thecentral processing unit(CPU) to have acontrol flowchange away from the currently running program to aninterrupt handler, also known as an interrupt service routine (ISR).[54][55]An interrupt service routine may cause thecentral processing unit(CPU) to have acontext switch.[56][a]The details of how a computer processes an interrupt vary from architecture to architecture, and the details of how interrupt service routines behave vary from operating system to operating system.[57]However, several interrupt functions are common.[57]The architecture and operating system must:[57]
A software interrupt is a message to aprocessthat an event has occurred.[52]This contrasts with ahardware interrupt— which is a message to thecentral processing unit(CPU) that an event has occurred.[58]Software interrupts are similar to hardware interrupts — there is a change away from the currently running process.[59]Similarly, both hardware and software interrupts execute aninterrupt service routine.
Software interrupts may be normally occurring events. It is expected that atime slicewill occur, so the kernel will have to perform acontext switch.[60]Acomputer programmay set a timer to go off after a few seconds in case too much data causes an algorithm to take too long.[61]
Software interrupts may be error conditions, such as a malformedmachine instruction.[61]However, the most common error conditions aredivision by zeroandaccessing an invalid memory address.[61]
Userscan send messages to the kernel to modify the behavior of a currently running process.[61]For example, in thecommand-line environment, pressing theinterrupt character(usuallyControl-C) might terminate the currently running process.[61]
To generatesoftware interruptsforx86CPUs, theINTassembly languageinstruction is available.[62]The syntax isINT X, whereXis the offset number (inhexadecimalformat) to theinterrupt vector table.
To generatesoftware interruptsinUnix-likeoperating systems, thekill(pid,signum)system callwill send asignalto another process.[63]pidis theprocess identifierof the receiving process.signumis the signal number (inmnemonicformat)[b]to be sent. (The abrasive name ofkillwas chosen because early implementations only terminated the process.)[64]
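As an illustration, the following minimal C sketch installs a handler and then sends a signal to the calling process itself with kill(); the choice of SIGUSR1 and the flag-setting handler are illustrative, not prescribed by the interface.

    /* Minimal sketch: sending a signal with kill(). */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    static void handler(int signum) {
        got_signal = 1;              /* async-signal-safe: only set a flag */
    }

    int main(void) {
        signal(SIGUSR1, handler);    /* install the handler */
        kill(getpid(), SIGUSR1);     /* send SIGUSR1 to this very process */
        if (got_signal)
            printf("received SIGUSR1\n");
        return 0;
    }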
In Unix-like operating systems, signals inform processes of the occurrence of asynchronous events.[63] To communicate asynchronously, interrupts are required.[65] One reason a process may need to communicate asynchronously with another arises in a variation of the classic reader/writer problem.[66] The writer receives a pipe from the shell for its output to be sent to the reader's input stream.[67] The command-line syntax is alpha | bravo. alpha will write to the pipe when its computation is ready and then sleep in the wait queue.[68] bravo will then be moved to the ready queue and soon will read from its input stream.[69] The kernel will generate software interrupts to coordinate the piping.[69]
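A rough C sketch of how a shell might wire up the alpha | bravo pipeline with the pipe(), fork(), dup2(), and exec family of system calls follows; the program names alpha and bravo are the placeholders used above.

    /* Sketch: connecting two child processes with a pipe, as a shell does. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); exit(1); }

        if (fork() == 0) {               /* writer: alpha */
            dup2(fd[1], STDOUT_FILENO);  /* stdout -> write end of pipe */
            close(fd[0]); close(fd[1]);
            execlp("alpha", "alpha", (char *)NULL);
            _exit(127);                  /* exec failed */
        }
        if (fork() == 0) {               /* reader: bravo */
            dup2(fd[0], STDIN_FILENO);   /* stdin <- read end of pipe */
            close(fd[0]); close(fd[1]);
            execlp("bravo", "bravo", (char *)NULL);
            _exit(127);
        }
        close(fd[0]); close(fd[1]);      /* parent keeps no pipe ends */
        while (wait(NULL) > 0) ;         /* reap both children */
        return 0;
    }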
Signalsmay be classified into 7 categories.[63]The categories are:
Input/output(I/O)devicesare slower than the CPU. Therefore, it would slow down the computer if the CPU had towaitfor each I/O to finish. Instead, a computer may implement interrupts for I/O completion, avoiding the need forpollingor busy waiting.[70]
Some computers require an interrupt for each character or word, costing a significant amount of CPU time.Direct memory access(DMA) is an architecture feature to allow devices to bypass the CPU and accessmain memorydirectly.[71](Separate from the architecture, a device may perform direct memory access[c]to and from main memory either directly or via a bus.)[72][d]
When a computer user types a key on the keyboard, typically the character appears immediately on the screen. Likewise, when a user moves a mouse, the cursor immediately moves across the screen. Each keystroke and mouse movement generates an interrupt; this style of handling is called interrupt-driven I/O, in which a process causes an interrupt for every character[72] or word[73] transmitted.
Devices such ashard disk drives,solid-state drives, andmagnetic tapedrives can transfer data at a rate high enough that interrupting the CPU for every byte or word transferred, and having the CPU transfer the byte or word between the device and memory, would require too much CPU time. Data is, instead, transferred between the device and memory independently of the CPU by hardware such as achannelor adirect memory accesscontroller; an interrupt is delivered only when all the data is transferred.[74]
If a computer program executes a system call to perform a block I/O write operation, the system call sets up the device transfer and blocks the calling process until the write completes.
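From user space, such a blocking write might look like the following minimal C sketch; the file name and buffer size are illustrative, and the kernel performs the device programming and blocking on the program's behalf.

    /* Sketch: a blocking block-I/O write as seen from user space. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[4096] = {0};
        int fd = open("/tmp/example.dat", O_WRONLY | O_CREAT, 0644);
        if (fd == -1) { perror("open"); return 1; }
        /* The process may block here until the device signals completion. */
        ssize_t n = write(fd, buf, sizeof buf);
        printf("wrote %zd bytes\n", n);
        close(fd);
        return 0;
    }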
While the writing takes place, the operating system will context switch to other processes as normal. When the device finishes writing, the device willinterruptthe currently running process byassertinganinterrupt request. The device will also place an integer onto the data bus.[78]Upon accepting the interrupt request, the operating system will:
When the writing process's time slice expires, the operating system will:[79]
With the program counter now reset, the interrupted process will resume its time slice.[57]
Among other things, a multiprogramming operating systemkernelmust be responsible for managing all system memory which is currently in use by the programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory.
Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of thekernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program tocrashthe system.
Memory protectionenables thekernelto limit a process' access to the computer's memory. Various methods of memory protection exist, includingmemory segmentationandpaging. All methods require some level of hardware support (such as the80286MMU), which does not exist in all computers.
In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation (or Seg-V for short); since it is difficult to assign a meaningful result to such an operation and it is usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program and reports the error.
Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. Ageneral protection faultwould be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
If a program tries to access memory that is not accessible,[e] but nonetheless has been allocated to it, the kernel is interrupted (see § Memory management). This kind of interrupt is typically a page fault.
When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has been allocated yet.
In modern operating systems, memory which is accessed less frequently can be temporarily stored on a disk or other media to make that space available for use by other programs. This is calledswapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
Virtual memory provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.[80]
Concurrencyrefers to the operating system's ability to carry out multiple tasks simultaneously.[81]Virtually all modern operating systems support concurrency.[82]
Threadsenable splitting a process' work into multiple parts that can run simultaneously.[83]The number of threads is not limited by the number of processors available. If there are more threads than processors, the operating systemkernelschedules, suspends, and resumes threads, controlling when each thread runs and how much CPU time it receives.[84]During acontext switcha running thread is suspended, its state is saved into thethread control blockand stack, and the state of the new thread is loaded in.[85]Historically, on many systems a thread could run until it relinquished control (cooperative multitasking). Because this model can allow a single thread to monopolize the processor, most operating systems now caninterrupta thread (preemptive multitasking).[86]
Threads have their own thread ID,program counter(PC), aregisterset, and astack, but share code,heapdata, and other resources with other threads of the same process.[87][88]Thus, there is less overhead to create a thread than a new process.[89]On single-CPU systems, concurrency is switching between processes. Many computers have multiple CPUs.[90]Parallelismwith multiple threads running on different CPUs can speed up a program, depending on how much of it can be executed concurrently.[91]
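A minimal sketch using POSIX threads shows the pattern of splitting work across threads; the thread count and the trivial worker function are illustrative.

    /* Sketch: creating and joining several POSIX threads. */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4

    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d running\n", id);  /* threads share code and heap */
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_THREADS];
        int ids[NUM_THREADS];

        for (int i = 0; i < NUM_THREADS; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, worker, &ids[i]);
        }
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL); /* wait for each thread to finish */
        return 0;
    }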
Permanent storage devices used in twenty-first century computers, unlikevolatiledynamic random-access memory(DRAM), are still accessible after acrashorpower failure. Permanent (non-volatile) storage is much cheaper per byte, but takes several orders of magnitude longer to access, read, and write.[92][93]The two main technologies are ahard driveconsisting ofmagnetic disks, andflash memory(asolid-state drivethat stores data in electrical circuits). The latter is more expensive but faster and more durable.[94][95]
File systemsare anabstractionused by the operating system to simplify access to permanent storage. They provide human-readablefilenamesand othermetadata, increase performance viaamortizationof accesses, prevent multiple threads from accessing the same section of memory, and includechecksumsto identifycorruption.[96]File systems are composed of files (named collections of data, of an arbitrary size) anddirectories(also called folders) that list human-readable filenames and other directories.[97]An absolutefile pathbegins at theroot directoryand listssubdirectoriesdivided by punctuation, while a relative path defines the location of a file from a directory.[98][99]
System calls(which are sometimeswrappedby libraries) enable applications to create, delete, open, and close files, as well as link, read, and write to them. All these operations are carried out by the operating system on behalf of the application.[100]The operating system's efforts to reduce latency include storing recently requested blocks of memory in acacheandprefetchingdata that the application has not asked for, but might need next.[101]Device driversare software specific to eachinput/output(I/O) device that enables the operating system to work without modification over different hardware.[102][103]
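The following minimal C sketch exercises the file operations just described through the usual library wrappers around the system calls; the path /tmp/demo.txt is illustrative.

    /* Sketch: create, write, read back, close, and delete a file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "hello, file system\n";
        int fd = open("/tmp/demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) { perror("open"); return 1; }

        write(fd, msg, strlen(msg));   /* carried out by the OS on our behalf */
        lseek(fd, 0, SEEK_SET);        /* rewind to read back what was written */

        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }

        close(fd);
        unlink("/tmp/demo.txt");       /* delete the file again */
        return 0;
    }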
Another component of file systems is adictionarythat maps a file's name and metadata to thedata blockwhere its contents are stored.[104]Most file systems use directories to convert file names to file numbers. To find the block number, the operating system uses anindex(often implemented as atree).[105]Separately, there is a free spacemapto track free blocks, commonly implemented as abitmap.[105]Although any free block can be used to store a new file, many operating systems try to group together files in the same directory to maximize performance, or periodically reorganize files to reducefragmentation.[106]
Maintaining data reliability in the face of a computer crash or hardware failure is another concern.[107]File writing protocols are designed with atomic operations so as not to leave permanent storage in a partially written, inconsistent state in the event of a crash at any point during writing.[108]Data corruption is addressed by redundant storage (for example, RAID—redundant array of inexpensive disks)[109][110]andchecksumsto detect when data has been corrupted. With multiple layers of checksums and backups of a file, a system can recover from multiple hardware failures. Background processes are often used to detect and recover from data corruption.[110]
Security means protecting users from other users of the same computer, as well as from those seeking remote access to it over a network.[111] Operating systems security rests on achieving the CIA triad: confidentiality (unauthorized users cannot access data), integrity (unauthorized users cannot modify data), and availability (ensuring that the system remains available to authorized users, even in the event of a denial of service attack).[112] As with other computer systems, isolating security domains—in the case of operating systems, the kernel, processes, and virtual machines—is key to achieving security.[113] Other ways to increase security include simplicity to minimize the attack surface, locking access to resources by default, checking all requests for authorization, the principle of least authority (granting the minimum privilege essential for performing a task), privilege separation, and reducing shared data.[114]
Some operating system designs are more secure than others. Those with no isolation between the kernel and applications are least secure, while those with amonolithic kernellike most general-purpose operating systems are still vulnerable if any part of the kernel is compromised. A more secure design featuresmicrokernelsthat separate the kernel's privileges into many separate security domains and reduce the consequences of a single kernel breach.[115]Unikernelsare another approach that improves security by minimizing the kernel and separating out other operating systems functionality by application.[115]
Most operating systems are written in C or C++, languages that create potential vulnerabilities for exploitation. Despite attempts to protect against them, vulnerabilities are caused by buffer overflow attacks, which are enabled by the lack of bounds checking.[116] Hardware vulnerabilities, some of them caused by CPU optimizations, can also be used to compromise the operating system.[117] There are known instances of operating system programmers deliberately implanting vulnerabilities, such as back doors.[118]
Operating systems security is hampered by their increasing complexity and the resulting inevitability of bugs.[119]Becauseformal verificationof operating systems may not be feasible, developers use operating systemhardeningto reduce vulnerabilities,[120]e.g.address space layout randomization,control-flow integrity,[121]access restrictions,[122]and other techniques.[123]There are no restrictions on who can contribute code to open source operating systems; such operating systems have transparent change histories and distributed governance structures.[124]Open source developers strive to work collaboratively to find and eliminate security vulnerabilities, usingcode reviewandtype checkingto expunge malicious code.[125][126]Andrew S. Tanenbaumadvises releasing thesource codeof all operating systems, arguing that it prevents developers from placing trust in secrecy and thus relying on the unreliable practice ofsecurity by obscurity.[127]
A user interface (UI) is essential to support human interaction with a computer. The two most common user interface types for any computer are the command-line interface, where commands are typed as lines of text, and the graphical user interface.
For personal computers, includingsmartphonesandtablet computers, and forworkstations, user input is typically from a combination ofkeyboard,mouse, andtrackpadortouchscreen, all of which are connected to the operating system with specialized software.[128]Personal computer users who are not software developers or coders often prefer GUIs for both input and output; GUIs are supported by most personal computers.[129]The software to support GUIs is more complex than a command line for input and plain text output. Plain text output is often preferred by programmers, and is easy to support.[130]
A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and has few users and active developers.[131]
In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is their own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests.
Examples of hobby operating systems includeSyllableandTempleOS.
If an application is written for use on a specific operating system, and isportedto another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwisemaintained.
This cost in supporting operating systems diversity can be avoided by instead writing applications againstsoftware platformssuch asJavaorQt. These abstractions have already borne the cost of adaptation to specific operating systems and theirsystem libraries.
Another approach is for operating system vendors to adopt standards. For example,POSIXandOS abstraction layersprovide commonalities that reduce porting costs.
As of September 2024, Android (based on the Linux kernel) is the most popular operating system with a 46% market share, followed by Microsoft Windows at 26%, iOS and iPadOS at 18%, macOS at 5%, and Linux at 1%. Android, iOS, and iPadOS are mobile operating systems, while Windows, macOS, and Linux are desktop operating systems.[3]
Linuxis afree softwaredistributed under theGNU General Public License(GPL), which means that all of its derivatives are legally required to release theirsource code.[132]Linux was designed by programmers for their own use, thus emphasizing simplicity and consistency, with a small number of basic elements that can be combined in nearly unlimited ways, and avoiding redundancy.[133]
Its design is similar to other UNIX systems not using amicrokernel.[134]It is written inC[135]and usesUNIX System Vsyntax, but also supportsBSDsyntax. Linux supports standard UNIX networking features, as well as the full suite of UNIX tools, whilesupporting multiple usersand employingpreemptive multitasking. Initially of a minimalist design, Linux is a flexible system that can work in under 16MBofRAM, but still is used on largemultiprocessorsystems.[134]Similar to other UNIX systems, Linuxdistributionsare composed of akernel,system libraries, andsystem utilities.[136]Linux has agraphical user interface(GUI) with a desktop, folder and file icons, as well as the option to access the operating system via acommand line.[137]
Androidis a partially open-source operating system closely based on Linux and has become the most widely used operating system by users, due to its popularity onsmartphonesand, to a lesser extent,embedded systemsneeding a GUI, such as "smart watches,automotive dashboards, airplane seatbacks,medical devices, andhome appliances".[138]Unlike Linux, much of Android is written inJavaand usesobject-oriented design.[139]
Windows is aproprietaryoperating system that is widely used on desktop computers, laptops, tablets, phones,workstations,enterprise servers, andXboxconsoles.[141]The operating system was designed for "security, reliability, compatibility, high performance, extensibility, portability, and international support"—later on,energy efficiencyand support fordynamic devicesalso became priorities.[142]
Windows Executiveworks viakernel-mode objectsfor important data structures like processes, threads, and sections (memory objects, for example files).[143]The operating system supportsdemand pagingofvirtual memory, which speeds up I/O for many applications. I/Odevice driversuse theWindows Driver Model.[143]TheNTFSfile system has a master table and each file is represented as arecordwithmetadata.[144]The scheduling includespreemptive multitasking.[145]Windows has many security features;[146]especially important are the use ofaccess-control listsandintegrity levels. Every process has an authentication token and each object is given a security descriptor. Later releases have added even more security features.[144]
|
https://en.wikipedia.org/wiki/Operating_system
|
niceis a program found onUnixandUnix-likeoperating systemssuch asLinux. It directly maps to akernelcallof the same name.niceis used to invoke autilityorshell scriptwith a particularCPU priority, thus giving theprocessmore or less CPU time than other processes. A niceness of -20 is the lowest niceness, or highest priority. The default niceness for processes is inherited from its parent process and is usually 0.
Systems have diverged on what priority is the lowest. Linux systems document a niceness of 19 as the lowest priority,[1] while BSD systems document 20 as the lowest priority.[2] In both cases, the "lowest" priority is documented as running only when nothing else wants to.
Niceness valueis a number attached to processes in *nix systems, that is used along with other data (such as the amount ofI/Odone by each process) by the kernel process scheduler to calculate a process' 'true priority'—which is used to decide how much CPU time is allocated to it.
The program's name,nice, is an allusion to its task of modifying a process' niceness value.
The term niceness itself originates from the idea that a process with a higher niceness value is nicer to other processes in the system and to users by virtue of demanding less CPU power—freeing up processing time and power for the more demanding programs, which would in this case be less nice to the system from a CPU usage perspective.[3]
nicebecomes useful when several processes are demanding more resources than theCPUcan provide. In this state, a higher-priority process will get a larger chunk of the CPU time than a lower-priority process. Only thesuperuser(root) may set the niceness to a lower value (i.e. a higher priority). On Linux it is possible to change/etc/security/limits.confto allow other users or groups to set low nice values.[4]
If a user wanted to compress a large file without slowing down other processes, they might run the following:
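    # archive and input file names are illustrative
    nice -n 19 tar cvzf archive.tgz largefile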
The exact mathematical effect of setting a particular niceness value for a process depends on the details of how the scheduler is designed on that implementation of Unix. A particular operating system's scheduler will also have various heuristics built into it (e.g. to favor processes that are mostly I/O-bound over processes that are CPU-bound). As a simple example, when two otherwise identical CPU-bound processes are running simultaneously on a single-CPU Linux system, each one's share of the CPU time will be proportional to 20 − p, where p is the process' priority. Thus a process, run with nice +15, will receive 25% of the CPU time allocated to a normal-priority process: (20 − 15)/(20 − 0) = 0.25.[5] On the BSD 4.x scheduler, on the other hand, the ratio in the same example is about ten to one.
The relatedreniceprogram can be used to change the priority of a process that is already running.[1]
Linux also has anioniceprogram, which affects scheduling of I/O rather than CPU time.[6]
|
https://en.wikipedia.org/wiki/Nice_(Unix)
|
Incomputer science, analgorithmis callednon-blockingif failure orsuspensionof anythreadcannot cause failure or suspension of another thread;[1]for some operations, these algorithms provide a useful alternative to traditionalblocking implementations. A non-blocking algorithm islock-freeif there is guaranteed system-wideprogress, andwait-freeif there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003.[2]
The word "non-blocking" was traditionally used to describetelecommunications networksthat could route a connection through a set of relays "without having to re-arrange existing calls"[This quote needs a citation](seeClos network). Also, if the telephone exchange "is not defective, it can always make the connection"[This quote needs a citation](seenonblocking minimal spanning switch).
The traditional approach to multi-threaded programming is to uselocksto synchronize access to sharedresources. Synchronization primitives such asmutexes,semaphores, andcritical sectionsare all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free.
Blocking a thread can be undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything: if the blocked thread had been performing a high-priority orreal-timetask, it would be highly undesirable to halt its progress.
Other problems are less obvious. For example, certain interactions between locks can lead to error conditions such asdeadlock,livelock, andpriority inversion. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities forparallelism, and fine-grained locking, which requires more careful design, increases locking overhead and is more prone to bugs.
Unlike blocking algorithms, non-blocking algorithms do not suffer from these downsides, and in addition are safe for use ininterrupt handlers: even though thepreemptedthread cannot be resumed, progress is still possible without it. In contrast, global data structures protected by mutual exclusion cannot safely be accessed in an interrupt handler, as the preempted thread may be the one holding the lock. While this can be rectified by masking interrupt requests during the critical section, this requires the code in the critical section to have bounded (and preferably short) running time, or excessive interrupt latency may be observed.[3]
A lock-free data structure can be used to improve performance: it increases the amount of time spent in parallel execution rather than serial execution, improving performance on a multi-core processor, because access to the shared data structure does not need to be serialized to stay coherent.[4]
With few exceptions, non-blocking algorithms useatomicread-modify-writeprimitives that the hardware must provide, the most notable of which iscompare and swap (CAS).Critical sectionsare almost always implemented using standard interfaces over these primitives (in the general case, critical sections will be blocking, even when implemented with these primitives). In the 1990s all non-blocking algorithms had to be written "natively" with the underlying primitives to achieve acceptable performance. However, the emerging field ofsoftware transactional memorypromises standard abstractions for writing efficient non-blocking code.[5][6]
Much research has also been done in providing basicdata structuressuch asstacks,queues,sets, andhash tables. These allow programs to easily exchange data between threads asynchronously.
Additionally, some non-blocking data structures are weak enough to be implemented without special atomic primitives. These exceptions include:
Several libraries internally use lock-free techniques,[7][8][9]but it is difficult to write lock-free code that is correct.[10][11][12][13]
Non-blocking algorithms generally involve a series of read, read-modify-write, and write instructions in a carefully designed order. Optimizing compilers can aggressively re-arrange operations. Even when they don't, many modern CPUs often re-arrange such operations (they have a "weak consistency model"), unless a memory barrier is used to tell the CPU not to reorder. C++11 programmers can use std::atomic in <atomic>, and C11 programmers can use <stdatomic.h>, both of which supply types and functions that tell the compiler not to re-arrange such instructions, and to insert the appropriate memory barriers.[14]
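As a minimal C11 illustration, the sketch below implements a lock-free counter increment as a compare-and-swap retry loop over <stdatomic.h>; the counter itself is illustrative, not drawn from any particular library.

    /* Sketch: lock-free increment via a compare-and-swap retry loop. */
    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic long counter = 0;

    static void increment(void) {
        long old = atomic_load(&counter);
        /* Retry until no other thread changed the value in between;
           on failure, 'old' is reloaded with the current value. */
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
            ;
    }

    int main(void) {
        increment();
        printf("counter = %ld\n", atomic_load(&counter));
        return 0;
    }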
Wait-freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput withstarvation-freedom. An algorithm is wait-free if every operation has a bound on the number of steps the algorithm will take before the operation completes.[15]This property is critical for real-time systems and is always nice to have as long as the performance cost is not too high.
It was shown in the 1980s[16]that all algorithms can be implemented wait-free, and many transformations from serial code, calleduniversal constructions, have been demonstrated. However, the resulting performance does not in general match even naïve blocking designs. Several papers have since improved the performance of universal constructions, but still, their performance is far below blocking designs.
Several papers have investigated the difficulty of creating wait-free algorithms. For example, it has been shown[17]that the widely available atomicconditionalprimitives,CASandLL/SC, cannot provide starvation-free implementations of many common data structures without memory costs growing linearly in the number of threads.
However, these lower bounds do not present a real barrier in practice, as spending a cache line or exclusive reservation granule (up to 2 KB on ARM) of store per thread in the shared memory is not considered too costly for practical systems. Typically, the amount of store logically required is a word, but physically CAS operations on the same cache line will collide, and LL/SC operations in the same exclusive reservation granule will collide, so the amount of store physically required is greater.
Wait-free algorithms were rare until 2011, both in research and in practice. However, in 2011 Kogan andPetrank[18]presented a wait-free queue building on theCASprimitive, generally available on common hardware. Their construction expanded the lock-free queue of Michael and Scott,[19]which is an efficient queue often used in practice. A follow-up paper by Kogan and Petrank[20]provided a method for making wait-free algorithms fast and used this method to make the wait-free queue practically as fast as its lock-free counterpart. A subsequent paper by Timnat and Petrank[21]provided an automatic mechanism for generating wait-free data structures from lock-free ones. Thus, wait-free implementations are now available for many data-structures.
Under reasonable assumptions, Alistarh, Censor-Hillel, and Shavit showed that lock-free algorithms are practically wait-free.[22]Thus, in the absence of hard deadlines, wait-free algorithms may not be worth the additional complexity that they introduce.
Lock-freedom allows individual threads to starve but guarantees system-wide throughput. An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress). All wait-free algorithms are lock-free.
In particular, if one thread is suspended, then a lock-free algorithm guarantees that the remaining threads can still make progress. Hence, if two threads can contend for the same mutex lock or spinlock, then the algorithm isnotlock-free. (If we suspend one thread that holds the lock, then the second thread will block.)
An algorithm is lock-free if, infinitely often, operations by some processors succeed in a finite number of steps. For instance, if N processors are trying to execute an operation, some of the N processes will succeed in finishing the operation in a finite number of steps, while others might fail and retry on failure. The difference between wait-free and lock-free is that with wait-freedom, each process's operation is guaranteed to succeed in a finite number of steps, regardless of the other processors.
In general, a lock-free algorithm can run in four phases: completing one's own operation, assisting an obstructing operation, aborting an obstructing operation, and waiting. Completing one's own operation is complicated by the possibility of concurrent assistance and abortion, but is invariably the fastest path to completion.
The decision about when to assist, abort or wait when an obstruction is met is the responsibility of acontention manager. This may be very simple (assist higher priority operations, abort lower priority ones), or may be more optimized to achieve better throughput, or lower the latency of prioritized operations.
Correct concurrent assistance is typically the most complex part of a lock-free algorithm, and often very costly to execute: not only does the assisting thread slow down, but thanks to the mechanics of shared memory, the thread being assisted will be slowed, too, if it is still running.
Obstruction-freedom is the weakest natural non-blocking progress guarantee. An algorithm is obstruction-free if at any point, a single thread executed in isolation (i.e., with all obstructing threads suspended) for a bounded number of steps will complete its operation.[15]All lock-free algorithms are obstruction-free.
Obstruction-freedom demands only that any partially completed operation can be aborted and the changes made rolled back. Dropping concurrent assistance can often result in much simpler algorithms that are easier to validate. Preventing the system from continuallylive-lockingis the task of a contention manager.
Some obstruction-free algorithms use a pair of "consistency markers" in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again.
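A simplified C sketch of this consistency-marker pattern follows, assuming a single writer; it glosses over the explicit memory barriers around the plain accesses to the shared fields that a production implementation would need, and the field names are illustrative.

    /* Sketch: seqlock-style read with a pair of consistency markers. */
    #include <stdatomic.h>

    struct data { long a, b; };

    static _Atomic unsigned seq = 0;   /* even: stable, odd: write in progress */
    static struct data shared;

    void write_data(long a, long b) {  /* single writer assumed */
        atomic_fetch_add(&seq, 1);     /* first marker: count becomes odd */
        shared.a = a;
        shared.b = b;
        atomic_fetch_add(&seq, 1);     /* second marker: count even again */
    }

    struct data read_data(void) {
        struct data copy;
        unsigned before, after;
        do {
            before = atomic_load(&seq);
            copy = shared;             /* copy into a private buffer */
            after = atomic_load(&seq);
        } while (before != after || (before & 1U)); /* retry if markers differ */
        return copy;
    }

    int main(void) {
        write_data(1, 2);
        struct data d = read_data();
        return (d.a + d.b == 3) ? 0 : 1;
    }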
|
https://en.wikipedia.org/wiki/Non-blocking_synchronization
|
Incomputing,preemptionis the act performed by an externalscheduler— without assistance or cooperation from the task — of temporarilyinterruptinganexecutingtask, with the intention of resuming it at a later time.[1]: 153This preemptive scheduler usually runs in the most privilegedprotection ring, meaning that interruption and then resumption are considered highly secure actions. Such changes to the currently executing task of aprocessorare known ascontext switching.
In any given system design, some operations performed by the system may not be preemptable. This usually applies tokernelfunctions and serviceinterruptswhich, if not permitted torun to completion, would tend to producerace conditionsresulting indeadlock. Barring the scheduler from preempting tasks while they are processing kernel functions simplifies the kernel design at the expense ofsystem responsiveness. The distinction betweenuser modeandkernel mode, which determines privilege level within the system, may also be used to distinguish whether a task is currently preemptable.
Most modern operating systems havepreemptive kernels, which are designed to permit tasks to be preempted even when in kernel mode. Examples of such operating systems areSolaris2.0/SunOS 5.0,[2]Windows NT,Linux kernel(2.5.4 and newer),[3]AIXand someBSDsystems (NetBSD, since version 5).
The termpreemptive multitaskingis used to distinguish amultitasking operating system, which permits preemption of tasks, from acooperative multitaskingsystem wherein processes or tasks must be explicitly programmed toyieldwhen they do not need system resources.
In simple terms: preemptive multitasking involves the use of an interrupt mechanism which suspends the currently executing process and invokes a scheduler to determine which process should execute next. Therefore, all processes will get some amount of CPU time over any given span of time.
In preemptive multitasking, the operating system kernel can also initiate a context switch to satisfy the scheduling policy's priority constraint, thus preempting the active task. In general, preemption means "prior seizure of". When a higher-priority task seizes the currently running task, this is known as preemptive scheduling.
The term "preemptive multitasking" is sometimes mistakenly used when the intended meaning is more specific, referring instead to the class of scheduling policies known astime-shared scheduling, ortime-sharing.
Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process.
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In early systems, processes would often "poll" or "busy-wait" while waiting for requested input (such as disk, keyboard or network input). During this time, the process was not performing useful work, but still maintained complete control of the CPU. With the advent of interrupts and preemptive multitasking, these I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.
Although multitasking techniques were originally developed to allow multiple users to share a single machine, it became apparent that multitasking was useful regardless of the number of users. Many operating systems, from mainframes down to single-user personal computers and no-usercontrol systems(like those inrobotic spacecraft), have recognized the usefulness of multitasking support for a variety of reasons. Multitasking makes it possible for a single user to run multiple applications at the same time, or to run "background" processes while retaining control of the computer.
The period of time for which a process is allowed to run in a preemptive multitasking system is generally called the time slice or quantum.[1]: 158 The scheduler is run once every time slice to choose the next process to run. The length of each time slice can be critical to balancing system performance versus process responsiveness: if the time slice is too short then the scheduler itself will consume too much processing time, but if the time slice is too long, processes will take longer to respond to input.
Aninterruptis scheduled to allow theoperating systemkernelto switch between processes when their time slices expire, effectively allowing the processor's time to be shared among a number of tasks, giving the illusion that it is dealing with these tasks in parallel (simultaneously). The operating system which controls such a design is called a multi-tasking system.
Today, nearly all operating systems support preemptive multitasking, including the current versions ofWindows,macOS,Linux(includingAndroid),iOSandiPadOS.
An early microcomputer operating system providing preemptive multitasking wasMicroware'sOS-9, available for computers based on theMotorola 6809, including home computers such as theTRS-80 Color Computer 2when configured with disk drives,[4]with the operating system supplied by Tandy as an upgrade.[5]Sinclair QDOS[6]:18andAmigaOSon theAmigawere also microcomputer operating systems offering preemptive multitasking as a core feature. These both ran onMotorola 68000-familymicroprocessorswithout memory management. Amiga OS useddynamic loadingof relocatable code blocks ("hunks" in Amiga jargon) to multitask preemptively all processes in the same flat address space.
Early operating systems for IBM PC compatibles, such as MS-DOS and PC DOS, did not support multitasking at all; however, alternative operating systems such as MP/M-86 (1981) and Concurrent CP/M-86 did support preemptive multitasking. Other Unix-like systems, including MINIX and Coherent, provided preemptive multitasking on 1980s-era personal computers.
LaterMS-DOScompatible systems natively supporting preemptive multitasking/multithreading includeConcurrent DOS,Multiuser DOS,Novell DOS(later calledCaldera OpenDOSandDR-DOS7.02 and higher). SinceConcurrent DOS 386, they could also run multiple DOS programs concurrently invirtual DOS machines.
The earliest version of Windows to support a limited form of preemptive multitasking wasWindows/386 2.0, which used theIntel 80386'sVirtual 8086 modeto run DOS applications invirtual 8086 machines, commonly known as "DOS boxes", which could be preempted. InWindows 95, 98 and Me, 32-bit applications were made preemptive by running each one in a separate address space, but 16-bit applications remained cooperative for backward compatibility.[7]In Windows 3.1x (protected mode), the kernel and virtual device drivers ran preemptively, but all 16-bit applications were non-preemptive and shared the same address space.
Preemptive multitasking has always been supported byWindows NT(all versions),OS/2(native applications),UnixandUnix-likesystems (such asLinux,BSDandmacOS),VMS,OS/360, and many other operating systems designed for use in the academic and medium-to-large business markets.
Early versions of theclassic Mac OSdid not support multitasking at all, with cooperative multitasking becoming available viaMultiFinderinSystem Software 5and then standard inSystem 7. Although there were plans to upgrade the cooperative multitasking found in the classic Mac OS to a preemptive model (and a preemptive API did exist inMac OS 9, although in a limited sense[8]), these were abandoned in favor ofMac OS X (now called macOS)that, as a hybrid of the old Mac System style andNeXTSTEP, is an operating system based on theMachkernel and derived in part fromBSD, which had always provided Unix-like preemptive multitasking.
|
https://en.wikipedia.org/wiki/Pre-emptive_multitasking
|
In mostUnixandUnix-like operating systems, theps(process status) program displays the currently-runningprocesses. The related Unix utilitytopprovides a real-time view of the running processes.
KolibriOSincludes an implementation of thepscommand.[1]Thepscommand has also been ported to theIBM ioperating system.[2]InWindows PowerShell,psis a predefinedcommand aliasfor theGet-Processcmdlet, which essentially serves the same purpose.
Users canpipelinepswith other commands, such aslessto view the process status output one page at a time:
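    ps -e | less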
Users can also utilize thepscommand in conjunction with thegrepcommand (see thepgrepandpkillcommands) to find information about a single process, such as its id:
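    # "firefox" stands in for any process name of interest
    ps -e | grep firefox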
The use ofpgrepsimplifies the syntax and avoids potential race conditions:
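    pgrep firefox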
To see every process running as root in user format:
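    ps -U root -u root u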
pshas many options. Onoperating systemsthat support theSUSandPOSIXstandards,pscommonly runs with the options-ef, where "-e" selectsevery process and "-f" chooses the "full" output format. Another common option on these systems is-l, which specifies the "long" output format.
Most systems derived fromBSDfail to accept the SUS and POSIX standard options because of historical conflicts. (For example, the "e" or "-e" option will displayenvironment variables.) On such systems,pscommonly runs with the non-standard optionsaux, where "a" lists all processes on aterminal, including those of other users, "x" lists all processes withoutcontrolling terminalsand "u" adds a column for the controlling user for each process. For maximum compatibility, there is no "-" in front of the "aux". "ps auxww" provides complete information about the process, including all parameters.
|
https://en.wikipedia.org/wiki/Ps_(Unix)
|
The Ehrenfest model (or dog–flea model) of diffusion was proposed by Tatiana and Paul Ehrenfest to explain the second law of thermodynamics.[1][2] The model considers N particles in two containers. Particles independently change container at a rate λ. If X(t) = i is defined to be the number of particles in one container at time t, then it is a birth–death process with transition rates q_{i,i+1} = (N − i)λ and q_{i,i−1} = iλ,
and equilibrium distribution \pi_i = 2^{-N}\binom{N}{i}.
Mark Kac proved in 1947 that if the initial system state is not equilibrium, then the entropy is monotonically increasing (H-theorem). This is a consequence of the convergence to the equilibrium distribution.
Consider that at the beginning all the particles are in one of the containers. It is expected that over time the number of particles in this container will approach N/2 and stabilize near that state (the containers will have approximately the same number of particles). However, from a mathematical point of view, a return to the initial state is possible (indeed almost sure). It follows from the mean recurrence theorem that even the expected time to return to the initial state is finite, namely 2^N. Using Stirling's approximation one finds that if we start at equilibrium (an equal number of particles in the containers), the expected time to return to equilibrium is asymptotically equal to \sqrt{\pi N/2}. If we assume that particles change containers at a rate of one per second, then in the particular case of N = 100 particles, starting at equilibrium the return to equilibrium is expected to occur within 13 seconds, while starting with 100 particles in one container and 0 in the other, the return to that state is expected to take about 4·10^22 years. This shows that while recurrence to the initial, highly disproportionate state is theoretically certain, it is unlikely ever to be observed.
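These figures follow from the mean recurrence theorem: for the discrete-time formulation in which one particle moves per step (here, per second), the expected return time to state i is the reciprocal of its stationary probability,

    m_i = \frac{1}{\pi_i} = \frac{2^N}{\binom{N}{i}},

so m_N = 2^N steps for the all-in-one-container state, while Stirling's approximation gives m_{N/2} \approx \sqrt{\pi N/2} for the balanced state.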
|
https://en.wikipedia.org/wiki/Ehrenfest_model
|
Theerlang(symbolE[1]) is adimensionless unitthat is used intelephonyas a measure ofoffered loador carried load on service-providing elements such as telephone circuits or telephone switching equipment. A singlecord circuithas the capacity to be used for 60 minutes in one hour. Full utilization of that capacity, 60 minutes of traffic, constitutes 1 erlang.[2]
Carried traffic in erlangs is the average number of concurrent calls measured over a given period (often one hour), while offered traffic is the traffic that would be carried if all call-attempts succeeded. How much offered traffic is carried in practice will depend on what happens to unanswered calls when all servers are busy.
TheCCITTnamed the international unit of telephone traffic the erlang in 1946 in honor ofAgner Krarup Erlang.[3][4]In Erlang's analysis of efficient telephone line usage, he derived the formulae for two important cases, Erlang-B and Erlang-C, which became foundational results inteletraffic engineeringandqueueing theory. His results, which are still used today, relate quality of service to the number of available servers. Both formulae take offered load as one of their main inputs (in erlangs), which is often expressed as call arrival rate times average call length.
A distinguishing assumption behind the Erlang B formula is that there is no queue, so that if all service elements are already in use then a newly arriving call will be blocked and subsequently lost. The formula gives the probability of this occurring. In contrast, the Erlang C formula provides for the possibility of an unlimited queue and it gives the probability that a new call will need to wait in the queue due to all servers being in use. Erlang's formulae apply quite widely, but they may fail when congestion is especially high causing unsuccessful traffic to repeatedly retry. One way of accounting for retries when no queue is available is the Extended Erlang B method.
When used to representcarried traffic, a value (which can be a non-integer such as 43.5) followed by "erlangs" represents the average number of concurrent calls carried by the circuits (or other service-providing elements), where that average is calculated over some reasonable period of time. The period over which the average is calculated is often one hour, but shorter periods (e.g., 15 minutes) may be used where it is known that there are short spurts of demand and a traffic measurement is desired that does not mask these spurts.
One erlang of carried traffic refers to a single resource being in continuous use, or two channels each being in use fifty percent of the time, and so on. For example, if an office has two telephone operators who are both busy all the time, that would represent two erlangs (2 E) of traffic; or a radio channel that is occupied continuously during the period of interest (e.g. one hour) is said to have a load of 1 erlang.
When used to describeoffered traffic, a value followed by "erlangs" represents the average number of concurrent calls that would have been carried if there were an unlimited number of circuits (that is, if the call-attempts that were made when all circuits were in use had not been rejected). The relationship between offered traffic and carried traffic depends on the design of the system and user behavior. Three common models are (a) callers whose call-attempts are rejected go away and never come back, (b) callers whose call-attempts are rejected try again within a fairly short space of time, and (c) the system allows users to wait in queue until a circuit becomes available.
A third measurement of traffic isinstantaneous traffic, expressed as a certain number of erlangs, meaning the exact number of calls taking place at a point in time. In this case, the number is a non-negative integer. Traffic-level-recording devices, such as moving-pen recorders, plot instantaneous traffic.
The concepts and mathematics introduced byAgner Krarup Erlanghave broad applicability beyond telephony. They apply wherever users arrive more or less at random to receive exclusive service from any one of a group of service-providing elements without prior reservation, for example, where the service-providing elements are ticket-sales windows, toilets on an airplane, or motel rooms. (Erlang's models do not apply where the service-providing elements are shared between several concurrent users or different amounts of service are consumed by different users, for instance, on circuits carrying data traffic.)
The goal of Erlang's traffic theory is to determine exactly how many service-providing elements should be provided in order to satisfy users, without wasteful over-provisioning. To do this, a target is set for thegrade of service(GoS) orquality of service(QoS). For example, in a system where there is no queuing, the GoS may be that no more than 1 call in 100 is blocked (i.e., rejected) due to all circuits being in use (a GoS of 0.01), which becomes the target probability of call blocking,Pb, when using the Erlang B formula.
There are several resulting formulae, includingErlang B,Erlang Cand the relatedEngset formula, based on different models of user behavior and system operation. These may each be derived by means of a special case ofcontinuous-time Markov processesknown as abirth–death process. The more recentExtended Erlang Bmethod provides a further traffic solution that draws on Erlang's results.
Offered traffic (in erlangs) is related to the call arrival rate, λ, and the average call-holding time (the average time of a phone call), h, by:

E = λh

provided that h and λ are expressed using the same units of time (seconds and calls per second, or minutes and calls per minute). For example, an arrival rate of 3 calls per minute with an average holding time of 3 minutes corresponds to an offered traffic of 9 erlangs.
The practical measurement of traffic is typically based on continuous observations over several days or weeks, during which the instantaneous traffic is recorded at regular, short intervals (such as every few seconds). These measurements are then used to calculate a single result, most commonly the busy-hour traffic (in erlangs). This is the average number of concurrent calls during a given one-hour period of the day, where that period is selected to give the highest result. (This result is called the time-consistent busy-hour traffic). An alternative is to calculate a busy-hour traffic value separately for each day (which may correspond to slightly different times each day) and take the average of these values. This generally gives a slightly higher value than the time-consistent busy-hour value.
Where the existing busy-hour carried traffic,Ec, is measured on an already overloaded system, with a significant level of blocking, it is necessary to take account of the blocked calls in estimating the busy-hour offered trafficEo(which is the traffic value to be used in the Erlang formulae). The offered traffic can be estimated byEo=Ec/(1 −Pb). For this purpose, where the system includes a means of counting blocked calls and successful calls,Pbcan be estimated directly from the proportion of calls that are blocked. Failing that,Pbcan be estimated by usingEcin place ofEoin the Erlang formula and the resulting estimate ofPbcan then be used inEo=Ec/(1 −Pb)to provide a first estimate ofEo.
Another method of estimatingEoin an overloaded system is to measure the busy-hour call arrival rate,λ(counting successful calls and blocked calls), and the average call-holding time (for successful calls),h, and then estimateEousing the formulaE=λh.
For a situation where the traffic to be handled is completely new traffic, the only choice is to try to model expected user behavior. For example, one could estimate the active user population, N, the expected level of use, U (number of calls/transactions per user per day), the busy-hour concentration factor, C (proportion of daily activity that will fall in the busy hour), and the average holding time/service time, h (expressed in minutes). A projection of busy-hour offered traffic would then be Eo = (NUC/60)h erlangs. (The division by 60 translates the busy-hour call/transaction arrival rate into a per-minute value, to match the units in which h is expressed.)
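This projection can be sketched in Python (a minimal illustration; the function and parameter names are not from the original source):

def offered_traffic(N, U, C, h_minutes):
    """Projected busy-hour offered traffic in erlangs.
    N: active users; U: calls/transactions per user per day;
    C: fraction of daily activity falling in the busy hour;
    h_minutes: average holding/service time in minutes."""
    calls_per_minute = N * U * C / 60.0  # busy-hour arrival rate, per minute
    return calls_per_minute * h_minutes  # E = arrival rate x holding time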
TheErlang B formula(orErlang-Bwith a hyphen), also known as theErlang loss formula, is a formula for theblocking probabilitythat describes the probability of call losses for a group of identical parallel resources (telephone lines, circuits, traffic channels, or equivalent), sometimes referred to as anM/M/c/c queue.[5]It is, for example, used to dimension a telephone network's links. The formula was derived byAgner Krarup Erlangand is not limited to telephone networks, since it describes a probability in a queuing system (albeit a special case with a number of servers but no queueing space for incoming calls to wait for a free server). Hence, the formula is also used in certain inventory systems with lost sales.
The formula applies under the condition that an unsuccessful call, made while the line is busy, is neither queued nor retried, but instead is lost and leaves the system. It is assumed that call attempts arrive following a Poisson process, so call arrival instants are independent. Further, it is assumed that message lengths (holding times) are exponentially distributed (a Markovian system), although the formula turns out to apply under general holding-time distributions.
The Erlang B formula assumes an infinite population of sources (such as telephone subscribers), which jointly offer traffic to N servers (such as telephone lines). The rate expressing the frequency at which new calls arrive, λ (birth rate, traffic intensity, etc.), is constant and does not depend on the number of active sources.
The Erlang B formula calculates the blocking probability of a bufferless loss system, in which a request that is not served immediately is aborted, so that no requests ever wait in a queue. Blocking occurs when a new request arrives at a time when all available servers are busy. The formula also assumes that blocked traffic is cleared and does not return.
The formula provides the GoS (grade of service), which is the probability Pb that a new call arriving at the resource group is rejected because all resources (servers, lines, circuits) are busy. It is written B(E, m), where E is the total offered traffic in erlangs offered to m identical parallel resources (servers, communication channels, traffic lanes):

B(E,m) = \frac{E^{m}/m!}{\sum_{k=0}^{m} E^{k}/k!}
The erlang is a dimensionless unit of load, calculated as the mean arrival rate, λ, multiplied by the mean call-holding time, h. The unit must be dimensionless for Little's law to be dimensionally consistent.
This may be expressed recursively[6] as follows, in a form that is used to simplify the calculation of tables of the Erlang B formula:

B(E,0) = 1, \qquad B(E,m) = \frac{E\,B(E,m-1)}{m + E\,B(E,m-1)} \quad (m = 1, 2, \ldots)
Typically, instead of B(E, m) the inverse 1/B(E, m) is calculated in numerical computation in order to ensure numerical stability:

\frac{1}{B(E,0)} = 1, \qquad \frac{1}{B(E,m)} = 1 + \frac{m}{E}\cdot\frac{1}{B(E,m-1)} \quad (m = 1, 2, \ldots)
The recursive form is derivable from the non-recursive form by repeated substitution.[7]
or a Python version:
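The snippet below is a minimal sketch using the numerically stable inverse recursion above (the function name erlang_b and the dimensioning loop are illustrative, not taken from the original source):

def erlang_b(E, m):
    """Erlang B blocking probability B(E, m), computed via 1/B for stability."""
    inv_b = 1.0  # 1/B(E, 0)
    for k in range(1, m + 1):
        inv_b = 1.0 + (k / E) * inv_b
    return 1.0 / inv_b

# Example: dimension a link for E = 20 erlangs at a GoS target of Pb <= 0.01
E, target = 20.0, 0.01
m = 1
while erlang_b(E, m) > target:
    m += 1
# m now holds the smallest number of circuits meeting the blocking target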
The Erlang B formula is decreasing and convex in m.[8] It requires that call arrivals can be modeled by a Poisson process, which is not always a good match; however, the formula is valid for any statistical distribution of call holding times with a finite mean.
It applies to traffic transmission systems that do not buffer traffic.
More modern examples than POTS where Erlang B is still applicable are optical burst switching (OBS) and several current approaches to optical packet switching (OPS).
Erlang B was developed as a trunk sizing tool for telephone networks with holding times in the minutes range, but being a mathematical equation it applies on any time-scale.
Extended Erlang B differs from the classic Erlang-B assumptions by allowing for a proportion of blocked callers to try again, causing an increase in offered traffic from the initial baseline level. It is an iterative calculation rather than a formula and adds an extra parameter, the recall factor $R_{\text{f}}$, which defines the recall attempts.[9]
The steps in the process are as follows.[10] It starts at iteration $k=0$ with a known initial baseline level of traffic $E_{0}$, which is successively adjusted to calculate a sequence of new offered traffic values $E_{k+1}$, each of which accounts for the recalls arising from the previously calculated offered traffic $E_{k}$.
Once a satisfactory value of $E$ has been found, the blocking probability $P_{\text{b}}$ and the recall factor can be used to calculate the probability that all of a caller's attempts are lost, not just their first call but also any subsequent retries.
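One common formulation of this fixed-point iteration can be sketched in Python, reusing the erlang_b function above (the step structure is a reasonable reading of the method, not a verbatim transcription of the source):

def extended_erlang_b(E0, m, recall_factor, tol=1e-9, max_iter=1000):
    """Iterate the offered traffic until the recall-adjusted load converges."""
    E = E0
    for _ in range(max_iter):
        pb = erlang_b(E, m)                    # blocking at the current load
        E_next = E0 + recall_factor * E * pb   # baseline plus retried blocked traffic
        if abs(E_next - E) < tol:
            break
        E = E_next
    return E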
The Erlang C formula expresses the probability that an arriving customer will need to queue (as opposed to immediately being served).[11] Like the Erlang B formula, Erlang C assumes an infinite population of sources, which jointly offer traffic of E erlangs to m servers. However, if all the servers are busy when a request arrives from a source, the request is queued. An unlimited number of requests may be held in the queue in this way simultaneously. The formula calculates the probability that offered traffic must queue, assuming that blocked calls stay in the system until they can be handled. It is used to determine the number of agents or customer service representatives needed to staff a call centre for a specified desired probability of queuing. However, the Erlang C formula assumes that callers never hang up while in queue, which leads it to predict that more agents should be used than are really needed to maintain a desired service level.
P_{W} = \frac{\dfrac{E^{m}}{m!}\,\dfrac{m}{m-E}}{\sum_{i=0}^{m-1}\dfrac{E^{i}}{i!} + \dfrac{E^{m}}{m!}\,\dfrac{m}{m-E}}

where E is the total offered traffic in erlangs, m is the number of servers, and P_W is the probability that an arriving customer has to wait (valid for E < m, so that the queue does not grow without bound).
It is assumed that the call arrivals can be modeled by aPoisson processand that call holding times are described by anexponential distribution, therefore the Erlang C formula follows from the assumptions of theM/M/c queuemodel.
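A minimal Python sketch follows, using the standard identity that derives Erlang C from Erlang B (the erlang_b function from above is reused; the names and the staffing loop are illustrative):

def erlang_c(E, m):
    """Probability that an arriving call must wait, valid for E < m."""
    b = erlang_b(E, m)
    return m * b / (m - E * (1.0 - b))

# Example: agents needed so that at most 20% of callers wait, for E = 15 erlangs
E, target = 15.0, 0.20
m = int(E) + 1            # a stable queue requires m > E
while erlang_c(E, m) > target:
    m += 1
# m now holds the required number of agents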
When Erlang developed the Erlang-B and Erlang-C traffic equations, he built them on a set of assumptions. These assumptions are accurate under most conditions; however, in the event of extremely high traffic congestion, Erlang's equations fail to accurately predict the correct number of circuits required because of re-entrant traffic. This is termed a high-loss system, in which congestion breeds further congestion at peak times. In such cases, it is first necessary for many additional circuits to be made available so that the high loss can be alleviated. Once this action has been taken, congestion will return to reasonable levels and Erlang's equations can then be used to determine exactly how many circuits are really required.[12]
An example of a situation that could cause such a high-loss system to develop would be a TV-based advertisement announcing a particular telephone number to call at a specific time. In this case, a large number of people would simultaneously phone the number provided. If the service provider had not catered for this sudden peak demand, extreme traffic congestion would develop and Erlang's equations could not be used.[12]
|
https://en.wikipedia.org/wiki/Erlang_unit
|
Line managementrefers to themanagementofemployeeswho are directly involved in the production or delivery ofproducts,goodsand/orservices. As the interface between anorganisationand its front-lineworkforce, line management represents the lowest level of management within an organisationalhierarchy(as distinct fromtop/executive/senior managementandmiddle management).[1]
A line manager is an employee who directly manages other employees and day-to-day operations while reporting to a higher-ranking manager. In some retail businesses, they may have titles such as head cashier or department supervisor.[2][3][4][5] Related job titles are supervisor, section leader, foreperson, office manager and team leader.[1] They are charged with directing employees and ensuring that the corporate objectives in a specific functional area or line of business are met.[1]
Despite the name, line managers are usually considered part of the organization's workforce and not part of its management class.
Line managers are tasked with implementing organizational policies through direct supervision of staff and ensuring alignment with business objectives and core values.
Key responsibilities include:
Typical duties may involve:
Line management also plays a role in facilitating organizational change, often in collaboration with senior management.[6]Additionally, line managers are increasingly involved in functions traditionally managed by specialized departments, such ashuman resources,finance, andrisk management. In many organizations, line managers are directly responsible for operational risk and the implementation of HR policies.[7][8][9]
|
https://en.wikipedia.org/wiki/Line_management
|
Project production management(PPM)[1][2]is the application ofoperations management[2][3]to the delivery of capital projects. The PPM framework is based on aprojectas aproduction systemview,[1][2][3]in which a project transforms inputs (raw materials, information, labor, plant & machinery) into outputs (goods and services).
The knowledge that forms the basis of PPM originated in the discipline of industrial engineering during the Industrial Revolution. During this time, industrial engineering matured and found application in many areas, such as military planning and logistics in both the First and Second World Wars, and in manufacturing systems. As a coherent body of knowledge began to form, industrial engineering evolved into various scientific disciplines including operations research, operations management and queueing theory, amongst other areas of focus. Project Production Management (PPM) is the application of this body of knowledge to the delivery of capital projects.
Project management, as defined by theProject Management Institute,[1][2]specifically excludesoperations managementfrom its body of knowledge,[3]on the basis that projects are temporary endeavors with a beginning and an end, whereas operations refer to activities that are either ongoing or repetitive. However, by looking at a large capital project as a production system, such as what is encountered in construction,[4]it is possible to apply the theory and associated technical frameworks from operations research, industrial engineering and queuing theory to optimize, plan, control and improve project performance.
For example, Project Production Management applies tools and techniques typically used in manufacturing management, such as those described by Philip M. Morse[1] or in Factory Physics,[2][5] to assess the impact of variability and inventory on project performance. Any variability in a production system degrades its performance; by understanding which variability is detrimental to the business and which is beneficial, steps can be implemented to reduce the detrimental variability. After mitigation steps are put in place, the impact of any residual variability can be addressed by allocating buffers at select points in the project production system – a combination of capacity, inventory and time.
Scientific and engineering disciplines have contributed many mathematical methods for project planning and scheduling, most notably linear and dynamic programming, yielding techniques such as the critical path method (CPM) and the program evaluation and review technique (PERT). The application of engineering disciplines, particularly operations research, industrial engineering and queueing theory, has found much use in manufacturing and factory production systems. Factory Physics is an example in which these scientific principles are described as forming a framework for manufacturing and production management. Just as Factory Physics applies scientific principles to construct a framework for manufacturing and production management, Project Production Management applies the very same operations principles to the activities in a project, covering an area that has conventionally been out of scope for project management.[3]
Modernproject managementtheory and techniques started withFrederick Taylorand Taylorism/scientific managementat the beginning of the 20th century, with the advent of mass manufacturing. It was refined further in the 1950s with techniques such ascritical path method(CPM)[1][2]andprogram evaluation and review technique(PERT).[5][6]Use of CPM and PERT became more common as the computer revolution progressed. As the field of project management continued to grow, the role of the project manager was created and certifying organizations such as the Project Management Institute (PMI) emerged. Modern project management has evolved into a broad variety of knowledge areas described in the Guide to the Project Management Body of Knowledge (PMBOK).[3]
Operations management[7][8][9][10](related to the fields ofproduction management,operations researchandindustrial engineering) is a field of science that emerged from the modern manufacturing industry and focuses on modeling and controlling actual work processes. The practice is based upon defining and controlling production systems, which typically consist of a series of inputs, transformational activities,inventoryand outputs. Over the last 50 years, project management and operations management have been considered separate fields of study and practice.
PPM applies the theory and results of the various disciplines known as operations management, operations research, queueing theory and industrial engineering to the management and execution of projects. By viewing a project as a production system, the delivery of capital projects can be analyzed for the impact of variability. The effects of variability can be summarized by the VUT equation (specifically, Kingman's formula for the G/G/1 queue). By using a combination of buffers – capacity, inventory and time – the impact of variability on project execution performance can be minimized.
A set of key results used to analyze and optimize the work in projects was originally articulated by Philip Morse, considered the father of operations research in the U.S., and summarized in his seminal volume.[8] In introducing its framework for manufacturing management, Factory Physics summarizes these results:
There are key mathematical models that describe the relationships between buffers and variability. Little's law[11] – named after academic John Little – describes the relationship between throughput, cycle time and work-in-process (WIP) or inventory. The Cycle Time Formula[11] summarizes how much time a set of tasks at a particular point in a project takes to execute. Kingman's formula, also known as the VUT equation,[11] summarizes the impact of variability.
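The two central relationships can be sketched in Python (a minimal illustration; function and parameter names are not from the original source, and Kingman's formula is an approximation for a single-server queue with utilization below 1):

def little_wip(throughput, cycle_time):
    """Little's law: work-in-process = throughput x cycle time."""
    return throughput * cycle_time

def kingman_wait(utilization, ca2, cs2, mean_service_time):
    """Kingman's (VUT) approximation of mean queueing wait:
    Variability x Utilization x Time.
    ca2, cs2: squared coefficients of variation of arrivals and service."""
    v = (ca2 + cs2) / 2.0
    u = utilization / (1.0 - utilization)
    return v * u * mean_service_time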
The following academic journals publish papers pertaining to Operations Management issues:
|
https://en.wikipedia.org/wiki/Project_production_management
|
Queue areasare places in which people queue (first-come, first-served) for goods or services. Such a group of people is known as aqueue(Britishusage) orline(Americanusage), and the people are said to be waiting or standingin a queueorin line, respectively. (In theNew York Cityarea, the phraseon lineis often used in place ofin line.)[1]Occasionally, both the British and American terms are combined to form the term "queue line".[2][3]
Examples include checking out groceries or other goods collected in a self-service shop, waiting to be served in a shop without self-service, and waiting at an ATM, at a ticket desk, for a city bus, or at a taxi stand.
Queueing[4]is a phenomenon in a number of fields, and has been extensively analysed in the study ofqueueing theory. Ineconomics, queueing is seen as one way torationscarcegoods and services.
The first written description of people standing in line is found in an 1837 book,The French Revolution: A HistorybyThomas Carlyle.[5]Carlyle described what he thought was a strange sight: people standing in an orderly line to buy bread from bakers around Paris.[5]
Queues can be found in railway stations to book tickets, at bus stops for boarding and at temples.[6][7][8]
Queues are generally found at transportation terminals wheresecurityscreenings are conducted.
Large stores and supermarkets may have dozens of separate queues, but this can cause frustration, as different lines tend to be handled at different speeds; some people are served quickly, while others may wait for longer periods of time. Sometimes two people who are together split up and each waits in a different line; once it is determined which line is faster, the one in the slower line joins the other. Another arrangement is for everyone to wait in a single line;[9]a person leaves the line each time a service point opens up. This is a common setup inbanksandpost offices.
Organized queue areas are commonly found atamusement parks. Each ride can accommodate a fixed number of guests that can be served at any given time (which is referred to as the ride’s operational capacity), so there has to be some control over additional guests who are waiting. This led to the development of formalized queue areas—areas in which the lines of people waiting to board the rides are organized by railings, and may be given shelter from the elements with a roof over their heads, inside a climate-controlled building or with fans and misting devices. In some amusement parks –Disney theme parksbeing a prime example – queue areas can be elaborately decorated, with holding areas fosteringanticipation, thus shortening the perceived wait for people in the queue by giving them something interesting to look at as they wait, or the perception that they have arrived at the threshold of the attraction.
When designing queues, planners attempt to make the wait as pleasant and as simple as possible.[citation needed][10]They employ several strategies to achieve this, including:
People experience "occupied" time as shorter than "unoccupied" time, and generally overestimate the amount of time waited by around 36%.[11]
The technique of giving people an activity to distract them from a wait has been used to reduce complaints of delays at:[11]
Other techniques to reduce queueing anxiety include:[11]
Cutting in line, also known as queue-jumping, can generate a strong negative response, depending on the local cultural norms.
Physical queueing is sometimes replaced by virtual queueing. In awaiting roomthere may be a system whereby the queuer asks and remembers where their place is in the queue, or reports to a desk and signs in, or takes a ticket with a number from a machine. These queues typically are found atdoctors' offices,hospitals,town halls,social securityoffices,labor exchanges, theDepartment of Motor Vehicles, the immigration departments, freeinternet accessin the state or council libraries,banksorpost officesand call centres. Especially in theUnited Kingdom, tickets are taken to form a virtual queue at delicatessens and children's shoe shops. In some countries such asSweden, virtual queues are also common in shops andrailway stations. A display sometimes shows the number that was last called for service.
Restaurantshave come to employ virtual queueing techniques with the availability of application-specific pagers, which alert those waiting that they should report to the host to be seated. Another option used at restaurants is to assign customers a confirmed return time, basically a reservation issued on arrival.
Virtual queueing apps are available that allow customers to view a business's virtual queue status and take a virtual queue number remotely. The app can then be used to get updates on the customer's position in the queue.
A substitute or alternative activity may be provided for people to participate in while waiting to be called, which reduces the perceived waiting time and the probability that the customer will abort their visit. For example, a busy restaurant might seat waiting customers at a bar. An outdoor attraction with long virtual queues might have a side marquee selling merchandise or food. The alternate activity may provide the organisation with an opportunity to generate additional revenue from the waiting customers.[12]
All of the above methods, however, suffer from the same drawback: the person arrives at the location only to find out that they need to wait. Recently, queues atDMVs,[13]colleges, restaurants,[14]healthcare institutions,[15]government offices[14]and elsewhere have begun to be replaced by mobile queues or queue-ahead, whereby the person queuing uses their phone, the internet, a kiosk or another method to enter a virtual queue, optionally prior to arrival, is free to roam during the wait, and then gets paged at their mobile phone when their turn approaches. This has the advantage of allowing users to find out the wait forecast and get in the queue before arriving, roaming freely and then timing their arrival to the availability of service. This has been shown to extend the patience of those in the queue and reduce no-shows.[14]
|
https://en.wikipedia.org/wiki/Queue_area
|
Intelecommunicationsandcomputer engineering, thequeuing delayis the time a job waits in aqueueuntil it can be executed. It is a key component ofnetwork delay. In a switched network, queuing delay is the time between the completion of signaling by the call originator and the arrival of a ringing signal at the call receiver. Queuing delay may be caused by delays at the originating switch, intermediate switches, or the call receiver servicing switch. In a data network, queuing delay is the sum of the delays between the request for service and the establishment of a circuit to the called data terminal equipment (DTE). In a packet-switched network, queuing delay is the sum of the delays encountered by a packet between the time of insertion into the network and the time of delivery to the address.[1]
This term is most often used in reference torouters. Whenpacketsarrive at a router, they have to be processed and transmitted. A router can only process one packet at a time. If packets arrive faster than the router can process them (such as in aburst transmission) the router puts them into the queue (also called thebuffer) until it can get around to transmitting them. Delay can also vary from packet to packet so averages and statistics are usually generated when measuring and evaluating queuing delay.[2]
As a queue begins to fill up due to traffic arriving faster than it can be processed, the amount of delay a packet experiences going through the queue increases. The speed at which the contents of a queue can be processed is a function of the transmission rate of the facility. This leads to the classic delay curve. The average delay any given packet is likely to experience is given by the formula 1/(μ-λ) where μ is the number of packets per second the facility can sustain and λ is the average rate at which packets are arriving to be serviced.[3]This formula can be used when no packets are dropped from the queue.
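In Python, this average-delay formula is a one-liner (a sketch; names are illustrative):

def average_delay(mu, lam):
    """Average delay per packet, where mu is the packets/s the facility can
    sustain and lam is the average arrival rate; requires lam < mu."""
    return 1.0 / (mu - lam)

For example, average_delay(1000, 900) gives 0.01 s, and the result grows without bound as the arrival rate approaches the service rate, tracing the classic delay curve.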
The maximum queuing delay is proportional to buffer size. The longer the line of packets waiting to be transmitted, the longer the average waiting time is. The router queue of packets waiting to be sent also introduces a potential cause of packet loss. Since the router has a finite amount of buffer memory to hold the queue, a router that receives packets at too high a rate may experience a full queue. In this case, the router has no other option than to simply discard excess packets.
When the transmission protocol uses the dropped-packet symptom of filled buffers to regulate its transmit rate, as the Internet's TCP does, bandwidth is fairly shared at near theoretical capacity with minimal network congestion delays. Absent this feedback mechanism, the delays become both unpredictable and rise sharply (a symptom also seen as freeways approach capacity: metered onramps are the most effective solution there, just as TCP's self-regulation is the most effective solution when the traffic consists of packets instead of cars). This result is both hard to model mathematically and quite counterintuitive to people who lack experience with mathematics or real networks. Failing to drop packets, choosing instead to buffer an ever-increasing number of them, produces bufferbloat.
In Kendall's notation, the M/M/1/K queuing model, where K is the size of the buffer, may be used to analyze the queuing delay in a specific system. This model, rather than the basic formula above, should be used when packets can be dropped from the queue. The M/M/1/K queuing model is the most basic and important queuing model for network analysis.[4]
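For instance, the probability that an arriving packet is discarded because the buffer is full follows from the standard M/M/1/K state probabilities (a sketch; names are illustrative):

def mm1k_loss_probability(lam, mu, K):
    """P(an arriving packet finds the system full) in an M/M/1/K queue."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (K + 1)  # all K+1 states are equally likely
    return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))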
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22. (in support of MIL-STD-188).
|
https://en.wikipedia.org/wiki/Queueing_delay
|
The Queuing Rule of Thumb (QROT) is a mathematical formula, known as the queuing constraint equation when it is used to find an approximation of servers required to service a queue. The formula is written as an inequality relating the number of servers (s), total number of service requestors (N), service time (r), and the maximum time to empty the queue (T):

s > \frac{Nr}{T}
QROT serves as a rough heuristic to address queue problems.[2]Compared to standard queuing formulas, it is simple enough to compute the necessary number of servers without involvingprobabilityorqueueing theory. Therule of thumbis therefore more practical to use in many situations.[1]
A derivation of the QROT formula follows. The arrival rate is the ratio of the total number of customers N to the maximum time needed to finish the queue T: λ = N/T.
The service rate is the reciprocal of the service time r: μ = 1/r.
It is convenient to consider the ratio of the arrival rate to the service rate: ρ = λ/μ.
Assuming s servers, the utilization U = ρ/s of the queuing system must not be larger than 1.
Combining the first three equations gives ρ = λ/μ = Nr/T. Combining this and the fourth equation yields U = ρ/s = Nr/(Ts) < 1.
Simplifying, the formula for the Queuing Rule of Thumb is s > Nr/T.
The Queuing Rule of Thumb assists queue management in resolving queue problems by relating the number of servers, the total number of customers, the service time, and the maximum time needed to finish the queue. To make a queuing system more efficient, these values can be adjusted with regard to the rule of thumb.[3]
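In code, the rule reduces to a single rounding step (a sketch; the function name is illustrative):

import math

def qrot_servers(N, r, T):
    """Smallest whole number of servers satisfying s > N*r/T
    (if N*r/T is already a whole number, strictness requires one more)."""
    return math.ceil(N * r / T)

# e.g. qrot_servers(1000, 45, 3600) == 13, matching the first example below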
The following examples illustrate how the rule may be used.
Conference lunches are usually self-service. Each serving table has 2 sides where people can pick up their food. If each of 1,000 attendees needs 45 seconds to do so, how many serving tables must be provided so that lunch can be served in an hour?[2]
Solution: Given r = 45, N = 1000, T = 3600, we use the rule of thumb to get s: s > Nr/T = (1000 × 45)/3600 = 12.5. There are two sides of the table that can be used, so the number of tables needed is 12.5/2 = 6.25. We round this up to a whole number since the number of servers must be discrete. Thus, 7 serving tables must be provided.[2]
A school of 10,000 students must set certain days for student registration. One working day is 8 hours. Each student needs about 36 seconds to be registered. How many days are needed to register all students?[2]
Solution: Given s = 1, N = 10,000, r = 36, the rule of thumb yields T: T > Nr/s = 10,000 × 36 = 360,000 seconds. Given that one working day is 8 hours (28,800 seconds), the number of registration days needed is ⌈360,000/28,800⌉ = 13 days.[2]
During the peak hour of the morning about 4500 cars drop off their children at an elementary school. Each drop-off requires about 60 seconds. Each car requires about 6 meters to stop and maneuver. How much space is needed for the minimum drop off line?[2]
Solution: Given N = 4500, T = 60, r = 1, the rule of thumb yields s: s > Nr/T = (4500 × 1)/60 = 75. Given that the space for each car is 6 meters, the line should be at least 75 × 6 = 450 meters long.[2]
|
https://en.wikipedia.org/wiki/Queuing_Rule_of_Thumb
|
Random early detection(RED), also known asrandom early discardorrandom early drop, is aqueuing disciplinefor anetwork schedulersuited forcongestion avoidance.[1]
In the conventionaltail dropalgorithm, arouteror othernetwork componentbuffers as many packets as it can, and simply drops the ones it cannot buffer. If buffers are constantly full, the network iscongested. Tail drop distributes buffer space unfairly among traffic flows. Tail drop can also lead toTCP global synchronizationas allTCPconnections "hold back" simultaneously, and then step forward simultaneously. Networks become under-utilized and flooded—alternately, in waves.
RED addresses these issues by pre-emptively dropping packets before the buffer becomes completely full. It uses predictive models to decide which packets to drop. It was invented in the early 1990s bySally FloydandVan Jacobson.[2]
RED monitors the average queue size and drops (or marks when used in conjunction withECN) packets based on statisticalprobabilities. If the buffer is almost empty, then all incoming packets are accepted. As the queue grows, the probability for dropping an incoming packet grows too. When the buffer is full, the probability has reached 1 and all incoming packets are dropped.
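A simplified sketch of the per-packet drop decision in Python (it omits the exponentially weighted averaging of the queue length and the packet-count correction of the full algorithm; parameter names are illustrative):

import random

def red_should_drop(avg_queue, min_th, max_th, max_p):
    """Classic RED: accept below min_th, always drop at or above max_th,
    and drop with linearly increasing probability in between."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p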
RED is more fair than tail drop, in the sense that it does not possess a bias against bursty traffic that uses only a small portion of the bandwidth. The more a host transmits, the more likely it is that its packets are dropped as the probability of a host's packet being dropped is proportional to the amount of data it has in a queue. Early detection helps avoid TCP global synchronization.
According to Van Jacobson, "there are not one, but two bugs in classic RED."[3]Improvements to the algorithm were developed, and a draft paper[4]was prepared, but the paper was never published, and the improvements were not widely disseminated or implemented. There has been some work in trying to finish off the research and fix the bugs.[3]
Pure RED does not accommodatequality of service(QoS) differentiation.Weighted RED(WRED) and RED with In and Out (RIO)[5]provide early detection with QoS considerations.
In weighted RED you can have different probabilities for different priorities (IP precedence,DSCP) and/or queues.[6]
The adaptive RED or active RED (ARED) algorithm[7]infers whether to make RED more or less aggressive based on the observation of the average queue length. If the average queue length oscillates aroundminthreshold then early detection is too aggressive. On the other hand, if the average queue length oscillates aroundmaxthreshold then early detection is being too conservative. The algorithm changes the probability according to how aggressively it senses it has been discarding traffic.
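That adaptation can be sketched as an additive-increase/multiplicative-decrease rule (the target band and constants here follow common descriptions of ARED and should be treated as illustrative):

def ared_adapt(avg_queue, min_th, max_th, max_p, alpha=0.01, beta=0.9):
    """Nudge max_p so the average queue settles between the thresholds."""
    low = min_th + 0.4 * (max_th - min_th)
    high = min_th + 0.6 * (max_th - min_th)
    if avg_queue > high and max_p < 0.5:
        max_p += alpha        # too conservative: discard more aggressively
    elif avg_queue < low and max_p > 0.01:
        max_p *= beta         # too aggressive: back off
    return max_p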
See Srikant[8]for an in-depth account on these techniques and their analysis.
The robust random early detection (RRED) algorithm was proposed to improve TCP throughput against denial-of-service (DoS) attacks, particularly low-rate denial-of-service (LDoS) attacks. Experiments have confirmed that existing RED-like algorithms are notably vulnerable under LDoS attacks due to the oscillating TCP queue size caused by the attacks.[9] The RRED algorithm can significantly improve the performance of TCP under low-rate denial-of-service attacks.[9]
|
https://en.wikipedia.org/wiki/Random_early_detection
|
Traffic congestionis a condition in transport that is characterized by slower speeds, longer trip times, and increased vehicularqueueing. Traffic congestion on urban road networks has increased substantially since the 1950s, resulting in many of the roads becoming obsolete.[2]When traffic demand is great enough that the interaction between vehicles slows the traffic stream, this results in congestion. While congestion is a possibility for anymode of transportation, this article will focus on automobile congestion on public roads. Mathematically, traffic is modeled as a flow through a fixed point on the route, analogously tofluid dynamics.
As demand approaches the capacity of a road (or of the intersections along the road), extreme traffic congestion sets in. When vehicles are fully stopped for periods of time, this is known as atraffic jam[3][4]or (informally) atraffic snarl-up[5][6]or atailback.[7]Drivers can become frustrated and engage inroad rage. Drivers and driver-focused road planning departments commonly propose to alleviate congestion by adding another lane to the road. This is ineffective: increasing road capacityinduces more demandfor driving.
Traffic congestion occurs when a volume of traffic generates demand for space greater than the available street capacity; this point is commonly termedsaturation. Several specific circumstances can cause or aggravate congestion; most of them reduce the capacity of a road at a given point or over a certain length, or increase the number of vehicles required for a given volume of people or goods. About half of U.S. traffic congestion is recurring, and is attributed to sheer volume of traffic; most of the rest is attributed to traffic incidents, road work and weather events.[10][11]In terms of traffic operation, rainfall reduces traffic capacity and operating speeds, thereby resulting in greater congestion and road network productivity loss.
Individual incidents such as crashes or even a single car braking heavily in a previously smooth flow may cause ripple effects, acascading failure, which then spread out and create a sustained traffic jam when, otherwise, the normal flow might have continued for some time longer.[12]
People often work and live in different parts of the city. Manyworkplacesare located in acentral business districtaway fromresidential areas, resulting in workerscommuting. According to a 2011 report published by theUnited States Census Bureau, a total of 132.3 million people in the United States commute between their work and residential areas daily.[13]
People may need to move about within the city to obtain goods and services, for instance to purchase goods or attend classes in a different part of the city. Brussels, a Belgian city with a strong service economy, has some of the worst traffic congestion in the world; drivers there wasted 74 hours in traffic in 2014.
Congested roads can be seen as an example of thetragedy of the commons. Because roads in most places are free at the point of usage, there is little financial incentive for drivers not to over-use them, up to the point where traffic collapses into a jam, when demand becomes limited byopportunity cost.Privatization of highwaysandroad pricinghave both been proposed as measures that may reduce congestion through economic incentives and disincentives[citation needed]. Congestion can also happen due to non-recurring highway incidents, such as acrashorroadworks, which may reduce the road's capacity below normal levels.
EconomistAnthony Downsargues thatrush hourtraffic congestion is inevitable because of the benefits of having a relativelystandard work day[citation needed]. In acapitalisteconomy, goods can be allocated either by pricing (ability to pay) or by queueing (first-come first-served); congestion is an example of the latter. Instead of the traditional solution of making the "pipe" large enough to accommodate the total demand for peak-hour vehicle travel (a supply-side solution), either by widening roadways or increasing "flow pressure" viaautomated highway systems, Downs advocates greater use ofroad pricingto reduce congestion (a demand-side solution, effectively rationing demand), in turn putting the revenues generated therefrom intopublic transportationprojects.
A 2011 study inThe American Economic Reviewindicates that there may be a "fundamental law of road congestion." The researchers, from theUniversity of Torontoand theLondon School of Economics, analyzed data from the U.S. Highway Performance and Monitoring System for 1983, 1993 and 2003, as well as information on population, employment, geography, transit, and political factors. They determined that the number of vehicle-kilometers traveled (VKT) increases in direct proportion to the available lane-kilometers of roadways. The implication is that building new roads and widening existing ones only results in additional traffic that continues to rise until peak congestion returns to the previous level.[14][15]
Qualitative classification of traffic is often done in the form of a six-letter A–Flevel of service(LOS) scale defined in theHighway Capacity Manual, a US document used (or used as a basis for national guidelines) worldwide. While this system generally uses delay as the basis for its measurements, the particular measurements and statistical methods vary depending on the facility being described. For instance, while the percent time spent following a slower-moving vehicle figures into the LOS for a rural two-lane road, the LOS at an urban intersection incorporates such measurements as the number of drivers forced to wait through more than one signal cycle.[16]
Another classification schema of traffic congestion is associated with some common spatiotemporal features of traffic congestion found in measured traffic data. Common spatiotemporal empirical features of traffic congestion are those features which are qualitatively the same for different highways in different countries, measured over years of traffic observations. Common features of traffic congestion are independent of weather, road conditions and road infrastructure, vehicular technology, driver characteristics, time of day, etc. Examples of common features of traffic congestion are the features [J] and [S] for, respectively, the wide moving jam and synchronized flow traffic phases found in Boris Kerner's three-phase traffic theory. The common features of traffic congestion can be reconstructed in space and time with the use of the ASDA and FOTO models.
Some traffic engineers have attempted to apply the rules offluid dynamicsto traffic flow, likening it to the flow of a fluid in a pipe. Congestion simulations and real-time observations have shown that in heavy but free flowing traffic, jams can arise spontaneously, triggered by minor events ("butterfly effects"), such as an abrupt steering maneuver by a single motorist. Traffic scientists liken such a situation to the sudden freezing ofsupercooled fluid.[20]
Because of the poor correlation of theoretical models to actual observed traffic flows, transportation planners and highway engineers attempt toforecast traffic flowusing empirical models. Their working traffic models typically use a combination of macro-, micro- and mesoscopic features, and may add matrixentropyeffects, by "platooning" groups of vehicles and by randomizing the flow patterns within individual segments of the network. These models are then typically calibrated by measuring actual traffic flows on the links in the network, and the baseline flows are adjusted accordingly.
A team of MIT mathematicians has developed a model that describes the formation of "phantom jams", in which small disturbances (a driver hitting the brake too hard, or getting too close to another car) in heavy traffic can become amplified into a full-blown, self-sustaining traffic jam. Key to the study is the realization that the mathematics of such jams, which the researchers call "jamitons", are strikingly similar to the equations that describe detonation waves produced by explosions, says Aslan Kasimov, lecturer in MIT's Department of Mathematics. That discovery enabled the team to solve traffic-jam equations that were first theorized in the 1950s.[21]
Traffic congestion has a number of negative effects:
Road rageis aggressive or angry behavior by a driver of an automobile or other motor vehicle. Such behavior might include rude gestures, verbal insults, deliberately driving in an unsafe or threatening manner, or making threats. Road rage can lead to altercations, assaults, and collisions which result in injuries and even deaths. It can be thought of as an extreme case ofaggressive driving.
The term originated in the United States in 1987–1988 (specifically, from Newscasters atKTLA, a local television station), when a rash of freeway shootings occurred on the 405, 110 and 10 freeways in Los Angeles, California. These shooting sprees even spawned a response from the AAA Motor Club to its members on how to respond to drivers with road rage or aggressive maneuvers and gestures.[22]
Congestion has the benefit of encouraging motorists to retime their trips so that expensive road space is in full use for more hours per day. It may also encourage travellers to pick alternate modes with a lower environmental impact, such as public transport or bicycles.[32]
It has been argued that traffic congestion, by reducing road speeds in cities, could reduce the frequency and severity of road crashes.[33]More recent research suggests that a U-shaped curve exists between the number of accidents and the flow of traffic, implying that more accidents happen not only at high congestion levels, but also when there are very few vehicles on the road.[34]
City planningandurban designpractices can have a huge impact on levels of future traffic congestion, though they are of limited relevance for short-term change.
Congestion can be reduced by either increasing road capacity (supply), or by reducing traffic (demand). Capacity can be increased in a number of ways, but needs to take account oflatent demandotherwise it may be used more strongly than anticipated. Critics of the approach of adding capacity have compared it to "fightingobesityby letting out your belt" (inducing demand that did not exist before). For example, when new lanes are created, households with a second car that used to be parked most of the time may begin to use this second car for commuting.[40][41]Reducing road capacity has in turn been attacked as removing free choice as well as increasing travel costs and times, placing an especially high burden on the low income residents who must commute to work.[citation needed]
Increased supply can include:
Reduction of demand can include:
Use of so-calledintelligent transportation systems, which guide traffic:
Traffic during peak hours in major Australian cities, such as Sydney, Melbourne, Brisbane and Perth, is usually very congested and can cause considerable delay for motorists. Australians rely mainly on radio and television to obtain current traffic information. GPS, webcams, and online resources are increasingly being used to monitor and relay traffic conditions to motorists.[citation needed] Based on a survey in 2024, Brisbane is the most congested city in Australia and the 10th most congested in the world, with drivers losing an average of 84 hours in traffic throughout the year.[67]
Traffic jams have become intolerable in Dhaka. Major reasons include the total absence of a rapid transit system; the lack of an integrated urban planning scheme for over 30 years;[68] poorly maintained road surfaces, with potholes rapidly eroded further by frequent flooding and poor or non-existent drainage;[69] haphazard stopping and parking;[70] poor driving standards;[71] and a total lack of alternative routes, with several narrow and (nominally) one-way roads.[72][73]
According toTimemagazine,São Paulohas the world's worst daily traffic jams.[9]Based on reports from theCompanhia de Engenharia de Tráfego, the city's traffic management agency, the historical congestion record was set on May 23, 2014, with 344 kilometres (214 mi) of cumulative queues around the city during the evening rush hour.[74]The previous record occurred on November 14, 2013, with 309 kilometres (192 mi) of cumulative queues.[74]
Despite the implementation since 1997 of road space rationing by the last digit of the plate number during rush hours every weekday, traffic in this 20-million-strong city still experiences severe congestion. According to experts, this is due to the accelerated rate of motorization occurring since 2003 and the limited capacity of public transport. In São Paulo, traffic is growing at a rate of 7.5% per year, with almost 1,000 new cars bought in the city every day.[75] The subway had only 61 kilometres (38 mi) of lines, though a further 35 kilometres were under construction or planned by 2010. Every day, many citizens spend three to four hours behind the wheel. In order to mitigate the aggravating congestion problem, the road space rationing program was expanded on June 30, 2008, to include and restrict trucks and light commercial vehicles.[76][77]
According to the Toronto Board of Trade, in 2010,Torontois ranked as the most congested city of 19 surveyed cities, with an average commute time of 80 minutes.[80]
The Chinese city of Beijing has operated license plate rationing since the 2008 Summer Olympics, whereby each car is banned from the urban core one workday per week, depending on the last digit of its license plate. As of 2016, 11 major Chinese cities have implemented similar policies.[81] Towards the end of 2010, Beijing announced a series of drastic measures to tackle the city's chronic traffic congestion, such as limiting the number of new plates issued to passenger cars to 20,000 a month, barring vehicles with non-Beijing plates from entering areas within the Fifth Ring Road during rush hours, and expanding its subway system.[82] The government aims to cap the number of locally registered cars in Beijing below 6.3 million by the end of 2020.[83] In addition, more than nine major Chinese cities including Shanghai, Guangzhou and Hangzhou have started limiting the number of new plates issued to passenger cars in an attempt to curb the growth of car ownership.[84][85] In response to the increased demand for public transit caused by these policies, aggressive programs to rapidly expand public transport systems in many Chinese cities are currently underway.[86]
A unique Chinese phenomenon of severe traffic congestion occurs duringChunyun Periodor Spring Festival travel season.[87]It is a long-held tradition for most Chinese people to reunite with their families duringChinese New Year. People return to their hometown to have areunion dinnerwith their families onChinese New Year. It has been described as the largest annual human migration in the world.[88][89]Since theeconomic boomandrapid urbanizationof China since the late 1970s, many people work and study a considerable distance from their hometowns. Traffic flow is typically directional, with large amounts of the population working in more developed coastal provinces needing travel to their hometowns in the less developed interior. The process reverses near the end of Chunyun. With almost 3 billion trips[90]made in 40 days of the 2016 Chunyun Period, the Chinese intercity transportation network is extremely strained during this period.
The August 2010China National Highway 110 traffic jaminHebeiprovince caught media attention for its severity, stretching more than 100 kilometres (62 mi) from August 14 to 26, including at least 11 days of totalgridlock.[91][92][93]The event was caused by a combination of road works and thousands of coal trucks fromInner Mongolia's coalfields that travel daily to Beijing. TheNew York Timeshas called this event the "Great Chinese Gridlock of 2010."[93][94]The congestion is regarded as the worst in history by duration, and is one of the longest in length after the 175 kilometres (109 mi) long Lyon-Paris traffic jam in France on February 16, 1980.
Recently, the City Brain system has become active in Hangzhou, somewhat reducing traffic congestion.[95]
A 2021 study of subway constructions in China found that in the first year of a new subway line, road congestion declined.[96]
Since the 1970s, traffic on the streets of Athens has increased dramatically, with the existing road network unable to serve the ever-increasing demand. It has also caused an environmental burden, such as photochemical smog. To deal with this, the Daktylios scheme has been enforced.
The number of vehicles in India is quickly increasing as a growing middle class can now afford to buy cars. India's road conditions have not kept up with the exponential growth in number of vehicles.
According to a 2015 study by motor oil company Castrol, Jakarta was found to be the worst city in the world for traffic congestion. Relying on information from TomTom navigation devices in 78 countries, the index found that drivers there stop and start their cars 33,240 times per year on the road. After Jakarta, the worst cities for traffic were Istanbul, Mexico City, Surabaya, and St. Petersburg.[97]
Daily congestion in Jakarta is not a recent problem. The expansion of commercial areas without accompanying road expansion was already worsening daily congestion, even on main roads such as Jalan Jenderal Sudirman, Jalan M.H. Thamrin, and Jalan Gajah Mada, by the mid-1970s.[98]
In 2016, 22 people died as a result of traffic congestion in Java. They were among those stuck in a three-day traffic jam at a toll exit in Brebes, Central Java, called Brebes Exit or 'Brexit'. The jam stretched for 21 km, with thousands of cars clogging the highway. The victims died of carbon monoxide poisoning, fatigue, or heat.[99]
New Zealand has followed strongly car-oriented transport policies since after World War II, and currently has one of the highest car-ownership rates per capita in the world, after the United States.[101]Auckland, where one third of the country's population lives, is New Zealand's most congested city; it has been labeled worse than New York for traffic congestion, with commuters sitting in traffic for 95 hours per year.[100]Traffic congestion in New Zealand is increasing, with drivers on New Zealand's motorways reported to be struggling to exceed 20 km/h on an average commute, sometimes crawling along at 8 km/h for more than half an hour.
According to a survey by Waze, traffic congestion in Metro Manila is the "worst" in the world, ranking ahead of Rio de Janeiro, São Paulo, and Jakarta.[102]It is worsened by violations of traffic laws, such as illegal parking, illegal loading and unloading, beating the red light, and wrong-way driving.[103]Traffic congestion in Metro Manila is caused by the large number of registered vehicles, the lack of roads, and overpopulation, especially in the cities of Manila and Caloocan, as well as the municipality of Pateros.[104]
Traffic caused losses of ₱137.5 billion to the economy in 2011, and unbuilt road and railway projects also cause worsening congestion.[105]The Japan International Cooperation Agency (JICA) feared that daily economic losses would reach ₱6 billion by 2030 if traffic congestion could not be controlled.[106]
In recent years, the Istanbul Metropolitan Municipality has made huge investments in intelligent transportation systems and public transportation. Despite this, traffic remains a significant problem in Istanbul, which has been ranked as having the second most congested[107]and the most sudden-stopping[108]traffic in the world. Travel times in Turkey's largest city take on average 55 percent longer than they should, even in relatively less busy hours.[109]
In the United Kingdom the inevitability of congestion in some urban road networks has been officially recognized since theDepartment for Transportset down policies based on the reportTraffic in Townsin 1963:
Even when everything that it is possible to do by way of building new roads and expanding public transport has been done, there would still be, in the absence of deliberate limitation, more cars trying to move into, or within, our cities than could possibly be accommodated.[110]
The Department for Transport sees growing congestion as one of the most serious transport problems facing the UK.[111]On December 1, 2006,Rod Eddingtonpublished a UK government-sponsoredreport into the future of Britain's transport infrastructure. The Eddington Transport Study set out the case for action to improve road and rail networks, as a "crucial enabler of sustained productivity and competitiveness". Eddington has estimated that congestion may cost the economy of England £22 bn a year in lost time by 2025. He warned that roads were in serious danger of becoming so congested that the economy would suffer.[112]At the launch of the report Eddington told journalists and transport industry representatives introducingroad pricingto encourage drivers to drive less was an "economic no-brainer". There was, he said "no attractive alternative". It would allegedly cut congestion by half by 2025, and bring benefits to the British economy totaling £28 bn a year.[113]
Acongestion chargefor driving in central London was introduced in 2003. In 2013, ten years later,Transport for Londonreported that the scheme resulted in a 10% reduction in traffic volumes from baseline conditions, and an overall reduction of 11% in vehicle kilometers in London. Despite these gains, traffic speeds in central London became progressively slower.
TheTexas Transportation Instituteestimated that, in 2000, the 75 largest metropolitan areas experienced 3.6 billion vehicle-hours of delay, resulting in 5.7 billion U.S. gallons (21.6 billion liters) of wasted fuel and $67.5 billion in lost productivity, or about 0.7% of the nation'sGDP. It also estimated that the annual cost of congestion for each driver was approximately $1,000 in very large cities and $200 in small cities. Traffic congestion is increasing in major cities and delays are becoming more frequent in smaller cities and rural areas.
An estimated 30% of urban traffic consists of cars looking for parking.[114]
According to traffic analysis firmINRIXin 2019,[115]the top 31 worst US traffic congested cities (measured in average hours wasted per vehicle for the year) were:
The most congested highway in the United States, according to a 2010 study of freight congestion (truck speed and travel time), is Chicago'sInterstate 290at theCircle Interchange. The average truck speed was just 29 mph (47 km/h).[116]
Bianchi Alves, B., & Darido, G. (February 7, 2016). Sustainable cities, two related challenges: high quality mobility on foot and efficient urban logistics (Part II). Retrieved November 2, 2019, from https://blogs.worldbank.org/transport/sustainable-cities-two-related-challenges-high-quality-mobility-foot-and-efficient-urban-logistics-1.
2019 Top 100 Truck Bottlenecks. (February 14, 2019). Retrieved November 3, 2019, from https://truckingresearch.org/2019/02/06/atri-2019-truck-bottlenecks/.
Haag, M., & Hu, W. (October 27, 2019). 1.5 Million Packages a Day: The Internet Brings Chaos to N.Y. Streets. Retrieved November 1, 2019, from https://www.nytimes.com/2019/10/27/nyregion/nyc-amazon-delivery.html?searchResultPosition=1.
Popovich, N., & Lu, D. (October 10, 2019). The Most Detailed Map of Auto Emissions in America. Retrieved November 1, 2019, from https://www.nytimes.com/interactive/2019/10/10/climate/driving-emissions-map.html?module=inline.
Reed, S. (September 21, 2018). In London, Electric Trucks Are Helping UPS Make 'Eco-Friendly' Deliveries. Retrieved November 3, 2019, from https://www.nytimes.com/2018/09/21/business/energy-environment/electric-ups-trucks-in-london.html?module=inline.
Rooney, K. (April 3, 2019). Online shopping overtakes a major part of retail for the first time ever. Retrieved November 2, 2019, from https://www.cnbc.com/2019/04/02/online-shopping-officially-overtakes-brick-and-mortar-retail-for-the-first-time-ever.html.
|
https://en.wikipedia.org/wiki/Traffic_jam
|
Ingraph theory, aflow network(also known as atransportation network) is adirected graphwhere each edge has acapacityand each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often inoperations research, a directed graph is called anetwork, the vertices are callednodesand the edges are calledarcs. A flow must satisfy the restriction that the amount of flow into a node equals the amount of flow out of it, unless it is asource, which has only outgoing flow, orsink, which has only incoming flow. A flow network can be used to model traffic in a computer network, circulation with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which something travels through a network of nodes. As such, efficient algorithms for solving network flows can also be applied to solve problems that can be reduced to a flow network, including survey design, airline scheduling,image segmentation, and thematching problem.
Anetworkis a directed graphG= (V,E)with a non-negativecapacityfunctioncfor each edge, and without multiple arcs (i.e. edges with the same source and target nodes).Without loss of generality, we may assume that if(u,v) ∈E, then(v,u)is also a member ofE: if(v,u) ∉E, we may add(v,u)toEand setc(v,u) = 0.
If two nodes inGare distinguished – one as the sourcesand the other as the sinkt– then(G,c,s,t)is called aflow network.[1]
Flow functions model the net flow of units between pairs of nodes, and are useful when asking questions such aswhat is the maximum number of units that can be transferred from the source node s to the sink node t?The amount of flow between two nodes is used to represent the net amount of units being transferred from one node to the other.
Theexcessfunctionxf:V→ ℝ represents the net flow entering a given nodeu(i.e. the sum of the flows enteringuminus the sum of the flows leaving it) and is defined by xf(u) = ∑w∈Vf(w,u) − ∑w∈Vf(u,w). A nodeuis said to beactiveifxf(u) > 0(i.e. the nodeuconsumes flow),deficientifxf(u) < 0(i.e. the nodeuproduces flow), orconservingifxf(u) = 0. In flow networks, the sourcesis deficient, and the sinktis active.
Pseudo-flows, feasible flows, and pre-flows are all examples of flow functions.
Thevalue|f|of a feasible flowffor a network is the net flow into the sinktof the flow network, that is:|f| =xf(t). Note that the flow value in a network is also equal to the total outgoing flow of the sources, that is:|f| = −xf(s). Also, if we defineAas a set of nodes inGsuch thats∈Aandt∉A, the flow value is equal to the total net flow going out of A (i.e.|f| =fout(A) −fin(A)).[2]In other words, the flow value of a network is the total amount of flow fromstot.
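Computing the excess and the flow value is a direct transcription of these definitions. Below is a minimal Python sketch, assuming the flow is given as a mapping from arcs (u, v) to flow values; the representation and names are illustrative, not part of the formal definition:

```python
# Excess x_f(u): net flow entering u; the flow value |f| equals x_f(t).
# `flow` maps arcs (u, v) to f(u, v); this representation is an assumption.

def excess(flow, u):
    """Return x_f(u): sum of inflows into u minus sum of outflows out of u."""
    inflow = sum(val for (a, b), val in flow.items() if b == u)
    outflow = sum(val for (a, b), val in flow.items() if a == u)
    return inflow - outflow

flow = {("s", "a"): 3, ("a", "t"): 3}
print(excess(flow, "t"))   # |f| = x_f(t) = 3 (the sink is active)
print(excess(flow, "s"))   # -|f| = -3 (the source is deficient)
print(excess(flow, "a"))   # 0 (intermediate nodes are conserving)
```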
Flow decomposition[3]is a process of breaking down a given flow into a collection of path flows and cycle flows. Every flow through a network can be decomposed into one or more paths and corresponding quantities, such that each edge in the flow equals the sum of all quantities of paths that pass through it. Flow decomposition is a powerful tool in optimization problems to maximize or minimize specific flow parameters.
We do not use multiple arcs within a network because we can combine those arcs into a single arc. To combine two arcs into a single arc, we add their capacities and their flow values, and assign those to the new arc:
Along with the other constraints, the skew symmetry constraint must be remembered during this step to maintain the direction of the original pseudo-flow arc. Adding flow to an arc is the same as adding an arc with the capacity of zero.[citation needed]
Theresidual capacityof an arcewith respect to a pseudo-flowfis denotedcf, and it is the difference between the arc's capacity and its flow. That is,cf(e) =c(e) −f(e). From this we can construct aresidual network, denotedGf(V,Ef), with a capacity functioncfwhich models the amount ofavailablecapacity on the set of arcs inG= (V,E). More specifically, capacity functioncfof each arc(u,v)in the residual network represents the amount of flow which can be transferred fromutovgiven the current state of the flow within the network.
This concept is used in theFord–Fulkerson algorithm, which computes themaximum flowin a flow network.
Note that there can be an unsaturated path (a path with available capacity) fromutovin the residual network, even though there is no such path fromutovin the original network.[citation needed]Since flows in opposite directions cancel out,decreasingthe flow fromvtouis the same asincreasingthe flow fromutov.
Anaugmenting pathis a path(u1,u2, ...,uk)in the residual network, whereu1=s,uk=t, andcf(ui,ui+1) > 0for all1 ≤i<k. More simply, an augmenting path is an available flow path from the source to the sink. A network is at maximum flow if and only if there is no augmenting path in the residual networkGf.
Thebottleneckis the minimum residual capacity of all the edges in a given augmenting path.[2]See the example explained in the "Example" section of this article. The flow network is at maximum flow if and only if no augmenting path exists; equivalently, every path from the source to the sink has a bottleneck of residual capacity zero. Conversely, if an augmenting path exists, its bottleneck is greater than zero, and the network is not at maximum flow.
The term "augmenting the flow" for an augmenting path means increasing the flowfof each arc in the augmenting path by the bottleneck value. Augmenting the flow corresponds to pushing additional flow along the augmenting path until there is no remaining available residual capacity in the bottleneck.
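Putting the last few definitions together, the following sketch implements Ford–Fulkerson with breadth-first search for augmenting paths (the Edmonds–Karp variant): it maintains residual capacities, finds an augmenting path, computes its bottleneck, and augments until no path remains. The dict-of-dicts graph representation is an assumption for illustration:

```python
# Edmonds-Karp sketch: Ford-Fulkerson with BFS for augmenting paths.
# `capacity` is a dict of dicts, capacity[u][v] = c(u, v) (an assumed layout).
from collections import defaultdict, deque

def max_flow(capacity, s, t):
    # Residual capacities start equal to the original capacities;
    # reverse arcs implicitly start at zero, per the construction above.
    residual = defaultdict(lambda: defaultdict(int))
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] += c
    value = 0
    while True:
        # BFS for an augmenting path in the residual network.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cf in residual[u].items():
                if cf > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:        # no augmenting path: flow is maximum
            return value
        # Bottleneck: minimum residual capacity along the path found.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # Augment: push the bottleneck amount along the path.
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        value += bottleneck

print(max_flow({"s": {"a": 5, "b": 3}, "a": {"t": 3}, "b": {"t": 4}},
               "s", "t"))   # 6
```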
Sometimes, when modeling a network with more than one source, asupersourceis introduced to the graph.[4]This consists of a vertex connected to each of the sources with edges of infinite capacity, so as to act as a global source. A similar construct for sinks is called asupersink.[5]
In Figure 1 you see a flow network with source labeleds, sinkt, and four additional nodes. The flow and capacity are denotedf/c. Notice how the network upholds the capacity constraint and the flow conservation constraint. The total amount of flow fromstotis 5, which can be easily seen from the fact that the total outgoing flow fromsis 5, which is also the incoming flow tot. By the skew symmetry constraint, the flow fromctoais −2 because the flow fromatocis 2.
In Figure 2 you see the residual network for the same given flow. Notice how there is positive residual capacity on some edges where the original capacity is zero in Figure 1, for example for the edge(d,c). This network is not atmaximum flow. There is available capacity along the paths(s,a,c,t),(s,a,b,d,t)and(s,a,b,d,c,t), which are then the augmenting paths.
The bottleneck of the(s,a,c,t)path is equal to min(c(s,a) −f(s,a),c(a,c) −f(a,c),c(c,t) −f(c,t)) = min(cf(s,a),cf(a,c),cf(c,t)) = min(5 − 3, 3 − 2, 2 − 1) = min(2, 1, 1) = 1.
Picture a series of water pipes, fitting into a network. Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of water. Anywhere that pipes meet, the total amount of water coming into that junction must be equal to the amount going out, otherwise we would quickly run out of water, or we would have a buildup of water. We have a water inlet, which is the source, and an outlet, the sink. A flow would then be one possible way for water to get from source to sink so that the total amount of water coming out of the outlet is consistent. Intuitively, the total flow of a network is the rate at which water comes out of the outlet.
Flows can pertain to people or material over transportation networks, or to electricity overelectrical distributionsystems. For any such physical network, the flow coming into any intermediate node needs to equal the flow going out of that node. This conservation constraint is equivalent toKirchhoff's current law.
Flow networks also find applications inecology: flow networks arise naturally when considering the flow of nutrients and energy between different organisms in afood web. The mathematical problems associated with such networks are quite different from those that arise in networks of fluid or traffic flow. The field of ecosystem network analysis, developed byRobert Ulanowiczand others, involves using concepts frominformation theoryandthermodynamicsto study the evolution of these networks over time.
The simplest and most common problem using flow networks is to find what is called themaximum flow, which provides the largest possible total flow from the source to the sink in a given graph. There are many other problems which can be solved using max flow algorithms, if they are appropriately modeled as flow networks, such asbipartite matching, theassignment problemand thetransportation problem. Maximum flow problems can be solved inpolynomial timewith various algorithms (see table). Themax-flow min-cut theoremstates that finding a maximal network flow is equivalent to finding acutof minimum capacity that separates the source and the sink, where a cut is the division of vertices such that the source is in one division and the sink is in another.
In amulti-commodity flow problem, you have multiple sources and sinks, and various "commodities" which are to flow from a given source to a given sink. This could be for example various goods that are produced at various factories, and are to be delivered to various given customers through thesametransportation network.
In aminimum cost flow problem, each edge(u,v)has a given costk(u,v), and the cost of sending the flowf(u,v)across the edge isf(u,v) ·k(u,v). The objective is to send a given amount of flow from the source to the sink, at the lowest possible price.
In acirculation problem, you have a lower boundℓ(u,v)on the edges, in addition to the upper boundc(u,v). Each edge also has a cost. Often, flow conservation holds forallnodes in a circulation problem, and there is a connection from the sink back to the source. In this way, you can dictate the total flow withℓ(t,s)andc(t,s). The flowcirculatesthrough the network, hence the name of the problem.
In anetwork with gainsorgeneralized network, each edge has again, a real number (not zero) such that, if the edge has gaing, and an amountxflows into the edge at its tail, then an amountgxflows out at the head.
In asource localization problem, an algorithm tries to identify the most likely source node of information diffusion through a partially observed network. This can be done in linear time for trees and cubic time for arbitrary networks and has applications ranging from tracking mobile phone users to identifying the originating source of disease outbreaks.[8]
|
https://en.wikipedia.org/wiki/Flow_network
|
Deadline-monotonic priority assignmentis a priority assignment policy used withfixed-priority pre-emptive scheduling.
With deadline-monotonicpriorityassignment,tasksare assigned priorities according to theirdeadlines. The task with the shortest deadline is assigned the highest priority.[1]This priority assignment policy is optimal for a set of periodic or sporadic tasks which comply with the following system model:
If restriction 7 is lifted, then "deadline minus jitter" monotonic priority assignment is optimal.
If restriction 1 is lifted, allowing deadlines greater than periods, then Audsley's optimal priority assignmentalgorithmmay be used to find the optimal priority assignment.
Deadline monotonic priority assignment is not optimal for fixed priority non-pre-emptive scheduling.
A fixed priority assignment policy P is referred to as optimal if no task set exists that is schedulable using a different priority assignment policy but not schedulable using policy P. In other words: deadline-monotonic priority assignment (DMPA) is optimal if any process set Q that is schedulable by some priority scheme W is also schedulable by DMPA.[2]
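As a concrete illustration, deadline-monotonic assignment reduces to sorting tasks by their relative deadlines. The sketch below uses illustrative task names and parameters (not from any cited task set), with 0 taken as the highest priority:

```python
# Deadline-monotonic priority assignment: shortest relative deadline
# receives the highest priority. Task data here are illustrative.

tasks = [
    {"name": "ctrl", "period": 50,  "deadline": 10,  "wcet": 2},
    {"name": "log",  "period": 100, "deadline": 100, "wcet": 20},
    {"name": "net",  "period": 40,  "deadline": 25,  "wcet": 5},
]

# Sort by deadline and assign priorities 0, 1, 2, ... in that order.
for prio, task in enumerate(sorted(tasks, key=lambda t: t["deadline"])):
    task["priority"] = prio

for t in sorted(tasks, key=lambda t: t["priority"]):
    print(t["name"], "-> priority", t["priority"])
# ctrl -> priority 0, net -> priority 1, log -> priority 2
```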
|
https://en.wikipedia.org/wiki/Deadline-monotonic_scheduling
|
DDC-I, Inc.is aprivately held companyproviding software development ofreal-time operating systems,software development tools, and software services forsafety-criticalembedded applications, headquartered inPhoenix, Arizona. It was first created in 1985 as the Danish firmDDC International A/S(also known asDDC-I A/S), a commercial outgrowth ofDansk Datamatik Center, a Danish software research and development organization of the 1980s. The American subsidiary was created in 1986. For many years, the firm specialized inlanguage compilersfor theprogramming languageAda.
In 2003, the Danish office was closed and all operations moved to the Phoenix location.
The origins of DDC International A/S lay inDansk Datamatik Center, a Danish software research and development organization that was formed in 1979 to demonstrate the value of using modern techniques, especially those involvingformal methods, in software design and development. Among its several projects was the creation of a compiler system for the programming languageAda. Ada was a difficult language to implement and early compiler projects for it often proved disappointments.[1]But the DDC compiler design was sound and it first passed theUnited States Department of Defense-sponsoredAda Compiler Validation Capability(ACVC) standardized suite of language and runtime tests on aVAX/VMSsystem in September 1984.[2]As such, it was the first European Ada compiler to meet this standard.[3][4]
Success of the Ada project led to a separate company being formed in 1985, called DDC International A/S, with the purpose of commercializing the Ada compiler system product.[5]Like its originator, it was based inLyngby,Denmark. Ole N. Oest was named the managing director of DDC International.[6]In 1986, DDC-I, Inc. was founded as the American subsidiary company.[7]Located inPhoenix, Arizona, it focused on sales, customer support, and engineering consulting activities in the United States.[8]
DDC-I established a business in selling the Ada compiler system product, named DACS, directly to firms, both as software to develop projects in Ada with, and assource codeto computer makers and others, who would rehost or retarget it to otherprocessorsandoperating systems.[9][10]
The first business sold both native compilers andcross compilers, with the latter more common since Ada was primarily used in theembedded systemsrealm. One of the first cross compilers that DDC-I developed was from VAX/VMS to theIntel 8086andIntel 80286; the effort was already underway by early 1985.[9]It began as a joint venture with the Italian defense electronics companySeleniathat would target both their MARA-860 and MARA-286 multi-microprocessor computers, based on the 8086 and 80286 architectures, and generic embedded and OS-hosting 8086 and 80286 systems.[11]This work was the start of what would become the largest-selling product line for the firm. DDC-I developed a reputation for quality Ada cross compilers and runtime systems forIntel 80x86processors.[8]
The second business licensed what became termed the DDC OEM Compiler Kit[10]to customers who used the Ada front end to build compilers for other hosts or targets, or other tools such asVLSIdesign tools. In a September 1985 meeting inLund, Sweden, several of the OEM Kit customers formed the DDC Ada Compiler Retargeter's Group.[12]It held at least three meetings over the course of 1985 and 1986. The early OEM customers included theUniversity of Lund,Defence Materiel Administration, andEricsson Radio Systemsin Sweden;SoftplanandNokia Information Systemsin Finland;SeleniaandOlivettiin Italy;ICL Defence SystemsandSTL Ltdin the United Kingdom;Aitech Software Engineeringin Israel; andAdvanced Computer Techniques,Rockwell Collins,Control Data Corporation, andGeneral Systems Groupin the United States.[13]
Later developers were often less well versed in formal methods and did not use them in their work on the compiler.[14]This was even more so in the case of companies retargeting the compiler, many of which were unfamiliar with the Ada language.[15][16]
DDC-I was in the same market as several other Ada compiler firms, includingAlsys,TeleSoft,Verdix,Tartan Laboratories, andTLD Systems.[4](DDC-I would go on to stay in business longer than any of these others.[14])
As with other Ada compiler vendors, much of the time of DDC-I engineers was spent in conforming to the large, difficult ACVC tests.[17][18]
Starting in 1988 and continuing for several years, DDC-I consultants collaborated withHoneywell Air Transport Systemsto retarget and optimize the DDC-I Ada compiler to theAMD 29050processor.[19][20]This DDC-I-based cross compiler system was used to develop the primary flight software for theBoeing 777airliner.[8][20]This software, named theAirplane Information Management System, would become arguably the best-known of any Ada-in-use project, civilian or military.[21]Some 550 developers at Honeywell worked on the flight system and it was publicized as a major Ada success story.[20]
In October 1991, it was announced that DDC-I had acquired the Ada andJOVIALlanguage embedded systems business ofInterACT, which had become a venture of Advanced Computer Techniques.[22]This wholly owned New York-based entity was briefly named DDC-Inter[22]before being subsumed into DDC-I proper. This brought Ada cross compilers for theMIL-STD-1750AandMIPS R3000processors, and JOVIAL language cross compilers for the MIL-STD-1750A andZilog Z8002into the product line. The MIPS product was one which DDC-I emphasised, with engineering efforts that included automatic recognition of certain tasking optimizations,[23]and work in the U.S. Air Force-sponsored Common Ada Runtime System (CARTS) project towards providing standard interfaces into Ada runtime environments.[24][25]
At the end of 1993, the New York office was closed, and its work transferred to the Phoenix office.
By the early 1990s, DDC-I offered Ada native compilers for VAX/VMS,Sun-3andSPARCunderSunOS, andIntel 80386underUNIX System VandOS/2, and offered cross compilers for theMotorola 680x0andIntel i860in addition to the abovementioned targets.[26][27]
In the early 1990s, DDC-I worked on redesigning the compiler system for the wide-ranging Ada 95 revision of the language standard. They used a newobject-based programmingdesign and still adhered to a formal methods approach as well, usingVDM-SL.[28]The work was done under sponsorship of the European Community-basedOpen Microprocessor Initiative's Global Language and Uniform Environment -project (OMI/GLUE), where DDC-I's role was to create a compiler targeting theArchitecture Neutral Distribution Format(ANDF) intermediate form, with the intention of bringing Ada 95 to more platforms quickly.[28][29]As part of this work, DDC-I collaborated with theDefence Evaluation and Research Agencyin expanding some of ANDF's abilities to express semantics of Ada and the fast-growing programming languageC++.[30]Work in Ada-specific areas, such asbounds-checking elimination, was done to get optimal run-time performance.[31]
The Ada software environment was originally thought to be a promising market.[32]But the Ada compiler business proved to be a difficult one to be in.[33]During this time, 1987–97, a U.S. government mandate for Ada use was in effect, albeit with some waivers granted.[34]Many of the advantages of the language for general-purpose programming were not seen as such by the general software engineering community or by educators.[35]The sales situation was challenging, with periodic small layoffs. Despite consolidation among other Ada tool providers, DDC-I remained an independent company.[36]
In any case, DDC-I was an enthusiastic advocate of the Ada language, for use in the company[37]and externally. A paper one of its engineers published in 1993 assessed Ada 95's object-oriented features favorably to those of C++ and attracted some attention.[38]
At the same time, the firm attempted to expand and augment its product line. The RAISE toolset was available, as was Cedar, a design tool for real-time systems. Also offered wasBeologic, a tool to develop and run state/event parts of applications, that had been licensed fromBang & Olufsenand integrated with the Ada compiler system.[39]The biggest effort was in the direction of C++. DDC-I began offering 1st Object Exec, a C++-basedreal-time operating systemintended for direct, object-level support of embedded applications.[40]Despite considerable efforts during 1993–94, 1st Object Exec failed to gain traction in the marketplace.
The one area where Ada did gain a solid foothold was in real-time, high-reliability, high-integrity, safety-critical applications such as aerospace.[41][34][42]Based on its experience with Honeywell and other customers, DDC-I acquired expertise in the mapping of Ada language and runtime features to the requirements of safety-critical certifications, in particular those for theDO-178B(Software Considerations in Airborne Systems and Equipment Certification) standard, and provided tools for that process.[43]Such applications continued even after the Ada mandate was dropped in 1997.[34]For instance, in 1997 the firm was awarded a joint contract withSikorsky AircraftandBoeing Defense & Space Group's Helicopters Division to develop software to be used in theBoeing/Sikorsky RAH-66 Comanche.[44]
In March 1998, DDC-I acquired fromTexas Instrumentsthe development and sales and marketing rights to the Tartan Ada compilers for theIntel i960, Motorola 680x0, and MIL-STD-1750A targets.[45]
Support for mixed language development was added in 2000 with the addition of the programming languageCas part of DDC-I's mixed-language integrated development environment for SCORE (for Safety-Critical, Object-oriented, Real-time Embedded).[46]Leveraging the ANDF format, theDWARFstandardized debugging format, and the OMI protocol for communicating with target board debug monitors, SCORE was able to provide a common building and debugging environment for real-time application developers.[46]Support forEmbedded C++was added to SCORE in 2003, by which time it could integrate with a variety of target board scenarios on Intel x86 andPower PCprocessors.[47]The C and Embedded C++ compilers for ANDF came from a licensing arrangement for theTenDRA Compiler(later DDC-I became the maintainer of those compilers). Subsequently, Ada 95 support for the older 1750A andTMS320C4xprocessors was added to SCORE.[48]
By April 2003, the industry's move away from Ada and the declining position of the aircraft industry had taken their toll, and DDC-I suffered significant financial losses. The company decided to close its Denmark office in Lyngby and move all operations to Phoenix.[49]
In September 2005, the company named Bob Morris, formerly ofLynuxWorks, as its president and chief executive officer.[50]Oest became Chief Technology Officer.[51]In April 2006, DDC-I moved to new offices in northern Phoenix, stating that it was expanding and that it expected revenue to grow 40–50 percent over the previous year.[52]
Since 2006, the company has been contributing to theJava Expert Groupfor Safety Critical Java.[53]This work, which uses theReal-time specification for Javaas a base and then specifies language and library subsets and coding rules for use to provide sufficient determinism, is seen by the firm's representatives as making Java possibly equal or superior to either Ada or C++ as a language for safety-critical applications.[54]The company has viewed the safety-critical Java profile as one that can help the defense industry deal with the issue of aging software and hardware applications.[55]By 2008, DDC-I was referring to Ada as alegacylanguage and offering semi-automated tools and professional services to help customers migrate to newer solutions.[51]
In November 2008, the company entered the embeddedreal-time operating system(RTOS) market with two products, Deos and HeartOS.[56][57][58]Both were based on underlying software technology originated atHoneywell Internationaland already deployed on many commercial and military aircraft.[56]As part of the action, DDC-I hired some of the key Honeywell engineering staff who had designed Deos.[56]Other firms in the same RTOS market segment as DDC-I includeLynuxWorks,Wind River Systems,SYSGO, andExpress Logic.[59]
Following its entry into the RTOS market segment in 2008, products and services associated with the Deos RTOS quickly became the core business focus and primary area of R&D investment for DDC-I. Major additions to the Deos product line include:ARINC 653interface support (2011); expanded support forARM Cortex-Abased processors in addition to the existing support forx86andPowerPCprocessors (2014); support for theFuture Airborne Capability Environment (FACE)Safety Base Operating System Segment (OSS) profile (2015); multicore processor support via its SafeMC Technology (2017); a FACE Conformance Certificate for the OSS Safety Base profile under FACE Technical Standard, Edition 3.0 (2019); the first RTOS FACE Conformance Certificate for the OSS Safety Base and Extended profiles under FACE Technical Standard, Edition 3.1 (2021); and completion of a second multicoreDO-178CDesign Assurance Level A (DAL A) verification on multipleARMandPowerPCprocessors (2023).
|
https://en.wikipedia.org/wiki/Deos
|
Real-Time Executive for Multiprocessor Systems(RTEMS), formerlyReal-Time Executive for Missile Systems, and thenReal-Time Executive for Military Systems, is areal-time operating system(RTOS) designed forembedded systems. It isfree and open-source software.
Development began in the late 1980s, with early versions available viaFile Transfer Protocol(FTP) as early as 1993. OAR Corporation managed the RTEMS project in cooperation with a steering committee until the early 2000s, when project management evolved into a subset of the core developers managing the project. In 2014, hosting was moved from OAR Corporation to the Oregon State UniversityOpen Source Lab.
RTEMS is designed for real-time embedded systems and to support various open application programming interface (API) standards including Portable Operating System Interface (POSIX) andμITRON(dropped in RTEMS 4.10[2]). The API now known as the Classic RTEMS API was originally based on the Real-Time Executive Interface Definition (RTEID) specification. RTEMS includes aportof theFreeBSDInternet protocol suite(TCP/IP stack) and support for variousfile systemsincludingNetwork File System(NFS) andFile Allocation Table(FAT).
RTEMS provides extensive multi-processing and memory-management services.[3]
RTEMS has been ported to various target processor architectures:
RTEMS is used in many application domains. The Experimental Physics and Industrial Control System (EPICS) community includes multiple people who are active RTEMS submitters. RTEMS is also popular for space uses since it supports multiple microprocessors developed for use in space, includingSPARCERC32andLEON,MIPS,ColdFire, andPowerPCarchitectures, which are available in space-hardened models. RTEMS is currently orbiting Mars as part of theElectra software radioonNASA'sMars Reconnaissance Orbiter[4]and theESA'sTrace Gas Orbiter,[5]and has passed by the sun on theParker Solar Probe.
RTEMS components are currently licensed under a mixture of licenses, including a GPL-2.0-derived license,[6]with the project working to re-license original components under thetwo paragraph BSD license.[7][8]
RTEMS was originally distributed under a modifiedGNU General Public License(GPL), allowing linking RTEMS objects with other files without needing the full executable to be covered by the GPL. This license is based on theGNAT Modified General Public Licensewith the language modified to not be specific to the programming languageAda.
|
https://en.wikipedia.org/wiki/RTEMS
|
Inqueueing theory, a discipline within the mathematicaltheory of probability,Kingman's formula, also known as the VUT equation, is an approximation for the mean waiting time in aG/G/1 queue.[1]The formula is the product of three terms which depend on utilization (U), variability (V) and service time (T). It was first published byJohn Kingmanin his 1961 paperThe single server queue in heavy traffic.[2]It is known to be generally very accurate, especially for a system operating close to saturation.[3]
Kingman's approximation states that the mean waiting time is approximately

E(Wq) ≈ ( ρ / (1 − ρ) ) · ( (ca² + cs²) / 2 ) · τ

where E(Wq) is the mean waiting time,τis the mean service time (i.e.μ= 1/τis the service rate),λis the mean arrival rate,ρ=λ/μis the utilization,cais thecoefficient of variationfor arrivals (that is, the standard deviation of interarrival times divided by the mean interarrival time) andcsis the coefficient of variation for service times.
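The formula translates directly into code. Below is a small Python sketch of the approximation; the function and parameter names are illustrative:

```python
# Kingman's approximation for the mean waiting time in a G/G/1 queue.
# tau = mean service time, lam = mean arrival rate, ca/cs = coefficients
# of variation of interarrival and service times (names are illustrative).

def kingman_wait(tau, lam, ca, cs):
    rho = lam * tau                       # utilization rho = lambda / mu
    assert rho < 1, "the approximation applies to a stable queue (rho < 1)"
    return (rho / (1 - rho)) * ((ca**2 + cs**2) / 2) * tau

# Example: 0.8 utilization, Poisson-like arrivals, low service variability.
print(kingman_wait(tau=1.0, lam=0.8, ca=1.0, cs=0.5))   # ~2.5
```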
|
https://en.wikipedia.org/wiki/Kingman%27s_formula
|
Advanced planning and scheduling(APS, also known asadvanced manufacturing) refers to amanufacturing management processby whichraw materialsand production capacity are optimally allocated to meet demand.[1]APS is especially well-suited to environments where simpler planning methods cannot adequately address complex trade-offs between competing priorities. Production scheduling is intrinsically very difficult due to the (approximately)factorialdependence of the size of the solution space on the number of items/products to be manufactured.
Traditionalproduction planningandschedulingsystems (such asmanufacturing resource planning) use a stepwise procedure to allocate material and production capacity. This approach is simple but cumbersome, and does not readily adapt to changes in demand, resource capacity or material availability. Materials and capacity are planned separately, and many systems do not consider material or capacity constraints, leading to infeasible plans. However, attempts to change over to newer systems have not always been successful, which has called for combining management philosophy with manufacturing practice.
Unlike previous systems, APS simultaneously plans and schedules production based on available materials, labor and plant capacity.
APS has commonly been applied where one or more of the following conditions are present:
Advanced planning & scheduling software enables manufacturing scheduling and advanced scheduling optimization within these environments.
|
https://en.wikipedia.org/wiki/Advanced_planning_and_scheduling
|
AGantt chartis abar chartthat illustrates aproject schedule.[1]It was designed and popularized byHenry Ganttaround the years 1910–1915.[2][3]Modern Gantt charts also show thedependencyrelationships between activities and the current schedule status.
A Gantt chart is a type of bar chart[4][5]that illustrates a project schedule.[6]This chart lists the tasks to be performed on the vertical axis, and time intervals on the horizontal axis.[4][7]The width of the horizontal bars in the graph shows the duration of each activity.[7][8]Gantt charts illustrate the start and finish dates of the terminal elements and summary elements of aproject.[1]Terminal elements and summary elements constitute thework breakdown structureof the project. Modern Gantt charts also show thedependency(i.e., precedence network) relationships between activities. Gantt charts can be used to show current schedule status using percent-complete shadings and a vertical "TODAY" line.
Gantt charts are sometimes equated with bar charts.[8][9]
Gantt charts are usually created initially using anearly start time approach, where each task is scheduled to start immediately when its prerequisites are complete. This method maximizes thefloat timeavailable for all tasks.[4]
Widely used in project planning in the present day, Gantt charts were considered revolutionary when introduced.[10]The first known tool of this type was developed in 1896 byKarol Adamiecki, who called it aharmonogram.[11]Adamiecki, however, published his chart only in Russian and Polish which limited both its adoption and recognition of his authorship.
In 1912,Hermann Schürch[de]published what could be considered Gantt charts while discussing a construction project. Charts of the type published by Schürch appear to have been in common use in Germany at the time;[12][13][14]however, the prior development leading to Schürch's work is unclear.[15]Unlike later Gantt charts, Schürch's charts did not display interdependencies, leaving them to be inferred by the reader. These were also static representations of a planned schedule.[16]
The chart is named afterHenry Gantt(1861–1919), who designed his chart around the years 1910–1915.[2][3]Gantt originally created his tool for systematic, routine operations. He designed this visualization tool to more easily measureproductivitylevels ofemployeesand gauge which employees were under- or over-performing. Gantt also frequently includedgraphicsand other visual indicators in his charts to track performance.[17]
One of the first major applications of Gantt charts was by the United States duringWorld War I, at the instigation ofGeneral William Crozier.[18]
The earliest Gantt charts were drawn on paper and therefore had to be redrawn entirely in order to adjust to schedule changes. For many years, project managers used pieces of paper or blocks for Gantt chart bars so they could be adjusted as needed.[19]Gantt's collaboratorWalter Polakovintroduced Gantt charts to theSoviet Unionin 1929 when he was working for theSupreme Soviet of the National Economy. They were used in developing theFirst Five Year Plan, supplying Russian translations to explain their use.[20][21]
In the 1980s,personal computersallowed widespread creation of complex and elaborate Gantt charts. The first desktop applications were intended mainly for project managers and project schedulers. With the advent of the Internet and increased collaboration over networks at the end of the 1990s, Gantt charts became a common feature of web-based applications, including collaborativegroupware.[citation needed]By 2012, almost all Gantt charts were made by software which can easily adjust to schedule changes.[19]
In 1999, Gantt charts were identified as "one of the most widely used management tools for project scheduling and control".[4]
In the following tables there are seven tasks, labeledathroughg. Some tasks can be done concurrently (aandb) while others cannot be done until their predecessor task is complete (canddcannot begin untilais complete). Additionally, each task has three time estimates: the optimistic time estimate (O), the most likely or normal time estimate (M), and the pessimistic time estimate (P). The expected time (TE) is estimated using thebeta probability distributionfor the time estimates, using the formula (O+ 4M+P) ÷ 6.
Once this step is complete, one can draw a Gantt chart or a network diagram.
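As a sketch of this preparatory step, the following Python snippet computes expected times with (O + 4M + P) ÷ 6 and derives early start times from the predecessor relation. The task data are illustrative, not the article's table, and tasks are listed in topological order:

```python
# PERT expected times TE = (O + 4M + P) / 6, then an early-start schedule:
# each task begins as soon as all its predecessors finish.
# Task letters, estimates, and dependencies below are illustrative.

tasks = {  # name: (optimistic, most_likely, pessimistic, predecessors)
    "a": (2, 4, 6,  []),
    "b": (3, 5, 9,  []),
    "c": (4, 5, 7,  ["a"]),
    "d": (4, 6, 10, ["a"]),
}

te = {k: (o + 4 * m + p) / 6 for k, (o, m, p, _) in tasks.items()}

start = {}
for k, (_, _, _, preds) in tasks.items():
    start[k] = max((start[q] + te[q] for q in preds), default=0)

for k in tasks:
    print(f"{k}: start {start[k]:.2f}, duration {te[k]:.2f}")
# Each (start, duration) pair becomes one horizontal bar of the Gantt chart.
```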
In a progress Gantt chart, tasks are shaded in proportion to the degree of their completion: a task that is 60% complete would be 60% shaded, starting from the left. A vertical line is drawn at the time index when the progress Gantt chart is created, and this line can then be compared with shaded tasks. If everything is on schedule, all task portions left of the line will be shaded, and all task portions right of the line will not be shaded. This provides a visual representation of how the project and its tasks are ahead or behind schedule.[22]
Linked Gantt charts contain lines indicating the dependencies between tasks. However, linked Gantt charts quickly become cluttered in all but the simplest cases.Critical path network diagramsare superior for visually communicating the relationships between tasks.[23]Nevertheless, Gantt charts are often preferred over network diagrams because Gantt charts are easily interpreted without training, whereas critical path diagrams require training to interpret.[9]Gantt chart software typically provides mechanisms to link task dependencies, although this data may or may not be visually represented.[4]Gantt charts and network diagrams are often used for the same project, both being generated from the same data by a software application.[4]
|
https://en.wikipedia.org/wiki/Gantt_chart
|
Kanban(Japanese:かんばん[kambaɴ]meaningsignboard) is aschedulingsystem forlean manufacturing(also called just-in-time manufacturing, abbreviated JIT).[2]Taiichi Ohno, anindustrial engineeratToyota, developed kanban to improve manufacturing efficiency.[3]The system takes its name from the cards that track production within a factory. Kanban is also known as theToyota nameplate systemin the automotive industry.
A goal of the kanban system is to limit the buildup of excess inventory at any point in production. Limits on the number of items waiting at supply points are established and then reduced as inefficiencies are identified and removed. Whenever a limit is exceeded, this points to an inefficiency that should be addressed.[4]
In kanban, problem areas are highlighted by measuring lead time and cycle time of the full process and process steps.[5]One of the main benefits of kanban is to establish an upper limit towork in process(commonly referred as "WIP") inventory to avoid overcapacity. Other systems with similar effect exist, for exampleCONWIP.[6]A systematic study of various configurations of kanban systems, such asgeneralized kanban[7]orproduction authorization card(PAC)[8]andextended kanban,[9]of which CONWIP is an important special case, can be found in Tayur (1993), and more recently Liberopoulos and Dallery (2000), among other papers.[10][11][12][13][14]
The system originates from the simplest visual stock replenishment signaling system, an empty box. This was first developed in the UK factories producingSpitfiresduring theSecond World War, and was known as the "two bin system"[citation needed]. In the late 1940s, Toyota started studying supermarkets with the idea of applying shelf-stocking techniques to the factory floor. In a supermarket, customers generally retrieve what they need at the required time—no more, no less. Furthermore, the supermarket stocks only what it expects to sell in a given time, and customers take only what they need, because future supply is assured. This observation led Toyota to view a process as being a customer of one or more preceding processes and to view the preceding processes as a kind of store.
Kanban aligns inventory levels with actual consumption. A signal tells a supplier to produce and deliver a new shipment when a material is consumed. Thissignalis tracked through the replenishment cycle, bringing visibility to the supplier, consumer, and buyer.
Kanban uses the rate of demand to control the rate of production, passing demand from the end customer up through the chain of customer-store processes. In 1953, Toyota applied this logic in their main plant machine shop.[15]
A key indicator of the success of production scheduling based on demand forecasting, i.e.pushing, is the ability of the demand forecast to create such apushaccurately. Kanban, by contrast, is part of an approach where thepullcomes from demand and products aremade to order. Re-supply or production is determined according to customer orders.
In contexts where supply time is lengthy and demand is difficult to forecast, often the best one can do is to respond quickly to observed demand. This situation is exactly what a kanban system accomplishes, in that it is used as a demand signal that immediately travels through the supply chain. This ensures that intermediate stock held in the supply chain are better managed, and are usually smaller. Where the supply response is not quick enough to meet actual demand fluctuations, thereby causing potential lost sales, a stock building may be deemed more appropriate and is achieved by placing more kanban in the system.
Taiichi Ohno stated that to be effective, kanban must follow strict rules of use.[16]Toyota, for example, has six simple rules, and close monitoring of these rules is a never-ending task, thereby ensuring that the kanban does what is required.
Toyotahas formulated six rules for the application of kanban:[17]
Kanban cards are a key component of kanban and they signal the need to move materials within a production facility or to move materials from an outside supplier into the production facility. The kanban card is, in effect, a message that signals a depletion of product, parts, or inventory. When received, the kanban triggers replenishment of that product, part, or inventory. Consumption, therefore, drives demand for more production, and the kanban card signals demand for more product—so kanban cards help create a demand-driven system.
It is widely held[citation needed]by proponents oflean productionand manufacturing that demand-driven systems lead to faster turnarounds in production and lower inventory levels, helping companies implementing such systems be more competitive.
In the last few years, systems sending kanban signals electronically have become more widespread. While this trend is leading to a reduction in the use of kanban cards in aggregate, it is still common in modern lean production facilities to find kanban cards in use. In various software systems, kanban is used for signalling demand to suppliers through email notifications. When stock of a particular component is depleted by the quantity assigned on its kanban card, a "kanban trigger" is created (which may be manual or automatic), a purchase order is released with the predefined quantity for the supplier defined on the card, and the supplier is expected to dispatch material within a specified lead time.[18]
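The trigger logic just described is simple to sketch. The following Python fragment "releases" (prints) a purchase order once cumulative consumption reaches the quantity assigned on the card; the card fields, part names, and quantities are illustrative assumptions, not a real e-kanban API:

```python
# Minimal e-kanban trigger sketch: when consumption since the last order
# reaches the card quantity, release a replenishment order to the supplier.
# All names and numbers here are illustrative.

card = {"part": "M6 bolt", "qty": 500, "supplier": "Acme", "lead_days": 3}
consumed_since_order = 0

def consume(qty):
    """Record consumption and fire the kanban trigger when due."""
    global consumed_since_order
    consumed_since_order += qty
    if consumed_since_order >= card["qty"]:          # kanban trigger
        consumed_since_order -= card["qty"]
        print(f"Release PO: {card['qty']} x {card['part']} from "
              f"{card['supplier']}, expected within {card['lead_days']} days")

consume(300)   # no trigger yet
consume(250)   # cumulative 550 >= 500: purchase order released
```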
Kanban cards, in keeping with the principles of kanban, simply convey the need for more materials. A red card lying in an empty parts cart conveys that more parts are needed.
An example of a simple kanban system implementation is a "three-bin system" for the supplied parts, where there is no in-house manufacturing.[19]One bin is on the factory floor (the initial demand point), one bin is in the factory store (the inventory control point), and one bin is at the supplier. The bins usually have a removable card containing the product details and other relevant information, the classic kanban card.
When the bin on the factory floor is empty (because the parts in it were used up in a manufacturing process), the empty bin and its kanban card are returned to the factory store (the inventory control point). The factory store replaces the empty bin on the factory floor with the full bin from the factory store, which also contains a kanban card. The factory store sends the empty bin with its kanban card to the supplier. The supplier's full product bin, with its kanban card, is delivered to the factory store; the supplier keeps the empty bin. This is the final step in the process. Thus, the process never runs out of product—and could be described as a closed loop, in that it provides the exact amount required, with only one spare bin so there is never oversupply. This 'spare' bin allows for uncertainties in supply, use, and transport in the inventory system. A good kanban system calculates just enough kanban cards for each product. Most factories that use kanban use the colored board system (heijunka box).
Many manufacturers have implementedelectronic kanban(sometimes referred to as e-kanban[20]) systems.[21]These help to eliminate common problems such as manual entry errors and lost cards.[22]E-kanban systems can be integrated intoenterprise resource planning(ERP) systems, enabling real-time demand signaling across the supply chain and improved visibility. Data pulled from e-kanban systems can be used to optimize inventory levels by better tracking supplier lead and replenishment times.[23]
E-kanban is a signaling system that uses a mix of technology to trigger the movement of materials within a manufacturing or production facility. Electronic kanban differs from traditional kanban in using technology to replace traditional elements like kanban cards withbarcodesand electronic messages like email orelectronic data interchange.
A typical electronic kanban system marks inventory with barcodes, which workers scan at various stages of the manufacturing process to signal usage. The scans relay messages to internal/external stores to ensure the restocking of products. Electronic kanban often uses the Internet as a method of routing messages to external suppliers[24]and as a means to allow a real-time view of inventory, via a portal, throughout the supply chain.
Organizations like theFord Motor Company[25]andBombardier Aerospacehave used electronic kanban systems to improve processes. Systems are now widespread from single solutions or bolt on modules toERP systems.
In a kanban system, adjacent upstream and downstream workstations communicate with each other through their cards, where each container has a kanban associated with it.Economic order quantityis important. The two most important types of kanban are the production kanban (P-kanban), which signals the need to produce more parts, and the transportation or withdrawal kanban (T-kanban), which signals the need to move parts to the next station.
The Kanban philosophy and task boards are also used inagile project managementto coordinate tasks in project teams.[26]An online demonstration can be seen in anagilesimulator.[27]
Implementation of kanban can be described in the following manner:[28]
A third type involvescorporate training. Following the just-in-time principle, computer-based training permits those who need to learn a skill to do so when the need arises, rather than take courses and lose the effectiveness of what they've learned from lack of practice.[29][30]
|
https://en.wikipedia.org/wiki/Kanban
|
Manufacturing process management(MPM) is a collection of technologies and methods used to define how products are to be manufactured. MPM differs fromERP/MRPwhich is used to plan the ordering of materials and other resources, set manufacturing schedules, and compile cost data.[1]
A cornerstone of MPM is a central repository that integrates all of these tools and activities, aiding in the exploration of alternativeproduction linescenarios. The aim is to makeassembly linesmore efficient, reducing lead time to product launch, shortening production times, and reducingwork in progress(WIP) inventories, as well as allowing rapid response to product or process changes.
|
https://en.wikipedia.org/wiki/Manufacturing_process_management
|
Single-machine schedulingorsingle-resource schedulingis anoptimization problemincomputer scienceandoperations research. We are givennjobsJ1,J2, ...,Jnof varying processing times, which need to be scheduled on a single machine, in a way that optimizes a certain objective, such as thethroughput.
Single-machine scheduling is a special case ofidentical-machines scheduling, which is itself a special case ofoptimal job scheduling. Many problems, which are NP-hard in general, can be solved in polynomial time in the single-machine case.[1]: 10–20
In the standardthree-field notation for optimal job scheduling problems, the single-machine variant is denoted by1in the first field. For example, "1||∑Cj" is a single-machine scheduling problem with no constraints, where the goal is to minimize the sum of completion times.
The makespan-minimization problem1||Cmax, which is a common objective with multiple machines, is trivial with a single machine, since the makespan always equals the sum of the processing times regardless of the order. Therefore, other objectives have been studied.[2]
The problem1||∑Cjaims to minimize the sum of completion times. It can be solved optimally by the Shortest Processing Time First rule (SPT): the jobs are scheduled by ascending order of their processing timepj.
The problem1||∑wjCj{\displaystyle \sum w_{j}C_{j}}aims to minimize theweightedsum of completion times. It can be solved optimally by the Weighted Shortest Processing Time First rule (WSPT): the jobs are scheduled by ascending order of the ratiopj/wj{\displaystyle p_{j}/w_{j}}.[2]: lecture 1, part 2
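Both rules are one-line sorts. A minimal Python sketch (function names are ours, not from any standard library) implementing SPT and WSPT and evaluating the weighted objective:

```python
def spt(p):
    """Shortest Processing Time first: minimizes the sum of completion times."""
    return sorted(range(len(p)), key=lambda j: p[j])

def wspt(p, w):
    """Weighted SPT: schedule in ascending order of p_j / w_j."""
    return sorted(range(len(p)), key=lambda j: p[j] / w[j])

def weighted_completion(order, p, w):
    t = total = 0
    for j in order:
        t += p[j]           # completion time of job j
        total += w[j] * t
    return total

p, w = [3, 1, 2], [1, 1, 2]
order = wspt(p, w)
print(order, weighted_completion(order, p, w))   # [1, 2, 0] 13
```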
The problem 1|chains|∑wjCj is a generalization of the above problem for jobs with dependencies in the form of chains. It can also be solved optimally by a suitable generalization of WSPT.[2]: lecture 1, part 3
The problem 1||Lmax aims to minimize the maximum lateness. For each job j, there is a due date dj. If it is completed after its due date, it suffers a lateness defined as Lj := Cj − dj. 1||Lmax can be solved optimally by the Earliest Due Date First rule (EDD): the jobs are scheduled in ascending order of their due date dj.[2]: lecture 2, part 2
The problem 1|prec|hmax generalizes 1||Lmax in two ways: first, it allows arbitrary precedence constraints on the jobs; second, it allows each job to have an arbitrary cost function hj, which is a function of its completion time (lateness is a special case of a cost function). The maximum cost can be minimized by a greedy algorithm known as Lawler's algorithm.[2]: lecture 2, part 1
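A compact sketch of Lawler's rule, assuming each cost function hj is supplied as a Python callable: the schedule is built from the back, always placing last an eligible job (one with no unscheduled successors) whose cost at the current completion time is smallest.

```python
def lawler(p, succ, h):
    """p[j]: processing time; succ[j]: successors of j; h[j]: cost callable."""
    remaining = set(range(len(p)))
    T = sum(p)                         # completion time of whatever goes last
    schedule = []
    while remaining:
        eligible = [j for j in remaining
                    if all(s not in remaining for s in succ[j])]
        j = min(eligible, key=lambda k: h[k](T))   # cheapest cost at time T
        schedule.append(j)
        remaining.remove(j)
        T -= p[j]
    return schedule[::-1]              # reverse: it was built back-to-front

p = [2, 3, 4]
succ = [[2], [], []]                   # job 0 must precede job 2
h = [lambda t: t, lambda t: 2 * t, lambda t: t - 5]   # lateness-like costs
print(lawler(p, succ, h))              # -> [1, 0, 2]
```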
The problem 1|rj|Lmax generalizes 1||Lmax by allowing each job to have a different release time, at which it becomes available for processing. The presence of release times means that, in some cases, it may be optimal to leave the machine idle in order to wait for an important job that is not released yet. Minimizing maximum lateness in this setting is NP-hard. But in practice, it can be solved using a branch-and-bound algorithm.[2]: lecture 2, part 3
In settings with deadlines, it is possible that, if the job is completed by the deadline, there is a profit pj. Otherwise, there is no profit. The goal is to maximize the profit. Single-machine scheduling with deadlines is NP-hard; Sahni[3] presents both exact exponential-time algorithms and a polynomial-time approximation algorithm.
The problem 1||∑Uj aims to minimize the number of late jobs, regardless of the amount of lateness. It can be solved optimally by the Hodgson-Moore algorithm.[4][2]: lecture 3, part 1 It can also be interpreted as maximizing the number of jobs that complete on time; this number is called the throughput.
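A short sketch of the Hodgson-Moore algorithm (jobs given as (processing time, due date) pairs; variable names are ours): scan the jobs in EDD order and, whenever the job just added would be late, drop the longest job scheduled so far.

```python
import heapq

def moore_hodgson(jobs):
    """jobs: list of (p_j, d_j). Returns the number of on-time jobs."""
    on_time, t = [], 0                        # max-heap of processing times
    for p, d in sorted(jobs, key=lambda jd: jd[1]):   # EDD order
        heapq.heappush(on_time, -p)
        t += p
        if t > d:                             # the new job would be late:
            t += heapq.heappop(on_time)       # drop the longest job (value is -p)
    return len(on_time)

print(moore_hodgson([(2, 3), (4, 5), (3, 7)]))   # -> 2 on-time, 1 late
```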
The problem 1||∑wjUj aims to minimize the total weight of late jobs. It is NP-hard, since the special case in which all jobs have the same deadline (denoted by 1|dj=d|∑wjUj) is equivalent to the knapsack problem.[2]: lecture 3, part 2
The problem 1|rj|∑Uj generalizes 1||∑Uj by allowing different jobs to have different release times. The problem is NP-hard. However, when all job lengths are equal, the problem can be solved in polynomial time. It has several variants:
Jobs can have execution intervals. For each job j, there is a processing time tj and a start time sj, so it must be executed in the interval [sj, sj+tj]. Since some of the intervals overlap, not all jobs can be completed. The goal is to maximize the number of completed jobs, that is, the throughput. More generally, each job may have several possible intervals, and each interval may be associated with a different profit. The goal is to choose at most one interval for each job, such that the total profit is maximized. For more details, see the page on interval scheduling.
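For the fixed-interval case with unit profits, the classic earliest-finish-time greedy is optimal; a small Python sketch (names are ours):

```python
def max_throughput(intervals):
    """intervals: list of (s_j, s_j + t_j). Maximize number of completed jobs."""
    count, free_at = 0, float("-inf")
    for s, e in sorted(intervals, key=lambda iv: iv[1]):   # by finish time
        if s >= free_at:         # machine is idle when this job must start
            count += 1
            free_at = e
    return count

print(max_throughput([(0, 3), (2, 4), (4, 6)]))   # -> 2
```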
More generally, jobs can have time windows, with both start times and deadlines, which may be larger than the job length. Each job can be scheduled anywhere within its time window. Bar-Noy, Bar-Yehuda, Freund, Naor and Schieber[10] present a (1−ε)/2 approximation.
Workers and machines often become tired after working for a certain amount of time, and this makes them slower when processing future jobs. On the other hand, workers and machines may learn how to work better, and this makes them faster when processing future jobs. In both cases, the length (processing-time) of a job is not constant, but depends on the jobs processed before it. In this setting, even minimizing the maximum completion time becomes non-trivial. There are two common ways to model the change in job length.
Cheng and Ding studied makespan minimization and maximum-lateness minimization when the actual length of job j scheduled at time sj is given by p̂j(sj) = pj − b·sj, where pj is the normal length of j.
They proved the following results:
Kubiak and van de Velde[16] studied makespan minimization when the fatigue starts only after a common due date d. That is, the actual length of job j scheduled at time sj is given by p̂j(sj) = max(pj, pj + bj·(sj − d)).
So, if the job starts before d, its length does not change; if it starts after d, its length grows by a job-dependent rate. They show that the problem is NP-hard, give a pseudopolynomial algorithm that runs in time O(n·d·∑j pj), and give a branch-and-bound algorithm that solves instances with up to 100 jobs in reasonable time. They also study bounded deterioration, where pj stops growing if the job starts after a common maximum deterioration date D > d. For this case, they give two pseudopolynomial-time algorithms.
Cheng, Ding and Lin[11] surveyed several studies of a deterioration effect, where the length of job j scheduled at time sj is either linear or piecewise linear, and the change rate can be positive or negative.
The aging effect has two types:
Wang, Wang, Wang and Wang[19] studied a sum-of-processing-time-based aging model, where the processing time of job j scheduled at position v is given by
p̂j(π, v) = pj · (1 + ∑l=1..v−1 pπ(l))^α,
where π(l) is the job scheduled at position l, and α is the "aging characteristic" of the machine. In this model, the total processing time of the permutation π is ∑l=1..n p̂π(l)(π, l).
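A small helper (ours, for illustration) evaluates this model: each job's actual length is computed from the normal lengths of the jobs scheduled before it, and the actual lengths are summed over a given permutation.

```python
def total_time_under_aging(p, pi, alpha):
    """p: normal lengths; pi: permutation of job indices; alpha: aging factor."""
    done = total = 0.0
    for j in pi:
        total += p[j] * (1.0 + done) ** alpha  # actual length at this position
        done += p[j]                           # fatigue accumulates normal lengths
    return total

p = [2.0, 3.0, 1.0]
print(total_time_under_aging(p, [2, 0, 1], alpha=0.5))
```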
Rudek[20] generalized the model in two ways: allowing the fatigue to be different from the processing time, and allowing a job-dependent aging characteristic:
p̂j(π, v) = pj · (1 + ∑l=1..v−1 f(pπ(l)))^αj.
Here, f is an increasing function that describes the dependence of the fatigue on the processing time, and αj is the aging characteristic of job j. For this model, he proved the following results:
Many solution techniques have been applied to solving single-machine scheduling problems. Some of them are listed below.
|
https://en.wikipedia.org/wiki/Single-machine_scheduling
|
In project management, a schedule is a listing of a project's milestones, activities, and deliverables. Usually dependencies and resources are defined for each task, then start and finish dates are estimated from the resource allocation, budget, task duration, and scheduled events. A schedule is commonly used in the project planning and project portfolio management parts of project management. Elements on a schedule may be closely related to the work breakdown structure (WBS) terminal elements, the Statement of work, or a Contract Data Requirements List.
In many industries, such as engineering and construction, the development and maintenance of the project schedule is the responsibility of a full-time scheduler or team of schedulers, depending on the size and scope of the project. The techniques of scheduling are well developed[1] but inconsistently applied throughout industry. Standardization and promotion of scheduling best practices are being pursued by the Association for the Advancement of Cost Engineering (AACE), the Project Management Institute (PMI),[2] and the US Government for acquisition[3] and accounting[4] purposes.
Project management is not limited to industry; the average person can use it to organize their own life. Some examples are:
Some project management software programs provide templates, lists, and example schedules to help their users with creating their schedule.
The project schedule is a calendar that links the tasks to be done with the resources that will do them. It is the core of the project plan, used to show the organization how the work will be done, commit people to the project, determine resource needs, and serve as a kind of checklist to make sure that every necessary task is performed. Before a project schedule can be created, the schedule maker should have a work breakdown structure (WBS), an effort estimate for each task, and a resource list with availability for each resource. If these components for the schedule are not available, they can be created with a consensus-driven estimation method like Wideband Delphi.[5]
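As a minimal illustration of turning such inputs into dates, the sketch below (task names and durations are invented; it assumes the dependency graph is acyclic) performs a forward pass to compute earliest start and finish times in working days:

```python
def forward_pass(duration, deps):
    """Earliest start/finish for each task, given durations and dependencies."""
    start, finish = {}, {}
    def visit(t):
        if t not in finish:
            start[t] = max((visit(d) for d in deps[t]), default=0)
            finish[t] = start[t] + duration[t]
        return finish[t]
    for t in duration:
        visit(t)
    return start, finish

duration = {"design": 5, "build": 10, "test": 4}
deps = {"design": [], "build": ["design"], "test": ["build"]}
print(forward_pass(duration, deps))
```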
To develop a project schedule, the following needs to be completed:[6]
In order for a project schedule to be healthy, the following criteria must be met:[7]
The schedule structure may closely follow and include citations to the index of work breakdown structure or deliverables, using decomposition or templates to describe the activities needed to produce the deliverables defined in the WBS.[8]
A schedule may be assessed for the quality of the schedule development and the quality of the schedule management.[9][10][11]
|
https://en.wikipedia.org/wiki/Schedule_(project_management)
|
Binary Modular Dataflow Machine (BMDFM) is a software package that enables running an application in parallel on shared-memory symmetric multiprocessing (SMP) computers, using the multiple processors to speed up the execution of single applications. BMDFM automatically identifies and exploits parallelism due to the static and mainly dynamic scheduling of the dataflow instruction sequences derived from the formerly sequential program.
The BMDFM dynamic scheduling subsystem performs a symmetric multiprocessing (SMP) emulation of a tagged-token dataflow machine to provide transparent dataflow semantics for applications. No directives for parallel execution are needed.
Current parallel shared-memory SMPs are complex machines, where a large number of architectural aspects must be addressed simultaneously to achieve high performance. Recent commodity SMP machines for technical computing can have many tightly coupled cores (good examples are SMP machines based on multi-core processors from Intel (Core or Xeon) or IBM (Power)). The number of cores per SMP node is planned to double every few years according to computer makers' announcements.
Multi-core processors are intended to exploit thread-level parallelism identified by software. Hence, the most challenging task is to find an efficient way to harness the power of multi-core processors to process an application program in parallel. The existing OpenMP paradigm of static parallelization with a fork-join runtime library works well only for loop-intensive, regular, array-based computations; compile-time parallelization methods are weak in general and almost inapplicable for irregular applications:
The BMDFM technology mainly uses dynamic scheduling to exploit the parallelism of an application program; thus, BMDFM avoids the mentioned disadvantages of compile-time methods.[1][2] BMDFM is a parallel programming environment for multi-core SMP that provides:
BMDFM combines the advantages of known architectural principles into a single hybrid architecture that is able to exploit the implicit parallelism of applications with negligible dynamic scheduling overhead and no bottlenecks. Mainly, the basic dataflow principle is used. The dataflow principle says: "An instruction or a function can be executed as soon as all its arguments are ready. A dataflow machine manages the tags for every piece of data at runtime. Data is marked with a ready tag when it has been computed. Instructions with ready arguments get executed, marking their result data ready."
The main feature of BMDFM is that it provides a conventional programming paradigm at the top level, the so-called transparent dataflow semantics. A user understands BMDFM as a virtual machine (VM) that runs all statements of an application program in parallel, with all parallelizing and synchronizing mechanisms fully transparent. The statements of an application program are the normal operators of which any single-threaded program might consist: variable assignments, conditional processing, loops, function calls, etc.
Suppose we have the code fragment shown below:
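The original fragment is written in BMDFM's own VM language and is not reproduced here; the following Python stand-in (the functions foo0/foo1 and all values are invented) has the same dependency structure:

```python
def foo0(v): return v * 2     # arbitrary pure functions, for illustration
def foo1(v): return v + 10

a = foo0(3)                   # independent of the next statement
b = foo1(4)                   # independent of the previous statement
print("a =", a)               # needs a; may overlap with the next statement
print("b =", b)               # needs b
```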
The first two statements are independent, so a dataflow engine of BMDFM can run them on different processors or processor cores. The last two statements can also run in parallel, but only after "a" and "b" are computed. The dataflow engine recognizes dependencies automatically thanks to its ability to build a dataflow graph dynamically at runtime. Additionally, the dataflow engine correctly orders the output stream so that results are output sequentially. Thus, even after out-of-order processing, the results appear in the natural order.
Suppose that the above code fragment is now nested in a loop:
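Again as an invented Python stand-in for the original:

```python
def foo0(v): return v * 2     # same illustrative functions as above
def foo1(v): return v + 10

for i in range(4):
    a = foo0(i)               # each iteration gets its own context for a ...
    b = foo1(i)               # ... and for b, so iterations can overlap
    print("a =", a)
    print("b =", b)
```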
The dataflow engine of BMDFM keeps variables "a" and "b" under unique contexts for each iteration; these are effectively different copies of the variables. A context variable exists as long as it is referenced by instruction consumers; no-longer-referenced contexts are garbage collected at runtime. Therefore, the dataflow engine can exploit both local parallelism within an iteration and global parallelism, running multiple iterations simultaneously.
BMDFM is a convenient parallel programming environment and an efficient runtime engine for multi-core SMP due to the MIMD unification of several architectural paradigms (von Neumann, SMP and dataflow):
BMDFM is intended for use in the role of a parallel runtime engine (instead of a conventional fork-join runtime library) able to run irregular applications automatically in parallel. Due to the transparent dataflow semantics on top, BMDFM is a simple parallelization technique for application programmers and, at the same time, a much better parallel programming and compiling technology for multi-core SMP computers.
The basic concept of BMDFM relies on underlying commodity SMP hardware, which is available on the market. Normally, SMP vendors provide their own SMP operating system (OS) with an SVR4/POSIX UNIX interface (Linux, HP-UX, SunOS/Solaris, Tru64 OSF1, IRIX, AIX, BSD, MacOS, etc.). On top of an SMP OS, the multithreaded dataflow runtime engine performs a software emulation of the dataflow machine. Such a virtual machine has interfaces to the virtual machine language and to C, providing the transparent dataflow semantics for conventional programming.
BMDFM is built as a hybrid of several architectural principles:
An application program (the input sequential program) is processed in three stages: preliminary code reorganization (code reorganizer), static scheduling of the statements (static scheduler) and compiling/loading (compiler, loader). The output after the static scheduling stage is a multiple-clusters flow that feeds the multithreaded engine via an interface designed to avoid bottlenecks. The multiple-clusters flow can be thought of as a compiled input program split into marshaled clusters, in which all addresses are resolved and extended with context information. Splitting into marshaled clusters allows them to be loaded multithreadedly. Context information lets iterations be processed in parallel. A listener thread orders the output stream after the out-of-order processing.
The BMDFM dynamic scheduling subsystem is an efficient SMP emulator of the tagged-token dataflow machine. The shared memory pool is divided into three main parts: the input/output ring buffer port (IORBP), the data buffer (DB), and the operation queue (OQ). The front-end control virtual machine schedules an input application program statically and puts clustered instructions and data of the input program into the IORBP. The ring buffer service processes (IORBP PROC) move data into the DB and instructions into the OQ. The operation queue service processes (OQ PROC) tag instructions as ready for execution if the required operands' data is accessible. Execution processes (CPU PROC) execute instructions that are tagged as ready and output computed data into the DB or to the IORBP. Additionally, IORBP PROC and OQ PROC are responsible for freeing memory after contexts have been processed. A context is a special unique identifier representing a copy of data within different iteration bodies, in accordance with the tagged-token dataflow architecture. This allows the dynamic scheduler to handle several iterations in parallel.
Running under an SMP OS, the processes occupy all available real machine processors and processor cores. In order to allow several processes to access the same data concurrently, the BMDFM dynamic scheduler locks objects in the shared memory pool via SVR4/POSIX semaphore operations. The locking policy provides multiple read-only access and exclusive access for modification.
Every machine supporting ANSI C and POSIX/UNIX System V (SVR4) may run BMDFM.
BMDFM is provided as full multi-threaded versions for:
|
https://en.wikipedia.org/wiki/Binary_Modular_Dataflow_Machine
|
Cellular multiprocessing is a multiprocessing computing architecture designed initially for Intel central processing units from Unisys, a worldwide information technology consulting company.
It consists of the partitioning of processors into separate computing environments running different operating systems. Providing up to 32 processors that are crossbar-connected to 64 GB of memory and 96 PCI cards, a CMP system provides a mainframe-like architecture using Intel CPUs. CMP supports Windows NT and Windows 2000 Server, AIX, Novell NetWare and UnixWare, and can be run as one large SMP system or as multiple systems with variant operating systems.
CMP supports the concept of CPU partitions: for example, one can create a single partition of all 32 processors, or two partitions of 16 processors each, which the installed operating systems see as two separate machines. At maximum, 32 single-CPU partitions are possible. Unisys' CMP is the only server architecture to take full advantage of Microsoft's Windows 2000 Datacenter Server operating system's support for 32 processors.[1]
Under Linux/UNIX operating systems the CMP technology performs very well, whereas Windows 2003 Server installations impose certain limits on the number of CPUs per partition: in practice a partition should have at most 4 CPUs, as assigning more leads to severe performance degradation. Even on an 8-CPU partition, performance is comparable to that of a 2-processor partition.
A CMP subpod contains four x86 or Itanium CPUs, which connect through a third-level memory cache to the crossbar. Each crossbar supports two subpods and two direct I/O bridges (DIBs), and can connect to four memory storage units (MSUs).[2][full citation needed]
Unisys is also providing CMP server technology to Compaq, Dell, Hewlett-Packard and ICL under OEM agreements.[1]
|
https://en.wikipedia.org/wiki/Cellular_multiprocessing
|
In computer architecture, a locale is an abstraction of the concept of a localized set of hardware resources which are close enough to enjoy uniform memory access.[1]
For instance, on a computer cluster, each node may be considered a locale, given that there is one instance of the operating system and uniform access to memory for processes running on that node. Similarly, on an SMP system, each node may be defined as a locale. Parallel programming languages such as Chapel have specific constructs for declaring locales.[2]
|
https://en.wikipedia.org/wiki/Locale_(computer_hardware)
|
Massively parallel is the term for using a large number of computer processors (or separate computers) to simultaneously perform a set of coordinated computations in parallel. GPUs are massively parallel architectures with tens of thousands of threads.
One approach is grid computing, where the processing power of many computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available.[1] An example is BOINC, a volunteer-based, opportunistic grid system, whereby the grid provides power only on a best-effort basis.[2]
Another approach is grouping many processors in close proximity to each other, as in a computer cluster. In such a centralized system the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects.[3]
The term also applies to massively parallel processor arrays (MPPAs), a type of integrated circuit with an array of hundreds or thousands of central processing units (CPUs) and random-access memory (RAM) banks. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing many processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips.[citation needed] MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.
Goodyear MPP was an early implementation of a massively parallel computer architecture. MPP architectures are the second most common supercomputer implementations after clusters, as of November 2013.[4]
Data warehouse appliances such as Teradata, Netezza or Microsoft's PDW commonly implement an MPP architecture to handle the processing of very large amounts of data in parallel.
|
https://en.wikipedia.org/wiki/Massively_parallel
|
Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution to better use the resources provided by modern processor architectures.
The term multithreading is ambiguous, because not only can multiple threads be executed simultaneously on one CPU core, but also multiple tasks (with different page tables, different task state segments, different protection rings, different I/O permissions, etc.). Although running on the same core, they are completely separated from each other.
Multithreading is similar in concept to preemptive multitasking but is implemented at the thread level of execution in modern superscalar processors.
Simultaneous multithreading (SMT) is one of the two main implementations of multithreading, the other form being temporal multithreading (also known as super-threading). In temporal multithreading, only one thread of instructions can execute in any given pipeline stage at a time. In simultaneous multithreading, instructions from more than one thread can be executed in any given pipeline stage at a time. This is done without great changes to the basic processor architecture: the main additions needed are the ability to fetch instructions from multiple threads in a cycle, and a larger register file to hold data from multiple threads. The number of concurrent threads is decided by the chip designers. Two concurrent threads per CPU core are common, but some processors support many more.[1]
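On a running system, the ratio of logical to physical processors reveals the SMT width. A small sketch, assuming the third-party psutil package is installed:

```python
import psutil

logical = psutil.cpu_count(logical=True)     # hardware threads (SMT siblings)
physical = psutil.cpu_count(logical=False)   # physical cores
if logical and physical:
    print(f"{logical // physical} hardware thread(s) per core")
```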
Because it inevitably increases conflict over shared resources, measuring or agreeing on its effectiveness can be difficult. However, measured energy efficiency of SMT with parallel native and managed workloads on historical 130 nm to 32 nm Intel SMT (Hyper-Threading) implementations found that in 45 nm and 32 nm implementations, SMT is extremely energy efficient, even with in-order Atom processors.[2] In modern systems, SMT effectively exploits concurrency with very little additional dynamic power. That is, even when performance gains are minimal, the power consumption savings can be considerable.[2] Some researchers[who?] have shown that the extra threads can be used proactively to seed a shared resource like a cache, to improve the performance of another single thread, and claim this shows that SMT is not only about increasing efficiency. Others[who?] use SMT to provide redundant computation, for some level of error detection and recovery.
However, in most current cases, SMT is about hiding memory latency, increasing efficiency, and increasing throughput of computations per amount of hardware used.[citation needed]
In processor design, there are two ways to increase on-chip parallelism with fewer resource requirements: one is the superscalar technique, which tries to exploit instruction-level parallelism (ILP); the other is the multithreading approach, which exploits thread-level parallelism (TLP).
Superscalar means executing multiple instructions at the same time, while thread-level parallelism (TLP) executes instructions from multiple threads within one processor chip at the same time. There are many ways to support more than one thread within a chip, namely:
The key factor distinguishing them is how many instructions the processor can issue in one cycle and how many threads the instructions come from. For example, Sun Microsystems' UltraSPARC T1 is a multicore processor combined with a fine-grain multithreading technique, rather than simultaneous multithreading, because each core can only issue one instruction at a time.
While multithreading CPUs have been around since the 1950s, simultaneous multithreading was first researched by IBM in 1968 as part of the ACS-360 project.[3] The first major commercial microprocessor developed with SMT was the Alpha 21464 (EV8). This microprocessor was developed by DEC in coordination with Dean Tullsen of the University of California, San Diego, and Susan Eggers and Henry Levy of the University of Washington. The microprocessor was never released, since the Alpha line of microprocessors was discontinued shortly before HP acquired Compaq, which had in turn acquired DEC. Dean Tullsen's work was also used to develop the hyper-threaded versions of the Intel Pentium 4 microprocessors, such as the "Northwood" and "Prescott".
The Intel Pentium 4 was the first modern desktop processor to implement simultaneous multithreading, starting from the 3.06 GHz model released in 2002, and since introduced into a number of their processors. Intel calls the functionality Hyper-Threading Technology, and provides a basic two-thread SMT engine. Intel claims up to a 30% speed improvement[4] compared against an otherwise identical, non-SMT Pentium 4. The performance improvement seen is very application-dependent; however, when running two programs that require full attention of the processor, it can actually seem like one or both of the programs slows down slightly when Hyper-Threading is turned on.[5] This is due to the replay system of the Pentium 4 tying up valuable execution resources, increasing contention for resources such as bandwidth, caches, TLBs, and re-order buffer entries, and equalizing the processor resources between the two programs, which adds a varying amount of execution time. The Pentium 4 Prescott core gained a replay queue, which reduces the execution time needed for the replay system. This was enough to completely overcome that performance hit.[6]
The latest Imagination Technologies MIPS architecture designs include an SMT system known as "MIPS MT".[7] MIPS MT provides for both heavyweight virtual processing elements and lighter-weight hardware microthreads. RMI, a Cupertino-based startup, is the first MIPS vendor to provide a processor SOC based on eight cores, each of which runs four threads. The threads can be run in fine-grain mode, where a different thread can be executed each cycle. The threads can also be assigned priorities. Imagination Technologies MIPS CPUs have two SMT threads per core.
IBM's Blue Gene/Q has 4-way SMT.
The IBM POWER5, announced in May 2004, comes as either a dual-core dual-chip module (DCM) or a quad-core or oct-core multi-chip module (MCM), with each core including a two-thread SMT engine. IBM's implementation is more sophisticated than the previous ones, because it can assign a different priority to the various threads, is more fine-grained, and the SMT engine can be turned on and off dynamically, to better execute those workloads where an SMT processor would not increase performance. This is IBM's second implementation of generally available hardware multithreading. In 2010, IBM released systems based on the POWER7 processor with eight cores, each having four Simultaneous Intelligent Threads. This switches the threading mode between one thread, two threads or four threads depending on the number of process threads being scheduled at the time. This optimizes the use of the core for minimum response time or maximum throughput. IBM POWER8 has 8 intelligent simultaneous threads per core (SMT8).
IBM Z, starting with the z13 processor in 2013, has two threads per core (SMT-2).
Although many people reported that Sun Microsystems' UltraSPARC T1 (known as "Niagara" until its 14 November 2005 release) and the now defunct processor codenamed "Rock" (originally announced in 2005, but after many delays cancelled in 2010) are implementations of SPARC focused almost entirely on exploiting SMT and CMP techniques, Niagara does not actually use SMT. Sun refers to these combined approaches as "CMT", and the overall concept as "Throughput Computing". The Niagara has eight cores, but each core has only one pipeline, so it actually uses fine-grained multithreading. Unlike SMT, where instructions from multiple threads share the issue window each cycle, the processor uses a round-robin policy to issue instructions from the next active thread each cycle. This makes it more similar to a barrel processor. Sun Microsystems' Rock processor is different: it has more complex cores that have more than one pipeline.
The Oracle Corporation SPARC T3 has eight fine-grained threads per core; the SPARC T4, SPARC T5, SPARC M5, M6 and M7 have eight fine-grained threads per core, of which two can be executed simultaneously.
Fujitsu SPARC64 VI has coarse-grained vertical multithreading (VMT); SPARC64 VII and newer have 2-way SMT.
Intel Itanium Montecito used coarse-grained multithreading; Tukwila and newer use 2-way SMT (with dual-domain multithreading).
Intel Xeon Phi has 4-way SMT (with time-multiplexed multithreading) with hardware-based threads which cannot be disabled, unlike regular Hyper-Threading.[8] The Intel Atom, first released in 2008, is the first Intel product to feature 2-way SMT (marketed as Hyper-Threading) without supporting instruction reordering, speculative execution, or register renaming. Intel reintroduced Hyper-Threading with the Nehalem microarchitecture, after its absence on the Core microarchitecture.
In the AMD Bulldozer microarchitecture, the FlexFPU and shared L2 cache are multithreaded, but the integer cores in each module are single-threaded, so it is only a partial SMT implementation.[9][10]
The AMD Zen microarchitecture has 2-way SMT.
The VISC architecture[11][12][13][14] uses the Virtual Software Layer (translation layer) to dispatch a single thread of instructions to the Global Front End, which splits instructions into virtual hardware threadlets which are then dispatched to separate virtual cores. These virtual cores can then send them to the available resources on any of the physical cores. Multiple virtual cores can push threadlets into the reorder buffer of a single physical core, which can split partial instructions and data from multiple threadlets through the execution ports at the same time. Each virtual core keeps track of the position of the relative output. This form of multithreading can increase single-threaded performance by allowing a single thread to use all resources of the CPU. The allocation of resources is dynamic on a near-single-cycle latency level (1–4 cycles, depending on the change in allocation and individual application needs). Therefore, if two virtual cores compete for resources, there are appropriate algorithms in place to determine what resources are to be allocated where.
Depending on the design and architecture of the processor, simultaneous multithreading can decrease performance if any of the shared resources are bottlenecks for performance.[15] Critics argue that it is a considerable burden to put on software developers that they have to test whether simultaneous multithreading is good or bad for their application in various situations and insert extra logic to turn it off if it decreases performance. Current operating systems lack convenient API calls for this purpose and for preventing processes with different priority from taking resources from each other.[16]
There is also a security concern with certain simultaneous multithreading implementations. Intel's Hyper-Threading in NetBurst-based processors has a vulnerability through which it is possible for one application to steal a cryptographic key from another application running in the same processor by monitoring its cache use.[17] There are also sophisticated machine-learning exploits against HT implementations that were explained at Black Hat 2018.[18]
|
https://en.wikipedia.org/wiki/Simultaneous_multithreading
|
Xeon Phi[3] is a discontinued series of x86 manycore processors designed and made by Intel. It was intended for use in supercomputers, servers, and high-end workstations. Its architecture allowed use of standard programming languages and application programming interfaces (APIs) such as OpenMP.[4][5]
Xeon Phi launched in 2010. Since it was originally based on an earlier GPU design (codenamed "Larrabee") by Intel[6] that was cancelled in 2009,[7] it shared application areas with GPUs. The main difference between Xeon Phi and a GPGPU like Nvidia Tesla was that Xeon Phi, with an x86-compatible core, could, with less modification, run software that was originally targeted at a standard x86 CPU.
Initially in the form of PCI Express-based add-on cards, a second-generation product, codenamed Knights Landing, was announced in June 2013.[8] These second-generation chips could be used as a standalone CPU, rather than just as an add-in card.
In June 2013, the Tianhe-2 supercomputer at the National Supercomputer Center in Guangzhou (NSCC-GZ) was announced[9] as the world's fastest supercomputer (as of June 2023, it is No. 10[10]). It used Intel Xeon Phi coprocessors and Ivy Bridge-EP Xeon E5 v2 processors to achieve 33.86 petaFLOPS.[11]
The Xeon Phi product line competed directly with Nvidia's Tesla and AMD's Radeon Instinct lines of deep learning and GPGPU cards. It was discontinued due to a lack of demand and Intel's problems with its 10 nm node.[12]
The Larrabee microarchitecture (in development since 2006[14]) introduced very wide (512-bit) SIMD units to an x86-architecture-based processor design, extended to a cache-coherent multiprocessor system connected via a ring bus to memory; each core was capable of four-way multithreading. Because the design was intended for GPU as well as general-purpose computing, the Larrabee chips also included specialised hardware for texture sampling.[15][16] The project to produce a retail GPU product directly from the Larrabee research project was terminated in May 2010.[17]
Another contemporary Intel research project implementing x86 architecture on a many-multicore processor was the 'Single-chip Cloud Computer' (prototype introduced 2009[18]), a design mimicking a cloud-computing datacentre on a single chip with multiple independent cores; the prototype design included 48 cores per chip with hardware support for selective frequency and voltage control of cores to maximize energy efficiency, and incorporated a mesh network for inter-chip messaging. The design lacked cache-coherent cores and focused on principles that would allow the design to scale to many more cores.[19]
The Teraflops Research Chip (prototype unveiled 2007[20]) is an experimental 80-core chip with two floating-point units per core, implementing a 96-bit VLIW architecture instead of the x86 architecture.[21] The project investigated intercore communication methods and per-chip power management, and achieved 1.01 TFLOPS at 3.16 GHz, consuming 62 W of power.[22][23]
Intel's Many Integrated Core (MIC) prototype board, named Knights Ferry and incorporating a processor codenamed Aubrey Isle, was announced on 31 May 2010. The product was stated to be a derivative of the Larrabee project and other Intel research, including the Single-chip Cloud Computer.[24][25]
The development product was offered as a PCIe card with 32 in-order cores at up to 1.2 GHz with four threads per core, 2 GB GDDR5 memory,[26] and 8 MB coherent L2 cache (256 KB per core with 32 KB L1 cache), with a power requirement of about 300 W,[26] built on a 45 nm process.[27] In the Aubrey Isle core, a 1,024-bit ring bus (512-bit bidirectional) connects processors to main memory.[28] Single-board performance has exceeded 750 GFLOPS.[27] The prototype boards only support single-precision floating-point instructions.[29]
Initial developers included CERN, the Korea Institute of Science and Technology Information (KISTI) and the Leibniz Supercomputing Centre. Hardware vendors for prototype boards included IBM, SGI, HP, Dell and others.[30]
The Knights Corner product line is made at a 22 nm process size, using Intel's tri-gate technology, with more than 50 cores per chip, and is Intel's first many-core commercial product.[24][27]
In June 2011, SGI announced a partnership with Intel to use the MIC architecture in its high-performance computing products.[31] In September 2011, it was announced that the Texas Advanced Computing Center (TACC) would use Knights Corner cards in its 10-petaFLOPS "Stampede" supercomputer, providing 8 petaFLOPS of compute power.[32] According to "Stampede: A Comprehensive Petascale Computing Environment", the "second-generation Intel (Knights Landing) MICs will be added when they become available, increasing Stampede's aggregate peak performance to at least 15 PetaFLOPS."[33]
On 15 November 2011, Intel showed an early silicon version of a Knights Corner processor.[34][35]
On 5 June 2012, Intel released open-source software and documentation regarding Knights Corner.[36]
On 18 June 2012, Intel announced at the 2012 Hamburg International Supercomputing Conference that Xeon Phi would be the brand name used for all products based on its Many Integrated Core architecture.[3][37][38][39][40][41][42] In June 2012, Cray announced it would be offering 22 nm 'Knights Corner' chips (branded as 'Xeon Phi') as a co-processor in its 'Cascade' systems.[43][44]
In June 2012, ScaleMP announced a virtualization update allowing Xeon Phi to be used as a transparent processor extension, allowing legacy MMX/SSE code to run without code changes.[45] An important component of the Intel Xeon Phi coprocessor's core is its vector processing unit (VPU).[46] The VPU features a novel 512-bit SIMD instruction set, officially known as Intel Initial Many Core Instructions (Intel IMCI). Thus, the VPU can execute 16 single-precision (SP) or 8 double-precision (DP) operations per cycle. The VPU also supports fused multiply-add (FMA) instructions and hence can execute 32 SP or 16 DP floating-point operations per cycle. It also provides support for integers.
The VPU also features an Extended Math Unit (EMU) that can execute operations such as reciprocal, square root, and logarithm, thereby allowing these operations to be executed in a vector fashion with high bandwidth. The EMU operates by calculating polynomial approximations of these functions.
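These per-cycle figures are enough to reproduce the headline numbers quoted below. A back-of-the-envelope check in Python, using the published Xeon Phi 5110P configuration (60 cores at 1.053 GHz) as the example:

```python
lanes_dp = 512 // 64               # double-precision lanes in a 512-bit vector
flops_per_cycle = lanes_dp * 2     # FMA counts as 2 FLOPs per lane -> 16 DP
cores, ghz = 60, 1.053             # Xeon Phi 5110P
print(f"{cores * ghz * flops_per_cycle:.1f} GFLOPS peak (DP)")  # ~1010.9
```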
On 12 November 2012, Intel announced two Xeon Phi coprocessor families using the 22 nm process size: the Xeon Phi 3100 and the Xeon Phi 5110P.[47][48][49] The Xeon Phi 3100 would be capable of more than 1 teraFLOPS of double-precision floating-point instructions with 240 GB/s memory bandwidth at 300 W.[47][48][49] The Xeon Phi 5110P would be capable of 1.01 teraFLOPS of double-precision floating-point instructions with 320 GB/s memory bandwidth at 225 W.[47][48][49] The Xeon Phi 7120P would be capable of 1.2 teraFLOPS of double-precision floating-point instructions with 352 GB/s memory bandwidth at 300 W.
On 17 June 2013, the Tianhe-2 supercomputer was announced[9] by TOP500 as the world's fastest. Tianhe-2 used Intel Ivy Bridge Xeon and Xeon Phi processors to achieve 33.86 petaFLOPS. It was the fastest on the list for two and a half years, the last time in November 2015.[50]
The cores of Knights Corner are based on a modified version of the P54C design, used in the original Pentium.[51] The basis of the Intel MIC architecture is to leverage x86 legacy by creating an x86-compatible multiprocessor architecture that can use existing parallelization software tools.[27] Programming tools include OpenMP,[52] OpenCL,[53] Cilk/Cilk Plus and specialised versions of Intel's Fortran, C++[54] and math libraries.[55]
Design elements inherited from the Larrabee project include the x86 ISA, 4-way SMT per core, 512-bit SIMD units, 32 KB L1 instruction cache, 32 KB L1 data cache, coherent L2 cache (512 KB per core[56]), and an ultra-wide ring bus connecting processors and memory.
The Knights Corner 512-bit SIMD instructions share many intrinsic functions with the AVX-512 extension. The instruction set documentation is available from Intel under the extension name KNC.[57][58][59][60]
Knights Landing is the code name for the second-generation MIC architecture product from Intel.[33] Intel first officially revealed details of its second-generation Intel Xeon Phi products on 17 June 2013.[11] Intel said that the next generation of Intel MIC architecture-based products would be available in two forms, as a coprocessor or a host processor (CPU), and be manufactured using Intel's 14 nm process technology. Knights Landing products would include integrated on-package memory for significantly higher memory bandwidth.
Knights Landing contains up to 72 Airmont (Atom) cores with four threads per core,[75][76] uses the LGA 3647 socket,[77] and supports up to 384 GB of "far" DDR4-2133 RAM and 8–16 GB of stacked "near" 3D MCDRAM, a version of the Hybrid Memory Cube. Each core has two 512-bit vector units and supports AVX-512 SIMD instructions, specifically the Intel AVX-512 Foundational Instructions (AVX-512F) with Intel AVX-512 Conflict Detection Instructions (AVX-512CD), Intel AVX-512 Exponential and Reciprocal Instructions (AVX-512ER), and Intel AVX-512 Prefetch Instructions (AVX-512PF). Support for IMCI has been removed in favor of AVX-512.[78]
The National Energy Research Scientific Computing Center announced that Phase 2 of its newest supercomputing system "Cori" would use Knights Landing Xeon Phi coprocessors.[79]
On 20 June 2016, Intel launched the Intel Xeon Phi product family x200 based on the Knights Landing architecture, stressing its applicability not just to traditional simulation workloads, but also to machine learning.[80][81] The model lineup announced at launch included only Xeon Phi of bootable form factor, but two versions of it: standard processors and processors with integrated Intel Omni-Path architecture fabric.[82] The latter is denoted by the suffix F in the model number. Integrated fabric is expected to provide better latency at a lower cost than discrete high-performance network cards.[80]
On 14 November 2016, the 48th TOP500 list contained two systems using Knights Landing in the top 10.[83]
The PCIe-based coprocessor variant of Knights Landing was never offered to the general market and was discontinued by August 2017.[84] This included the 7220A, 7240P and 7220P coprocessor cards.
Intel announced they were discontinuing Knights Landing in summer 2018.[85]
All models can boost to their peak speeds, adding 200 MHz to their base frequency when running just one or two cores. When running from three to the maximum number of cores, the chips can only boost 100 MHz above the base frequency. All chips run high-AVX code at a frequency reduced by 200 MHz.[86]
Knights Mill is Intel's codename for a Xeon Phi product specialized in deep learning,[99] initially released in December 2017.[100] Nearly identical in specifications to Knights Landing, Knights Mill includes optimizations for better utilization of AVX-512 instructions. Single-precision and variable-precision floating-point performance increased, at the expense of double-precision floating-point performance.
Knights Hill was the codename for the third-generation MIC architecture, for which Intel announced the first details at SC14.[101] It was to be manufactured in a 10 nm process.[102]
Knights Hill was expected to be used in the United States Department of Energy Aurora supercomputer, to be deployed at Argonne National Laboratory.[103][104] However, Aurora was delayed in favor of using an "advanced architecture" with a focus on machine learning.[105][106]
In 2017, Intel announced that Knights Hill had been canceled in favor of another architecture built from the ground up to enable exascale computing in the future. This new architecture was expected for 2020–2021;[107][108] however, this too was cancelled due to the discontinuation of the Xeon Phi.
One performance and programmability study reported that achieving high performance with Xeon Phi still needs help from programmers and that merely relying on compilers with traditional programming models is insufficient.[109] Other studies in various domains, such as life sciences[110] and deep learning,[111] have shown that exploiting the thread- and SIMD-parallelism of Xeon Phi achieves significant speed-ups.
|
https://en.wikipedia.org/wiki/Xeon_Phi
|
The 3B series computers[1][2] are a line of minicomputers[3] made between the late 1970s and 1993 by AT&T Computer Systems' Western Electric subsidiary, for use with the company's UNIX operating system. The line primarily consists of the models 3B20, 3B5, 3B15, 3B2, and 3B4000. The series is notable for controlling a series of electronic switching systems for telecommunications, for general computing purposes, and for serving as the historical software porting base for commercial UNIX.
The first 3B20D was installed in Fresno, California at Pacific Bell in 1981.[4] Within two years, several hundred were in place throughout the Bell System. Some of the units came with "small, slow hard disks".[5]
The general-purpose family of 3B computer systems includes the 3B2, 3B5, 3B15, 3B20S, and 3B4000. They run the AT&T UNIX operating system and were named after the successful 3B20D high-availability processor.
In 1984, after regulatory constraints were lifted, AT&T introduced the 3B20D, 3B20S, 3B5, and 3B2 to the general computer market,[1][6] a move that some commentators saw as an attempt to compete with IBM.[7] In Europe, the 3B computers were distributed by the Italian firm Olivetti, in which AT&T had a minority shareholding.[6][7] After AT&T bought NCR Corporation, effective January 1992, the computers were marketed through NCR sales channels.[8]
Having produced 70,000 units, the AT&T Oklahoma City plant stopped manufacturing 3B machines at the end of 1993, the 3B20D being the last units manufactured.[8]
The original series of 3B computers includes the models 3B20C, 3B20D superminicomputer,[1] 3B21D, and 3B21E.
These systems are 32-bit microprogrammed duplex (redundant) high-availability processor units running a real-time operating system. They were first produced in the late 1970s at the Western Electric factory in Lisle, Illinois, for telecommunications applications including the 4ESS and 5ESS systems.
They use the Duplex Multi Environment Real Time (DMERT) operating system, which was renamed UNIX-RTR (Real Time Reliable) in 1982. The Data Manipulation Unit (DMU) provides arithmetic and logic operations on 32-bit words using eight AMD 2901 4-bit-slice ALUs.[9] The first 3B20D is called the Model 1. Each processor's control unit consists of two frames of circuit packs. The whole duplex system requires seven-foot frames of circuit packs plus at least one tape drive frame (most telephone companies at that time wrote billing data on magnetic tapes), and many washing-machine-sized disk drives. For training and lab purposes, a 3B20D can be divided into two "half-duplex" systems. A 3B20S consists of most of the same hardware as a half-duplex but uses a completely different operating system.
The 3B20C was briefly available as a high-availability fault-tolerant multiprocessing general-purpose computer in the commercial market in 1984. The 3B20E was created to provide a cost-reduced 3B20D for small offices that did not expect such high availability. It consists of a virtual "emulated" 3B20D environment running on a stand-alone general-purpose computer; the system was ported to many computers, but primarily runs on the Sun Microsystems Solaris environment.
There were improvements to the 3B20D UNIX-RTR system in both software and hardware in the 1980s, 1990s, 2000s, and 2010s. Innovations included disk-independent operation (DIOP: the ability to continue essential software processing, such as telecommunications, after duplex failure of redundant essential disks); off-line boot (the ability to split in half and boot the out-of-service half, typically on a new software release) and switch forward (switch processing to the previously out-of-service half); upgrading the disks to solid-state drives (SSD); and upgrading the tape unit to CompactFlash.
The processor was re-engineered and renamed in 1992 as the 3B21D. It is still in use as of 2023 as a component of Nokia products such as the 2STP signal transfer point and the 4ESS and 5ESS switches, which Nokia inherited from AT&T spin-off Lucent Technologies.
The 3B20S (simplex) was developed at Bell Labs and produced by Western Electric in 1982 for general-purpose internal Bell System use. The 3B20S[1] has hardware similar to the 3B20D, but one unit instead of two. The machine is approximately the size of a large refrigerator, requiring a minimum of 170 square feet of floor space.[10] It was in use at the 1984 Summer Olympics, where around twelve 3B20S machines served the email requirements of the Electronic Messaging System, which was built to replace the human-based messaging system of earlier Olympiads. The system connected around 1800 user terminals and 200 printers.[11] The 3B20A is an enhanced version of the 3B20S, adding a second processing unit working in parallel as a multiprocessor unit.
The 3B5 is built with the older Western Electric WE 32000 32-bit microprocessor. The initial versions have discrete memory management unit hardware using gate arrays, and support segment-based memory translation. I/O is programmed using memory-mapped techniques. The machine is approximately the size of a dishwasher, though adding the reel-to-reel tape drive increases its size. These computers use SMD hard drives.
The 3B15, introduced in 1985,[12] uses the WE 32100 and is the faster follow-on to the 3B5, with a similar large form factor.
The 3B4000 is a high-availability server introduced in 1987[13] and based on a 'snugly-coupled' architecture using the WE series 32x00 32-bit processor. Known internally as 'Apache', the 3B4000 is a follow-on to the 3B15, and initial revisions use a 3B15 as a master processor. Developed in the mid-1980s at the Lisle Indian Hill West facility by the High Performance Computer Development Lab, the system consists of multiple high-performance (at the time) processor boards: adjunct processing elements (APEs) and adjunct communication elements (ACEs). These adjunct processors run a customized UNIX kernel with drivers for SCSI (APEs) and serial boards (ACEs). The processing boards are interconnected by a redundant low-latency parallel bus (ABUS) running at 20 MB/s. The UNIX kernels running on the adjunct processors are modified to allow the fork/exec of processes across processing units. The system calls and peripheral drivers are also extended to allow processes to access remote resources across the ABUS. Since the ABUS is hot-swappable, processors can be added or replaced without shutting down the system. If one of the adjunct processors fails during operation, the system can detect and restart programs that were running on the failed element.
The 3B4000 is capable of significant expansion; one test system (including storage) occupies 17 mid-height cabinets. Generally, the performance of the system increases linearly with additional processing elements; however, the lack of a true shared memory capability requires rewriting applications that rely heavily on this feature to avoid a severe performance penalty.
The 3B2 was introduced in 1984 using the WE 32000 32-bit microprocessor at 8 MHz with memory management chips that support demand paging. Uses include the Switching Control Center System. The 3B2 Model 300, which can support up to 18 users,[1] is approximately 4 inches (100 mm) high, and the 3B2 Model 400 is approximately 8 inches (200 mm) high.
The 300 was soon supplanted by the 3B2/310 running at 10 MHz, which features the WE 32100 CPU, as do later models. The Model 400, introduced in 1985,[12] allows more peripheral slots and more memory, and has a built-in 23 MB QIC tape drive managed by a floppy disk controller (nicknamed the "floppy tape"). These three models use standard MFM 5+1⁄4" hard disk drives.
There are also Model 100 and Model 200 3B2 systems.[1]
The 3B2/600,[3] running at 18 MHz, offers an improvement in performance and capacity: it features a SCSI controller for the 60 MB QIC tape and two internal full-height disk drives. The 600 is approximately twice as tall as a 400, and is oriented with the tape and floppy disk drives opposite the backplane (instead of at a right angle to it, as on the 3xx, 4xx and later 500 models). Early models use an internal Emulex card to interface the SCSI controller with ESDI disks, with later models using SCSI drives directly.
The 3B2/500 was the next model to appear, essentially a 3B2/600 with enough components removed to fit into a 400 case; one internal disk drive and several backplane slots are sacrificed in this conversion. Unlike the 600, which is loud because of its two large fans, the 500 is tolerable in an office environment, like the 400.[citation needed]
The 3B2/700[14] is an uprated version of the 600 featuring a slightly faster processor (WE 32200 at 22 MHz), and the 3B2/1000[2] is an additional step in this direction (WE 32200 at 24 MHz).
Officially named the AT&T UNIX PC,[15] AT&T introduced a desktop computer in 1985 that is often dubbed the 3B1. However, this workstation is unrelated in hardware to the 3B line, being based on the Motorola 68010 microprocessor. It runs a derivative of Unix System V Release 2 by Convergent Technologies. The system, also known as the PC-7300, is tailored for use as a productivity tool in office environments and as an electronic communication center.[15]
|
https://en.wikipedia.org/wiki/3B20C
|
In operating systems, a giant lock, also known as a big lock or kernel lock, is a lock that may be used in the kernel to provide the concurrency control required by symmetric multiprocessing (SMP) systems.
A giant lock is a solitary global lock that is held whenever a thread enters kernel space and released when the thread returns to user space; a system call is the archetypal example. In this model, threads in user space can run concurrently on any available processors or processor cores, but no more than one thread can run in kernel space; any other threads that try to enter kernel space are forced to wait. In other words, the giant lock eliminates all concurrency in kernel space.
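The model is easy to caricature in code. In the Python toy below (names are ours), every "system call" funnels through one global mutex while user-space work needs no lock:

```python
import threading

GIANT = threading.Lock()          # the one kernel lock

def syscall(fn, *args):
    with GIANT:                   # at most one thread in "kernel space"
        return fn(*args)

kernel_buffer = []                # stand-in for kernel state

def kernel_write(data):
    kernel_buffer.append(data)    # serialized by the giant lock

threads = [threading.Thread(target=syscall, args=(kernel_write, i))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(kernel_buffer)
```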
By isolating the kernel from concurrency, many parts of the kernel no longer need to be modified to support SMP. However, as only one processor can run the kernel code at a time in giant-lock SMP systems, performance for applications spending significant amounts of time in the kernel is not much improved.[1] Accordingly, the giant-lock approach is commonly seen as a preliminary means of bringing SMP support to an operating system, yielding benefits only in user space. Most modern operating systems use a fine-grained locking approach.
The Linux kernel had a big kernel lock (BKL) from the introduction of SMP until Arnd Bergmann removed it in 2011 in kernel version 2.6.39,[2][3] with the remaining uses of the big lock removed or replaced by finer-grained locking. Linux distributions at or above CentOS 7, Debian 7 (Wheezy) and Ubuntu 11.10 therefore do not use the BKL.
As of September 2022, the Linux kernel still has console_lock and rtnl_lock, which are sometimes referred to as BKLs, and their removal is in progress.[4][5][6][7]
As of July 2019[update], OpenBSD and NetBSD are still using the spl family of primitives to facilitate synchronisation of critical sections within the kernel,[8][9][10] meaning that many system calls may inhibit the SMP capabilities of the system and, according to Matthew Dillon, the SMP capabilities of these two systems cannot be considered modern.[11]
FreeBSD still has support for the Giant mutex,[12] which provides semantics akin to the old spl interface, but performance-critical core components have long been converted to use finer-grained locking.[1]
Matthew Dillon claims that, among open-source general-purpose operating systems, only Linux, DragonFly BSD and FreeBSD have modern SMP support, with OpenBSD and NetBSD falling behind.[11]
TheNetBSDFoundation views modern SMP support as vital to the direction of The NetBSD Project, and has offered grants to developers willing to work on SMP improvements;NPF (firewall)was one of the projects that arose as a result of these financial incentives, but further improvements to the core networking stack may still be necessary.[9][13]
|
https://en.wikipedia.org/wiki/Giant_lock
|
Heterogeneous computing refers to systems that use more than one kind of processor or core. These systems gain performance or energy efficiency not just by adding the same type of processors, but by adding dissimilar coprocessors, usually incorporating specialized processing capabilities to handle particular tasks.[1]
In the context of computing, heterogeneity usually refers to different instruction set architectures (ISAs): the main processor has one architecture and the other processors have another, usually very different, architecture (possibly more than one), rather than merely a different microarchitecture. (Floating-point coprocessing is a special case of this, and is not usually referred to as heterogeneous.)
In the past, heterogeneous computing meant that different ISAs had to be handled differently. In a modern example, Heterogeneous System Architecture (HSA) systems[2] eliminate the difference (for the user) while using multiple processor types (typically CPUs and GPUs), usually on the same integrated circuit, to provide the best of both worlds: general-purpose GPU processing (apart from its well-known 3D graphics rendering capabilities, a GPU can also perform mathematically intensive computations on very large data sets), while the CPU runs the operating system and performs traditional serial tasks.
The level of heterogeneity in modern computing systems is gradually increasing as further scaling of fabrication technologies allows for formerly discrete components to become integrated parts of asystem-on-chip, or SoC.[citation needed]For example, many new processors now include built-in logic for interfacing with other devices (SATA,PCI,Ethernet,USB,RFID,radios,UARTs, andmemory controllers), as well as programmable functional units andhardware accelerators(GPUs,cryptographyco-processors, programmable network processors, A/V encoders/decoders, etc.).
Recent findings show that a heterogeneous-ISA chip multiprocessor that exploits the diversity offered by multiple ISAs can outperform the best same-ISA homogeneous architecture by as much as 21%, with 23% energy savings and a 32% reduction in Energy Delay Product (EDP).[3] AMD's 2014 announcement of its pin-compatible ARM and x86 SoCs, codenamed Project Skybridge,[4] suggested a heterogeneous-ISA (ARM+x86) chip multiprocessor in the making.[citation needed]
A system with heterogeneous CPU topology is a system where the same ISA is used but the cores themselves differ in speed.[5] The setup is more similar to a symmetric multiprocessor (although such systems are technically asymmetric multiprocessors, the cores do not differ in roles or device access). There are typically two types of cores: a higher-performance core, usually known as a "big" or P-core, and a more power-efficient core, usually known as a "small" or E-core. The terms P-core and E-core are usually used in relation to Intel's implementation of heterogeneous computing, while the terms big and little cores are usually used in relation to the ARM architecture. Some processors have three categories of core: prime, performance and efficiency cores, with prime cores having higher performance than performance cores; a prime core is known as "big", a performance core as "medium", and an efficiency core as "small".[6]
A common use of such topology is to provide better power efficiency, especially in mobile SoCs.
Heterogeneous computing systems present new challenges not found in typical homogeneous systems.[8]The presence of multiple processing elements raises all of the issues involved with homogeneous parallel processing systems, while the level of heterogeneity in the system can introduce non-uniformity in system development, programming practices, and overall system capability. Areas of heterogeneity can include:[9]
Heterogeneous computing hardware can be found in every domain of computing—from high-end servers and high-performance computing machines all the way down to low-power embedded devices including mobile phones and tablets.
|
https://en.wikipedia.org/wiki/Heterogeneous_computing
|
Amulti-core processor(MCP) is amicroprocessoron a singleintegrated circuit(IC) with two or more separatecentral processing units(CPUs), calledcoresto emphasize their multiplicity (for example,dual-coreorquad-core). Each core reads and executesprogram instructions,[1]specifically ordinaryCPU instructions(such as add, move data, and branch). However, the MCP can run instructions on separate cores at the same time, increasing overall speed for programs that supportmultithreadingor otherparallel computingtechniques.[2]Manufacturers typically integrate the cores onto a single ICdie, known as achip multiprocessor(CMP), or onto multiple dies in a singlechip package. As of 2024, the microprocessors used in almost all newpersonal computersare multi-core.
A multi-core processor implementsmultiprocessingin a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not sharecaches, and they may implementmessage passingorshared-memoryinter-core communication methods. Commonnetwork topologiesused to interconnect cores includebus,ring, two-dimensionalmesh, andcrossbar. Homogeneous multi-core systems include only identical cores;heterogeneousmulti-core systems have cores that are not identical (e.g.big.LITTLEhave heterogeneous cores that share the sameinstruction set, whileAMD Accelerated Processing Unitshave cores that do not share the same instruction set). Just as with single-processor systems, cores in multi-core systems may implement architectures such asVLIW,superscalar,vector, ormultithreading.
Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics (GPU) computing. Core counts reach dozens, and exceed 10,000 in specialized chips,[3] while in supercomputers (i.e. clusters of chips) the count can go over 10 million (and in one case up to 20 million processing elements in addition to the host processors).[4]
The improvement in performance gained by the use of a multi-core processor depends very much on thesoftwarealgorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that canrun in parallelsimultaneously on multiple cores; this effect is described byAmdahl's law. In the best case, so-calledembarrassingly parallelproblems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache(s), avoiding use of much slower main-system memory. Most applications, however, are not accelerated as much unless programmers invest effort inrefactoring.[5]
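As a worked illustration of Amdahl's law (a sketch; the function is not from any library):

#include <cstdio>

// Amdahl's law: the speedup of a program of which a fraction p is
// perfectly parallelizable, when run on n cores.
double amdahl_speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // Even with 95% of the work parallelized, 64 cores give only about
    // 15.4x, and no core count can ever exceed 1 / (1 - 0.95) = 20x.
    std::printf("%.1fx\n", amdahl_speedup(0.95, 64));
}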
The parallelization of software is a significant ongoing topic of research. Cointegration of multiprocessor applications provides flexibility in network architecture design. Adaptability within parallel models is an additional feature of systems utilizing these protocols.[6]
In the consumer market, dual-core processors (that is, microprocessors with two cores) started becoming commonplace on personal computers in the late 2000s.[7] Quad-core processors were also adopted in that era for higher-end systems, before becoming standard. In the late 2010s, hexa-core (six-core) processors started entering the mainstream,[8] and since the early 2020s they have overtaken quad-core in many spaces.[9]
The termsmulti-coreanddual-coremost commonly refer to some sort ofcentral processing unit(CPU), but are sometimes also applied todigital signal processors(DSP) andsystem on a chip(SoC). The terms are generally used only to refer to multi-core microprocessors that are manufactured on thesameintegrated circuitdie; separate microprocessor dies in the same package are generally referred to by another name, such asmulti-chip module. This article uses the terms "multi-core" and "dual-core" for CPUs manufactured on thesameintegrated circuit, unless otherwise noted.
In contrast to multi-core systems, the termmulti-CPUrefers to multiple physically separate processing-units (which often contain special circuitry to facilitate communication between each other).
The termsmany-coreandmassively multi-coreare sometimes used to describe multi-core architectures with an especially high number of cores (tens to thousands[10]).[11]
Some systems use manysoft microprocessorcores placed on a singleFPGA. Each "core" can be considered a "semiconductor intellectual property core" as well as a CPU core.[citation needed]
While manufacturing technology improves, reducing the size of individual gates, physical limits ofsemiconductor-basedmicroelectronicshave become a major design concern. These physical limitations can cause significant heat dissipation and data synchronization problems. Various other methods are used to improve CPU performance. Someinstruction-level parallelism(ILP) methods such assuperscalarpipeliningare suitable for many applications, but are inefficient for others that contain difficult-to-predict code. Many applications are better suited tothread-level parallelism(TLP) methods, and multiple independent CPUs are commonly used to increase a system's overall TLP. A combination of increased available space (due to refined manufacturing processes) and the demand for increased TLP led to the development of multi-core CPUs.
In the 1990s,Kunle Olukotunled the Stanford Hydra Chip Multiprocessor (CMP) research project. This initiative was among the first to demonstrate the viability of integrating multiple processors on a single chip, a concept that laid the groundwork for today's multicore processors. The Hydra project introduced support for thread-level speculation (TLS), enabling more efficient parallel execution of programs.
Several business motives drive the development of multi-core architectures. For decades, it was possible to improve performance of a CPU by shrinking the area of the integrated circuit (IC), which reduced the cost per device on the IC. Alternatively, for the same circuit area, more transistors could be used in the design, which increased functionality, especially forcomplex instruction set computing(CISC) architectures.Clock ratesalso increased by orders of magnitude in the decades of the late 20th century, from several megahertz in the 1980s to several gigahertz in the early 2000s.
As the rate of clock-speed improvement slowed, increased use of parallel computing in the form of multi-core processors was pursued to improve overall processing performance. Placing multiple cores on the same CPU chip improved per-chip performance and could then lead to better sales of CPU chips with two or more cores. For example, Intel has produced a 48-core processor for research in cloud computing; each core has an x86 architecture.[12][13]
Since computer manufacturers have long implementedsymmetric multiprocessing(SMP) designs using discrete CPUs, the issues regarding implementing multi-core processor architecture and supporting it with software are well known.
Additionally:
In order to continue delivering regular performance improvements for general-purpose processors, manufacturers such as Intel and AMD have turned to multi-core designs, sacrificing lower manufacturing costs for higher performance in some applications and systems. Multi-core architectures are being developed, but so are the alternatives; an especially strong contender for established markets is the further integration of peripheral functions into the chip.
The proximity of multiple CPU cores on the same die allows thecache coherencycircuitry to operate at a much higher clock rate than what is possible if the signals have to travel off-chip. Combining equivalent CPUs on a single die significantly improves the performance ofcache snoop(alternative:Bus snooping) operations. Put simply, this means thatsignalsbetween different CPUs travel shorter distances, and therefore those signalsdegradeless. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often.
Assuming that the die can physically fit into the package, multi-core CPU designs require much lessprinted circuit board(PCB) space than do multi-chipSMPdesigns. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because of the decreased power required to drive signals external to the chip. Furthermore, the cores share some circuitry, like the L2 cache and the interface to thefront-side bus(FSB). In terms of competing technologies for the available silicon die area, multi-core design can make use of proven CPU core library designs and produce a product with lower risk of design error than devising a new wider-core design. Also, adding more cache suffers from diminishing returns.
Multi-core chips also allow higher performance at lower energy. This can be a big factor in mobile devices that operate on batteries. Since each core in a multi-core CPU is generally more energy-efficient, the chip becomes more efficient than having a single large monolithic core. This allows higher performance with less energy. A challenge in this, however, is the additional overhead of writing parallel code.[15]
Maximizing the usage of the computing resources provided by multi-core processors requires adjustments both to theoperating system(OS) support and to existing application software. Also, the ability of multi-core processors to increase application performance depends on the use of multiple threads within applications.
Integration of a multi-core chip can lower chip production yields, and multi-core chips are also more difficult to manage thermally than lower-density single-core designs. Intel has partially countered the first problem by building its quad-core designs from two dual-core dies in a single package, so that any two working dual-core dies can be used, as opposed to producing four cores on a single die and requiring all four to work to produce a quad-core CPU. From an architectural point of view, ultimately, single-CPU designs may make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture may carry the risk of obsolescence. Finally, raw processing power is not the only constraint on system performance: two processing cores sharing the same system bus and memory bandwidth limit the real-world performance advantage.
The trend in processor development has been towards an ever-increasing number of cores, as processors with hundreds or even thousands of cores become theoretically possible.[16]In addition, multi-core chips mixed withsimultaneous multithreading, memory-on-chip, and special-purpose"heterogeneous"(or asymmetric) cores promise further performance and efficiency gains, especially in processing multimedia, recognition and networking applications. For example, abig.LITTLEcore includes a high-performance core (called 'big') and a low-power core (called 'LITTLE'). There is also a trend towards improving energy-efficiency by focusing on performance-per-watt with advanced fine-grain or ultra fine-grainpower managementand dynamicvoltageandfrequency scaling(i.e.laptopcomputers andportable media players).
Chips designed from the outset for a large number of cores (rather than having evolved from single core designs) are sometimes referred to asmanycoredesigns, emphasising qualitative differences.
The composition and balance of the cores in multi-core architecture show great variety. Some architectures use one core design repeated consistently ("homogeneous"), while others use a mixture of different cores, each optimized for a different, "heterogeneous" role.
How multiple cores are implemented and integrated significantly affects both the developer's programming skills and the consumer's expectations of apps and interactivity versus the device.[17]A device advertised as being octa-core will only have independent cores if advertised asTrue Octa-core, or similar styling, as opposed to being merely two sets of quad-cores each with fixed clock speeds.[18][19]
The article "CPU designers debate multi-core future" by Rick Merritt, EE Times 2008,[20]includes these comments:
Chuck Moore [...] suggested computers should be like cellphones, using a variety of specialty cores to run modular software scheduled by a high-level applications programming interface.
[...] Atsushi Hasegawa, a senior chief engineer atRenesas, generally agreed. He suggested the cellphone's use of many specialty cores working in concert is a good model for future multi-core designs.
[...]Anant Agarwal, founder and chief executive of startupTilera, took the opposing view. He said multi-core chips need to be homogeneous collections of general-purpose cores to keep the software model simple.
An outdated version of an anti-virus application may create a new thread for a scan process, while itsGUIthread waits for commands from the user (e.g. cancel the scan). In such cases, a multi-core architecture is of little benefit for the application itself due to the single thread doing all the heavy lifting and the inability to balance the work evenly across multiple cores. Programming truly multithreaded code often requires complex co-ordination of threads and can easily introduce subtle and difficult-to-find bugs due to the interweaving of processing on data shared between threads (seethread-safety). Consequently, such code is much more difficult to debug than single-threaded code when it breaks. There has been a perceived lack of motivation for writing consumer-level threaded applications because of the relative rarity of consumer-level demand for maximum use of computer hardware. Also, serial tasks like decoding theentropy encodingalgorithms used invideo codecsare impossible to parallelize because each result generated is used to help create the next result of the entropy decoding algorithm.
Given the increasing emphasis on multi-core chip design, stemming from the grave thermal and power consumption problems posed by any further significant increase in processor clock speeds, the extent to which software can be multithreaded to take advantage of these new chips is likely to be the single greatest constraint on computer performance in the future. If developers are unable to design software to fully exploit the resources provided by multiple cores, then they will ultimately reach an insurmountable performance ceiling.
The telecommunications market was one of the first to need a new design of parallel datapath packet processing, because of the very rapid adoption of these multiple-core processors for the datapath and the control plane. These MPUs are going to replace[21] the traditional network processors that were based on proprietary microcode or picocode.
Parallel programmingtechniques can benefit from multiple cores directly. Some existingparallel programming modelssuch asCilk Plus,OpenMP,OpenHMPP,FastFlow, Skandium,MPI, andErlangcan be used on multi-core platforms. Intel introduced a new abstraction for C++ parallelism calledTBB. Other research efforts include theCodeplay Sieve System, Cray'sChapel, Sun'sFortress, and IBM'sX10.
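As a minimal illustration of one of these models, the following sketch parallelizes a loop with an OpenMP directive (assuming a compiler with OpenMP enabled, e.g. via -fopenmp):

#include <vector>

// Element-wise vector addition; OpenMP divides the iterations among
// the available cores.
void vector_add(const std::vector<double>& a,
                const std::vector<double>& b,
                std::vector<double>& out) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(out.size()); ++i)
        out[i] = a[i] + b[i];
}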
Multi-core processing has also affected the ability of modern computational software development. Developers programming in newer languages might find that their modern languages do not support multi-core functionality. This then requires the use ofnumerical librariesto access code written in languages likeCandFortran, which perform math computations faster[citation needed]than newer languages likeC#. Intel's MKL and AMD'sACMLare written in these native languages and take advantage of multi-core processing. Balancing the application workload across processors can be problematic, especially if they have different performance characteristics. There are different conceptual models to deal with the problem, for example using a coordination language and program building blocks (programming libraries or higher-order functions). Each block can have a different native implementation for each processor type. Users simply program using these abstractions and an intelligent compiler chooses the best implementation based on the context.[22]
Managingconcurrencyacquires a central role in developing parallel applications. The basic steps in designing parallel applications are:
On the other hand, on theserver side, multi-core processors are ideal because they allow many users to connect to a site simultaneously and have independentthreadsof execution. This allows for Web servers and application servers that have much betterthroughput.
Vendors may license some software "per processor". This can give rise to ambiguity, because a "processor" may consist either of a single core or of a combination of cores.
Embedded computingoperates in an area of processor technology distinct from that of "mainstream" PCs. The same technological drives towards multi-core apply here too. Indeed, in many cases the application is a "natural" fit for multi-core technologies, if the task can easily be partitioned between the different processors.
In addition, embedded software is typically developed for a specific hardware release, making issues of software portability, legacy code or supporting independent developers less critical than for PC or enterprise computing. As a result, it is easier for developers to adopt new technologies, and there is consequently a greater variety of multi-core processing architectures and suppliers.
As of 2010[update], multi-core network processors have become mainstream, with companies such as Freescale Semiconductor, Cavium Networks, Wintegra and Broadcom all manufacturing products with eight processors. For the system developer, a key challenge is how to exploit all the cores in these devices to achieve maximum networking performance at the system level, despite the performance limitations inherent in a symmetric multiprocessing (SMP) operating system. Companies such as 6WIND provide portable packet-processing software designed so that the networking data plane runs in a fast-path environment outside the operating system of the network device.[25]
In digital signal processing the same trend applies: Texas Instruments has the three-core TMS320C6488 and four-core TMS320C5441, Freescale the four-core MSC8144 and six-core MSC8156 (and both have stated they are working on eight-core successors). Newer entries include the Storm-1 family from Stream Processors, Inc. with 40 and 80 general-purpose ALUs per chip, all programmable in C as a SIMD engine, and Picochip with 300 processors on a single die, focused on communication applications.
Inheterogeneous computing, where a system uses more than one kind of processor or cores, multi-core solutions are becoming more common:XilinxZynq UltraScale+ MPSoC has a quad-core ARM Cortex-A53 and dual-core ARM Cortex-R5. Software solutions such as OpenAMP are being used to help with inter-processor communication.
Mobile devices may use theARM big.LITTLEarchitecture.
The research and development of multicore processors often compares many options, and benchmarks are developed to help such evaluations. Existing benchmarks include SPLASH-2, PARSEC, and COSMIC for heterogeneous systems.[49]
|
https://en.wikipedia.org/wiki/Multi-core_(computing)
|
CPU shieldingis a practice where on a multiprocessor system or on a CPU with multiple cores,real-timetasks can run on one CPU or core while non-real-time tasks run on another.
Theoperating systemmust be able to set aCPU affinityfor bothprocessesandinterrupts.
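On Linux, for example, process affinity can be set with the sched_setaffinity system call; a minimal sketch (glibc, Linux-only, error handling omitted) that pins the calling process to CPU 0:

#include <sched.h>   // cpu_set_t, CPU_ZERO, CPU_SET, sched_setaffinity

int main() {
    cpu_set_t set;
    CPU_ZERO(&set);       // start from an empty CPU set
    CPU_SET(0, &set);     // allow only CPU 0
    // 0 = the calling process; the scheduler will now run this process
    // (and, by default, its threads) on CPU 0 only.
    sched_setaffinity(0, sizeof(set), &set);
}

Interrupt affinity is managed separately, for example by writing a CPU mask to /proc/irq/<n>/smp_affinity.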
In Linux, in order to shield CPUs from having individual interrupts serviced on them, you have to make sure that the following kernel configuration parameter is set:
|
https://en.wikipedia.org/wiki/CPU_shielding
|
In computing, CUDA (Compute Unified Device Architecture) is a proprietary[2] parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for accelerated general-purpose processing, an approach called general-purpose computing on GPUs. CUDA was created by Nvidia in 2006.[3] When it was first introduced, the name was an acronym for Compute Unified Device Architecture,[4] but Nvidia later dropped the common use of the acronym and now rarely expands it.[5]
CUDA is a software layer that gives direct access to the GPU's virtualinstruction setand parallel computational elements for the execution ofcompute kernels.[6]In addition todriversand runtime kernels, the CUDA platform includes compilers, libraries and developer tools to help programmers accelerate their applications.
CUDA is designed to work with programming languages such asC,C++,Fortran,PythonandJulia. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs likeDirect3DandOpenGL, which require advanced skills in graphics programming.[7]CUDA-powered GPUs also support programming frameworks such asOpenMP,OpenACCandOpenCL.[8][6]
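For illustration, a minimal CUDA C++ compute kernel and launch, compiled with nvcc (a sketch, not taken from the CUDA documentation):

#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

void add_on_gpu(const float* a, const float* b, float* c, int n) {
    float *da, *db, *dc;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // one thread per element
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
    cudaFree(da); cudaFree(db); cudaFree(dc);
}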
The graphics processing unit (GPU), as a specialized computer processor, addresses the demands of real-time, high-resolution 3D graphics and other compute-intensive tasks. By 2012, GPUs had evolved into highly parallel multi-core systems allowing efficient manipulation of large blocks of data. This design is more effective than a general-purpose central processing unit (CPU) for algorithms in situations where processing of large blocks of data is done in parallel, such as:
Ian Buck, while at Stanford in 2000, created an 8K gaming rig using 32 GeForce cards, then obtained aDARPAgrant to performgeneral purpose parallel programming on GPUs. He then joined Nvidia, where since 2004 he has been overseeing CUDA development. In pushing for CUDA,Jensen Huangaimed for the Nvidia GPUs to become a general hardware for scientific computing. CUDA was released in 2007. Around 2015, the focus of CUDA changed to neural networks.[9]
The following table offers an approximate description of the ontology of the CUDA framework.
The CUDA platform is accessible to software developers through CUDA-accelerated libraries,compiler directivessuch asOpenACC, and extensions to industry-standard programming languages includingC,C++,FortranandPython. C/C++ programmers can use 'CUDA C/C++', compiled toPTXwithnvcc, Nvidia'sLLVM-based C/C++ compiler, or by clang itself.[10]Fortran programmers can use 'CUDA Fortran', compiled with the PGI CUDA Fortran compiler fromThe Portland Group.[needs update]Python programmers can use the cuNumeric library to accelerate applications on Nvidia GPUs.
In addition to libraries, compiler directives, CUDA C/C++ and CUDA Fortran, the CUDA platform supports other computational interfaces, including theKhronos Group'sOpenCL,[11]Microsoft'sDirectCompute,OpenGLCompute Shader andC++ AMP.[12]Third party wrappers are also available forPython,Perl, Fortran,Java,Ruby,Lua,Common Lisp,Haskell,R,MATLAB,IDL,Julia, and native support inMathematica.
In thecomputer gameindustry, GPUs are used for graphics rendering, and forgame physics calculations(physical effects such as debris, smoke, fire, fluids); examples includePhysXandBullet. CUDA has also been used to accelerate non-graphical applications incomputational biology,cryptographyand other fields by anorder of magnitudeor more.[13][14][15][16][17]
CUDA provides both a low levelAPI(CUDADriverAPI, non single-source) and a higher level API (CUDARuntimeAPI, single-source). The initial CUDASDKwas made public on 15 February 2007, forMicrosoft WindowsandLinux.Mac OS Xsupport was later added in version 2.0,[18]which supersedes the beta released February 14, 2008.[19]CUDA works with all Nvidia GPUs from the G8x series onwards, includingGeForce,Quadroand theTeslaline. CUDA is compatible with most standard operating systems.
CUDA 8.0 comes with the following libraries (for compilation & runtime, in alphabetical order):
CUDA 8.0 comes with these other software components:
CUDA 9.0–9.2 comes with these other components:
CUDA 10 comes with these other components:
CUDA 11.0–11.8 comes with these other components:[20][21][22][23]
CUDA has several advantages over traditional general-purpose computation on GPUs (GPGPU) using graphics APIs:
This example code inC++loads a texture from an image into an array on the GPU:
Below is an example given inPythonthat computes the product of two arrays on the GPU. The unofficial Python language bindings can be obtained fromPyCUDA.[36]
Additional Python bindings to simplify matrix multiplication operations can be found in the program pycublas,[37] while CuPy directly replaces NumPy.[38]
Supported CUDA compute capability versions for CUDA SDK version and microarchitecture (by code name):
Note: CUDA SDK 10.2 is the last official release for macOS, as support will not be available for macOS in newer releases.
CUDA compute capability by version with associated GPU semiconductors and GPU card models (separated by their various application areas):
For more information read the Nvidia CUDA C++ Programming Guide.[115]
CUDA competes with other GPU computing stacks:Intel OneAPIandAMD ROCm.
Whereas Nvidia's CUDA is closed-source, Intel's OneAPI and AMD's ROCm are open source.
oneAPI is an initiative based on open standards, created to support software development for multiple hardware architectures.[118] The oneAPI libraries must implement open specifications that are discussed publicly by the Special Interest Groups, offering the possibility for any developer or organization to implement their own versions of oneAPI libraries.[119][120]
Originally made by Intel, other hardware adopters include Fujitsu and Huawei.
Unified Acceleration Foundation (UXL) is a new technology consortium working on the continuation of the OneAPI initiative, with the goal of creating a new open-standard accelerator software ecosystem and related open standards and specification projects through Working Groups and Special Interest Groups (SIGs). The goal is to offer open alternatives to Nvidia's CUDA. The main companies behind it are Intel, Google, ARM, Qualcomm, Samsung, Imagination, and VMware.[121]
ROCm[122]is an open source software stack forgraphics processing unit(GPU) programming fromAdvanced Micro Devices(AMD).
|
https://en.wikipedia.org/wiki/CUDA
|
General-purpose computing on graphics processing units(GPGPU, or less oftenGPGP) is the use of agraphics processing unit(GPU), which typically handles computation only forcomputer graphics, to perform computation in applications traditionally handled by thecentral processing unit(CPU).[1][2][3][4]The use of multiplevideo cardsin one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing.[5]
Essentially, a GPGPUpipelineis a kind ofparallel processingbetween one or more GPUs and CPUs that analyzes data as if it were in image or other graphic form. While GPUs operate at lower frequencies, they typically have many times the number ofcores. Thus, GPUs can process far more pictures and graphical data per second than a traditional CPU. Migrating data into graphical form and then using the GPU to scan and analyze it can create a largespeedup.
GPGPU pipelines were developed at the beginning of the 21st century forgraphics processing(e.g. for bettershaders). These pipelines were found to fitscientific computingneeds well, and have since been developed in this direction.
The best-known GPGPUs are Nvidia's Tesla line, used in Nvidia DGX systems, alongside AMD Instinct and Intel Gaudi.
In principle, any arbitraryBoolean function, including addition, multiplication, and other mathematical functions, can be built up from afunctionally completeset of logic operators. In 1987,Conway's Game of Lifebecame one of the first examples of general-purpose computing using an earlystream processorcalled ablitterto invoke a special sequence oflogical operationson bit vectors.[6]
General-purpose computing on GPUs became more practical and popular after about 2001, with the advent of both programmableshadersandfloating pointsupport on graphics processors. Notably, problems involvingmatricesand/orvectors– especially two-, three-, or four-dimensional vectors – were easy to translate to a GPU, which acts with native speed and support on those types. A significant milestone for GPGPU was the year 2003 when two research groups independently discovered GPU-based approaches for the solution of general linear algebra problems on GPUs that ran faster than on CPUs.[7][8]These early efforts to use GPUs as general-purpose processors required reformulating computational problems in terms of graphics primitives, as supported by the two major APIs for graphics processors,OpenGLandDirectX. This cumbersome translation was obviated by the advent of general-purpose programming languages and APIs such asSh/RapidMind,Brookand Accelerator.[9][10][11]
These were followed by Nvidia'sCUDA, which allowed programmers to ignore the underlying graphical concepts in favor of more commonhigh-performance computingconcepts.[12]Newer, hardware-vendor-independent offerings include Microsoft'sDirectComputeand Apple/Khronos Group'sOpenCL.[12]This means that modern GPGPU pipelines can leverage the speed of a GPU without requiring full and explicit conversion of the data to a graphical form.
Mark Harris, the founder of GPGPU.org, claims he coined the termGPGPU.[13]
Any language that allows code running on the CPU to poll a GPU shader for return values can create a GPGPU framework. Programming standards for parallel computing include OpenCL (vendor-independent), OpenACC, OpenMP and OpenHMPP.
As of 2016[update], OpenCL is the dominant open general-purpose GPU computing language, and is an open standard defined by theKhronos Group.[citation needed]OpenCL provides across-platformGPGPU platform that additionally supports data parallel compute on CPUs. OpenCL is actively supported on Intel, AMD, Nvidia, and ARM platforms. The Khronos Group has also standardised and implementedSYCL, a higher-level programming model forOpenCLas a single-source domain specific embedded language based on pure C++11.
The dominant proprietary framework isNvidiaCUDA.[14]Nvidia launched CUDA in 2006, asoftware development kit(SDK) andapplication programming interface(API) that allows using the programming languageCto code algorithms for execution onGeForce 8 seriesand later GPUs.
ROCm, launched in 2016, is AMD's open-source response to CUDA. As of 2022, it is on par with CUDA with regard to features,[citation needed] but still lacking in consumer support.[citation needed]
OpenVIDIA was developed atUniversity of Torontobetween 2003–2005,[15]in collaboration with Nvidia.
Altimesh Hybridizer created byAltimeshcompilesCommon Intermediate Languageto CUDA binaries.[16][17]It supports generics and virtual functions.[18]Debugging and profiling is integrated withVisual Studioand Nsight.[19]It is available as a Visual Studio extension on Visual Studio Marketplace.
Microsoftintroduced theDirectComputeGPU computing API, released with theDirectX 11API.
Alea GPU,[20]created by QuantAlea,[21]introduces native GPU computing capabilities for the Microsoft .NET languagesF#[22]andC#. Alea GPU also provides a simplified GPU programming model based on GPU parallel-for and parallel aggregate using delegates and automatic memory management.[23]
MATLABsupports GPGPU acceleration using theParallel Computing ToolboxandMATLAB Distributed Computing Server,[24]and third-party packages likeJacket.
GPGPU processing is also used to simulateNewtonian physicsbyphysics engines,[25]and commercial implementations includeHavok Physics, FXandPhysX, both of which are typically used for computer andvideo games.
C++ Accelerated Massive Parallelism (C++ AMP) is a library that accelerates execution ofC++code by exploiting the data-parallel hardware on GPUs.
Due to a trend of increasing power of mobile GPUs, general-purpose programming also became available on mobile devices running major mobile operating systems.
GoogleAndroid4.2 enabled runningRenderScriptcode on the mobile device GPU.[26]Renderscript has since been deprecated in favour of first OpenGL compute shaders[27]and later Vulkan Compute.[28]OpenCL is available on many Android devices, but is not officially supported by Android.[29]Appleintroduced the proprietaryMetalAPI foriOSapplications, able to execute arbitrary code through Apple's GPU compute shaders.[citation needed]
Computer video cards are produced by various vendors, such as Nvidia and AMD. Cards from such vendors differ in their support for data formats, such as integer and floating-point formats (32-bit and 64-bit). Microsoft introduced a Shader Model standard, to help rank the various features of graphics cards into a simple Shader Model version number (1.0, 2.0, 3.0, etc.).
Pre-DirectX 9 video cards only supportedpalettedor integer color types. Sometimes another alpha value is added, to be used for transparency. Common formats are:
For earlyfixed-functionor limited programmability graphics (i.e., up to and including DirectX 8.1-compliant GPUs) this was sufficient because this is also the representation used in displays. This representation does have certain limitations. Given sufficient graphics processing power even graphics programmers would like to use better formats, such asfloating pointdata formats, to obtain effects such ashigh-dynamic-range imaging. Many GPGPU applications require floating point accuracy, which came with video cards conforming to the DirectX 9 specification.
DirectX 9 Shader Model 2.x suggested the support of two precision types: full and partial precision. Full precision support could either be FP32 or FP24 (floating point 32- or 24-bit per component) or greater, while partial precision was FP16.ATI'sRadeon R300series of GPUs supported FP24 precision only in the programmable fragment pipeline (although FP32 was supported in the vertex processors) whileNvidia'sNV30series supported both FP16 and FP32; other vendors such asS3 GraphicsandXGIsupported a mixture of formats up to FP24.
The implementations of floating point on Nvidia GPUs are mostlyIEEEcompliant; however, this is not true across all vendors.[30]This has implications for correctness which are considered important to some scientific applications. While 64-bit floating point values (double precision float) are commonly available on CPUs, these are not universally supported on GPUs. Some GPU architectures sacrifice IEEE compliance, while others lack double-precision. Efforts have occurred to emulate double-precision floating point values on GPUs; however, the speed tradeoff negates any benefit to offloading the computing onto the GPU in the first place.[31]
Most operations on the GPU operate in a vectorized fashion: one operation can be performed on up to four values at once. For example, if one color⟨R1, G1, B1⟩is to be modulated by another color⟨R2, G2, B2⟩, the GPU can produce the resulting color⟨R1*R2, G1*G2, B1*B2⟩in one operation. This functionality is useful in graphics because almost every basic data type is a vector (either 2-, 3-, or 4-dimensional).[citation needed]Examples include vertices, colors, normal vectors, and texture coordinates. Many other applications can put this to good use, and because of their higher performance, vector instructions, termed single instruction, multiple data (SIMD), have long been available on CPUs.[citation needed]
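In CUDA C++, for instance, such packed types are exposed as float4, and the color modulation above can be written component-wise (a sketch; kernel and variable names are illustrative):

// Modulate one color by another: <R1*R2, G1*G2, B1*B2, A1*A2>.
__global__ void modulate(const float4* c1, const float4* c2,
                         float4* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float4 a = c1[i], b = c2[i];
        out[i] = make_float4(a.x * b.x, a.y * b.y, a.z * b.z, a.w * b.w);
    }
}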
Originally, data was simply passed one-way from a central processing unit (CPU) to a graphics processing unit (GPU), then to a display device. As time progressed, however, it became valuable for GPUs to store at first simple, then complex structures of data to be passed back to the CPU, such as an analyzed image or a set of scientific data represented in a 2D or 3D format that a video card can understand. Because the GPU has access to every draw operation, it can analyze data in these forms quickly, whereas a CPU must poll every pixel or data element much more slowly; the speed of access between a CPU and its larger pool of random-access memory (or, in an even worse case, a hard drive) is lower than that of GPUs and video cards, which typically contain smaller amounts of more expensive, but much faster, memory. Transferring the portion of the data set to be actively analyzed to GPU memory, in the form of textures or other easily readable GPU forms, results in a speed increase. The distinguishing feature of a GPGPU design is the ability to transfer information bidirectionally, back from the GPU to the CPU; generally, high data throughput in both directions produces a multiplier effect on the speed of a specific high-use algorithm.
GPGPU pipelines may improve efficiency on especially large data sets and/or data containing 2D or 3D imagery. It is used in complex graphics pipelines as well asscientific computing; more so in fields with large data sets likegenome mapping, or where two- or three-dimensional analysis is useful – especially at presentbiomoleculeanalysis,proteinstudy, and other complexorganic chemistry. An example of such applications isNVIDIA software suite for genome analysis.
Such pipelines can also vastly improve efficiency inimage processingandcomputer vision, among other fields; as well asparallel processinggenerally. Some very heavily optimized pipelines have yielded speed increases of several hundred times the original CPU-based pipeline on one high-use task.
A simple example would be a GPU program that collects data about averagelightingvalues as it renders some view from either a camera or a computer graphics program back to the main program on the CPU, so that the CPU can then make adjustments to the overall screen view. A more advanced example might useedge detectionto return both numerical information and a processed image representing outlines to acomputer visionprogram controlling, say, a mobile robot. Because the GPU has fast and local hardware access to everypixelor other picture element in an image, it can analyze and average it (for the first example) or apply aSobel edge filteror otherconvolutionfilter (for the second) with much greater speed than a CPU, which typically must access slowerrandom-access memorycopies of the graphic in question.
GPGPU is fundamentally a software concept, not a hardware concept; it is a type ofalgorithm, not a piece of equipment. Specialized equipment designs may, however, even further enhance the efficiency of GPGPU pipelines, which traditionally perform relatively few algorithms on very large amounts of data. Massively parallelized, gigantic-data-level tasks thus may be parallelized even further via specialized setups such as rack computing (many similar, highly tailored machines built into arack), which adds a third layer – many computing units each using many CPUs to correspond to many GPUs. SomeBitcoin"miners" used such setups for high-quantity processing.
Historically, CPUs have used hardware-managedcaches, but the earlier GPUs only provided software-managed local memories. However, as GPUs are being increasingly used for general-purpose applications, state-of-the-art GPUs are being designed with hardware-managed multi-level caches which have helped the GPUs to move towards mainstream computing. For example,GeForce 200 seriesGT200 architecture GPUs did not feature an L2 cache, theFermiGPU has 768 KiB last-level cache, theKeplerGPU has 1.5 MiB last-level cache,[32]theMaxwellGPU has 2 MiB last-level cache, and thePascalGPU has 4 MiB last-level cache.
GPUs have very largeregister files, which allow them to reduce context-switching latency. Register file size is also increasing over different GPU generations, e.g., the total register file size on Maxwell (GM200), Pascal and Volta GPUs are 6 MiB, 14 MiB and 20 MiB, respectively.[33][34]By comparison, the size of aregister file on CPUsis small, typically tens or hundreds of kilobytes.
The high performance of GPUs comes at the cost of high power consumption, which under full load is in fact as much power as the rest of the PC system combined.[35]The maximum power consumption of the Pascal series GPU (Tesla P100) was specified to be 250W.[36]
Before CUDA was released in 2007, GPGPU was done in a "classical" style that involved repurposing graphics primitives. The standard structure of such a computation was:
More examples are available in part 4 ofGPU Gems 2.[37]
The use of GPUs for numerical linear algebra began at least in 2001.[38] GPUs have been used for Gauss–Seidel solvers, conjugate gradients, and other methods.[39]
GPUs are designed specifically for graphics and thus are very restrictive in operations and programming. Due to their design, GPUs are only effective for problems that can be solved usingstream processingand the hardware can only be used in certain ways.
The following discussion referring to vertices, fragments and textures concerns mainly the legacy model of GPGPU programming, where graphics APIs (OpenGLorDirectX) were used to perform general-purpose computation. With the introduction of theCUDA(Nvidia, 2007) andOpenCL(vendor-independent, 2008) general-purpose computing APIs, in new GPGPU codes it is no longer necessary to map the computation to graphics primitives. The stream processing nature of GPUs remains valid regardless of the APIs used. (See e.g.,[40])
GPUs can only process independent vertices and fragments, but can process many of them in parallel. This is especially effective when the programmer wants to process many vertices or fragments in the same way. In this sense, GPUs are stream processors – processors that can operate in parallel by running one kernel on many records in a stream at once.
A stream is simply a set of records that require similar computation. Streams provide data parallelism. Kernels are the functions that are applied to each element in the stream. In GPUs, vertices and fragments are the elements in streams, and vertex and fragment shaders are the kernels to be run on them. For each element we can only read from the input, perform operations on it, and write to the output. It is permissible to have multiple inputs and multiple outputs, but never a piece of memory that is both readable and writable.
Arithmetic intensity is defined as the number of operations performed per word of memory transferred. It is important for GPGPU applications to have high arithmetic intensity; otherwise, memory access latency will limit the computational speedup.[41]
Ideal GPGPU applications have large data sets, high parallelism, and minimal dependency between data elements.
There are a variety of computational resources available on the GPU:
In fact, a program can substitute a write-only texture for output instead of the framebuffer. This is done either through Render to Texture (RTT), Render-To-Backbuffer-Copy-To-Texture (RTBCTT), or the more recent stream-out.
The most common form for a stream to take in GPGPU is a 2D grid because this fits naturally with the rendering model built into GPUs. Many computations naturally map into grids: matrix algebra, image processing, physically based simulation, and so on.
Since textures are used as memory, texture lookups are then used as memory reads. Certain operations can be done automatically by the GPU because of this.
Compute kernelscan be thought of as the body ofloops. For example, a programmer operating on a grid on the CPU might have code that looks like this:
On the GPU, the programmer only specifies the body of the loop as the kernel and what data to loop over by invoking geometry processing.
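In the newer compute APIs the same idea is explicit. For instance (a hedged CUDA C++ sketch, not from the original graphics-API model), a CPU loop that scales a grid, for (int i = 0; i < n; ++i) y[i] = 2.0f * x[i];, becomes just its body; the "loop" is expressed by launching one thread per element:

__global__ void scale(const float* x, float* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's index
    if (i < n) y[i] = 2.0f * x[i];                  // the former loop body
}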
In sequential code it is possible to control the flow of the program using if-then-else statements and various forms of loops. Such flow control structures have only recently been added to GPUs.[42]Conditional writes could be performed using a properly crafted series of arithmetic/bit operations, but looping and conditional branching were not possible.
More recent GPUs allow branching, but usually with a performance penalty. Branching should generally be avoided in inner loops, whether in CPU or GPU code, and various methods, such as static branch resolution, pre-computation, predication, loop splitting,[43] and Z-cull,[44] can be used to achieve branching when hardware support does not exist.
The map operation simply applies the given function (the kernel) to every element in the stream. A simple example is multiplying each value in the stream by a constant (increasing the brightness of an image). The map operation is simple to implement on the GPU. The programmer generates a fragment for each pixel on screen and applies a fragment program to each one. The result stream of the same size is stored in the output buffer.
Some computations require calculating a smaller stream (possibly a stream of only one element) from a larger stream. This is called a reduction of the stream. Generally, a reduction can be performed in multiple steps. The results from the prior step are used as the input for the current step and the range over which the operation is applied is reduced until only one stream element remains.
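A hedged CUDA C++ sketch of one such step: each block folds its elements into a single partial sum in shared memory, and the kernel is re-applied to the partial sums until one element remains (launched with blockDim.x * sizeof(float) bytes of shared memory):

__global__ void reduce_sum(const float* in, float* out, int n) {
    extern __shared__ float buf[];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    buf[tid] = (i < n) ? in[i] : 0.0f;   // pad the tail with the identity
    __syncthreads();
    // Tree reduction within the block: halve the active threads each step.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) buf[tid] += buf[tid + s];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = buf[0];  // one partial sum per block
}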
Stream filtering is essentially a non-uniform reduction. Filtering involves removing items from the stream based on some criteria.
The scan operation, also termed parallel prefix sum, takes in a vector (stream) of data elements and an (arbitrary) associative binary function '+' with an identity element 'i'. If the input is [a0, a1, a2, a3, ...], an exclusive scan produces the output [i, a0, a0 + a1, a0 + a1 + a2, ...], while an inclusive scan produces the output [a0, a0 + a1, a0 + a1 + a2, a0 + a1 + a2 + a3, ...] and does not require an identity to exist. While at first glance the operation may seem inherently serial, efficient parallel scan algorithms are possible and have been implemented on graphics processing units. The scan operation has uses in e.g., quicksort and sparse matrix-vector multiplication.[40][45][46][47]
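Sequential host-side sketches make the two variants concrete (for clarity only; the parallel GPU formulations are considerably more involved):

// Inclusive scan: out[k] = a[0] + ... + a[k].
void inclusive_scan(const float* a, float* out, int n) {
    float acc = 0.0f;
    for (int k = 0; k < n; ++k) { acc += a[k]; out[k] = acc; }
}

// Exclusive scan: out[k] = i + a[0] + ... + a[k-1], with out[0] = i.
void exclusive_scan(const float* a, float* out, int n) {
    float acc = 0.0f;                 // the identity element 'i' for '+'
    for (int k = 0; k < n; ++k) { out[k] = acc; acc += a[k]; }
}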
Thescatteroperation is most naturally defined on the vertex processor. The vertex processor is able to adjust the position of thevertex, which allows the programmer to control where information is deposited on the grid. Other extensions are also possible, such as controlling how large an area the vertex affects.
The fragment processor cannot perform a direct scatter operation because the location of each fragment on the grid is fixed at the time of the fragment's creation and cannot be altered by the programmer. However, a logical scatter operation may sometimes be recast or implemented with another gather step. A scatter implementation would first emit both an output value and an output address. An immediately following gather operation uses address comparisons to see whether the output value maps to the current output slot.
In dedicatedcompute kernels, scatter can be performed by indexed writes.
Gatheris the reverse of scatter. After scatter reorders elements according to a map, gather can restore the order of the elements according to the map scatter used. In dedicated compute kernels, gather may be performed by indexed reads. In other shaders, it is performed with texture-lookups.
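Hedged CUDA C++ sketches of both operations, using an index map idx (all names illustrative):

// Scatter: each thread writes its element to a mapped location.
__global__ void scatter(const float* in, float* out, const int* idx, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[idx[i]] = in[i];   // indexed write
}

// Gather: each thread reads its element from a mapped location.
__global__ void gather(const float* in, float* out, const int* idx, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[idx[i]];   // indexed read
}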
The sort operation transforms an unordered set of elements into an ordered set of elements. The most common implementation on GPUs is usingradix sortfor integer and floating point data and coarse-grainedmerge sortand fine-grainedsorting networksfor general comparable data.[48][49]
The search operation allows the programmer to find a given element within the stream, or possibly find neighbors of a specified element. Mostly the search method used isbinary searchon sorted elements.
A variety of data structures can be represented on the GPU:
The following are some of the areas where GPUs have been used for general purpose computing:
GPGPU usage in Bioinformatics:[65][90]
|
https://en.wikipedia.org/wiki/GPGPU
|
Hyper-threading(officially calledHyper-Threading TechnologyorHT Technologyand abbreviated asHTTorHT) isIntel'sproprietarysimultaneous multithreading(SMT) implementation used to improveparallelization of computations(doing multiple tasks at once) performed onx86microprocessors. It was introduced onXeonserverprocessorsin February 2002 and onPentium 4desktop processors in November 2002.[4]Since then, Intel has included this technology inItanium,Atom, andCore 'i' SeriesCPUs, among others.[5]
For each processor core that is physically present, the operating system addresses two virtual (logical) cores and shares the workload between them when possible. The main function of hyper-threading is to increase the number of independent instructions in the pipeline; it takes advantage of superscalar architecture, in which multiple instructions operate on separate data in parallel. With HTT, one physical core appears as two processors to the operating system, allowing concurrent scheduling of two processes per core. In addition, two or more processes can use the same resources: if resources for one process are not available, then another process can continue if its resources are available.
In addition to requiring simultaneous multithreading support in the operating system, hyper-threading can be properly utilized only with an operating system specifically optimized for it.[6]
Hyper-Threading Technology is a form of simultaneousmultithreadingtechnology introduced by Intel, while the concept behind the technology has been patented bySun Microsystems. Architecturally, a processor with Hyper-Threading Technology consists of two logical processors per core, each of which has its own processor architectural state. Each logical processor can be individually halted, interrupted or directed to execute a specified thread, independently from the other logical processor sharing the same physical core.[8]
Unlike a traditional dual-processor configuration that uses two separate physical processors, the logical processors in a hyper-threaded core share the execution resources. These resources include the execution engine, caches, and system bus interface; the sharing of resources allows two logical processors to work with each other more efficiently, and allows a logical processor to borrow resources from a stalled logical core (assuming both logical cores are associated with the same physical core). A processor stalls when it must wait for data it has requested, in order to finish processing the present thread. The degree of benefit seen when using a hyper-threaded, or multi-core, processor depends on the needs of the software, and how well it and the operating system are written to manage the processor efficiently.[8]
Hyper-threading works by duplicating certain sections of the processor—those that store thearchitectural state—but not duplicating the mainexecution resources. This allows a hyper-threading processor to appear as the usual "physical" processor plus an extra "logical" processor to the host operating system (HTT-unaware operating systems see two "physical" processors), allowing the operating system to schedule two threads or processes simultaneously and appropriately. When execution resources in a hyper-threaded processor are not in use by the current task, and especially when the processor is stalled, those execution resources can be used to execute another scheduled task. (The processor may stall due to acache miss,branch misprediction, ordata dependency.)[9]
This technology is transparent to operating systems and programs. The minimum that is required to take advantage of hyper-threading is symmetric multiprocessing (SMP) support in the operating system, since the logical processors appear no different to the operating system than physical processors.
It is possible to optimize operating system behavior on multi-processor, hyper-threading capable systems. For example, consider an SMP system with two physical processors that are both hyper-threaded (for a total of four logical processors). If the operating system's thread scheduler is unaware of hyper-threading, it will treat all four logical processors the same. If only two threads are eligible to run, it might choose to schedule those threads on the two logical processors that happen to belong to the same physical processor. That processor would be extremely busy, and would share execution resources, while the other processor would remain idle, leading to poorer performance than if the threads were scheduled on different physical processors. This problem can be avoided by improving the scheduler to treat logical processors differently from physical processors, which is, in a sense, a limited form of the scheduler changes required for NUMA systems.
The first published paper describing what is now known as hyper-threading in a general-purpose computer was written by Edward S. Davidson and Leonard E. Shar in 1973.[10]
Denelcor, Inc. introduced multi-threading with the Heterogeneous Element Processor (HEP) in 1982. The HEP pipeline could not hold multiple instructions from the same process. Only one instruction from a given process was allowed to be present in the pipeline at any point in time. Should an instruction from a given process block the pipe, instructions from other processes would continue after the pipeline drained.
A US patent for the technology behind hyper-threading was granted to Kenneth Okin at Sun Microsystems in November 1994. At that time, CMOS process technology was not advanced enough to allow for a cost-effective implementation.[11]
Intel implemented hyper-threading on an x86 architecture processor in 2002 with the Foster MP-based Xeon. It was also included on the 3.06 GHz Northwood-based Pentium 4 in the same year, and remained a feature of every Pentium 4 HT, Pentium 4 Extreme Edition and Pentium Extreme Edition processor since. The Intel Core and Core 2 processor lines (2006) that succeeded the Pentium 4 did not use hyper-threading, because the Core microarchitecture was a descendant of the older P6 microarchitecture. The P6 microarchitecture was used in earlier iterations of Pentium processors, namely the Pentium Pro, Pentium II and Pentium III (plus their Celeron and Xeon derivatives at the time). Windows 2000 SP3 and Windows XP SP1 added support for hyper-threading.
Intel released the Nehalem microarchitecture (Core i7) in November 2008, in which hyper-threading made a return. The first-generation Nehalem processors contained four physical cores and effectively scaled to eight threads. Since then, both two- and six-core models have been released, scaling to four and twelve threads respectively.[12] Earlier Intel Atom cores were in-order processors, sometimes with hyper-threading ability, for low-power mobile PCs and low-price desktop PCs.[13] The Itanium 9300 launched with eight threads per processor (two threads per core) through enhanced hyper-threading technology. The next model, the Itanium 9500 (Poulson), features a 12-wide issue architecture, with eight CPU cores and support for eight more virtual cores via hyper-threading.[14] The Intel Xeon 5500 server chips also utilize two-way hyper-threading.[15][16]
According to Intel, the first hyper-threading implementation used only 5% more die area than the comparable non-hyperthreaded processor, but the performance was 15–30% better.[17][18] Intel claims up to a 30% performance improvement compared with an otherwise identical, non-simultaneous multithreading Pentium 4. Tom's Hardware states: "In some cases a P4 running at 3.0 GHz with HT on can even beat a P4 running at 3.6 GHz with HT turned off."[19] Intel also claims significant performance improvements with a hyper-threading-enabled Pentium 4 processor in some artificial-intelligence algorithms.
Overall, the early performance history of hyper-threading was mixed. As one commentary on high-performance computing from November 2002 notes:[20]
Hyper-Threading can improve the performance of someMPIapplications, but not all. Depending on the cluster configuration and, most importantly, the nature of the application running on the cluster, performance gains can vary or even be negative. The next step is to use performance tools to understand what areas contribute to performance gains and what areas contribute to performance degradation.
As a result, performance improvements are very application-dependent;[21] when running two programs that each require the full attention of the processor, one or both of the programs can actually slow down slightly when Hyper-Threading Technology is turned on.[22] This is due to the replay system of the Pentium 4 tying up valuable execution resources, equalizing the processor resources between the two programs, which adds a varying amount of execution time. The Pentium 4 "Prescott" and the Xeon "Nocona" processors received a replay queue that reduces the execution time needed for the replay system and completely overcomes the performance penalty.[23]
According to a November 2009 analysis by Intel, hyper-threading increases overall processing latency when the extra threads do not produce significant overall throughput gains, an effect that varies by application.[21] The negative effects become smaller as more simultaneous threads can effectively use the additional hardware resources provided by hyper-threading.[24] A similar performance analysis is available for the effects of hyper-threading when used to handle tasks related to managing network traffic, such as processing interrupt requests generated by network interface controllers (NICs).[25] Another paper claims no performance improvements when hyper-threading is used for interrupt handling.[26]
When the first HT processors were released, many operating systems were not optimized for hyper-threading technology (e.g. Windows 2000 and Linux kernels older than 2.4).[27]
In 2006, hyper-threading was criticised for energy inefficiency.[28] For example, ARM (a specialized, low-power CPU design company) stated that simultaneous multithreading can use up to 46% more power than ordinary dual-core designs. Furthermore, they claimed that SMT increases cache thrashing by 42%, whereas dual core results in a 37% decrease.[29]
In 2010, ARM said it might include simultaneous multithreading in its future chips;[30]however, this was rejected in favor of their 2012 64-bit design.[31]ARM produced SMT cores in 2018.[32]
In 2013, Intel dropped SMT in favor of out-of-order execution for its Silvermont processor cores, as Intel found this gave better performance with better power efficiency than a lower number of cores with SMT.[33]
In 2017, it was revealed that Intel's Skylake and Kaby Lake processors had a bug in their implementation of hyper-threading that could cause data loss.[34] Microcode updates were later released to address the issue.[35]
In 2019, with Coffee Lake, Intel temporarily moved away from including hyper-threading in mainstream Core i7 desktop processors, reserving it for highest-end Core i9 parts and Pentium Gold CPUs.[36] It also began to recommend disabling hyper-threading, as new CPU vulnerability attacks were revealed which could be mitigated by disabling HT.[37]
In May 2005, Colin Percival demonstrated that a malicious thread on a Pentium 4 can use a timing-based side-channel attack to monitor the memory access patterns of another thread with which it shares a cache, allowing the theft of cryptographic information. This is not actually a timing attack, as the malicious thread measures the time of only its own execution. Potential solutions include the processor changing its cache eviction strategy, or the operating system preventing the simultaneous execution, on the same physical core, of threads with different privileges.[38] In 2018, the OpenBSD operating system disabled hyper-threading "in order to avoid data potentially leaking from applications to other software" caused by the Foreshadow/L1TF vulnerabilities.[39][40] In 2019, a set of vulnerabilities led security experts to recommend disabling hyper-threading on all devices.[41]
|
https://en.wikipedia.org/wiki/Hyper-threading
|
The Multicore Association, founded in 2005, is a member-funded, non-profit industry consortium focused on the creation of open-standard APIs, specifications, and guidelines that allow system developers and programmers to more readily adopt multicore technology into their applications.
The consortium provides a neutral forum for vendors and developers who are interested in, working with, and/or proliferating multicore-related products, including processors, infrastructure, devices, software, and applications. Its members represent vendors of processors, operating systems, compilers, development tools, debuggers, ESL/EDA tools, and simulators, as well as application and system developers.
In 2008, the Multicore Communications API working group released the consortium's first specification, referred to as MCAPI. MCAPI is a message-passing API that captures the basic elements of communication and synchronization required for closely distributed (multiple cores on a chip and/or chips on a circuit board) embedded systems. The target systems for MCAPI span multiple dimensions of heterogeneity (e.g., core heterogeneity, interconnect-fabric heterogeneity, memory heterogeneity, operating-system heterogeneity, software-toolchain heterogeneity, and programming-language heterogeneity).
In 2011, the MCAPI working group released MCAPI 2.0. The enhanced version adds a level of hierarchy to the network of MCAPI nodes through the introduction of "domains", which can be used in a variety of implementation-specific ways, such as representing all the cores on a given chip or dividing a topology into public and secure areas. MCAPI 2.0 also adds three new types of initialization parameters: node attributes, implementation-specific configurations, and implementation information (such as the initial network topology or the MCAPI version being executed). The MCAPI WG is chaired by Sven Brehmer.
In 2011, the Multicore Resource Management API working group released its first specification, referred to as MRAPI. MRAPI is an industry-standard API that specifies essential application-level resource management capabilities. Multicore applications require this API to allow coordinated concurrent access to system resources in situations where (1) there are not enough resources to dedicate to individual tasks or processors, and/or (2) the run-time system does not provide a uniformly accessible mechanism for coordinating resource sharing. This API is applicable to both SMP and AMP embedded multicore implementations (where AMP systems may be heterogeneous in both software and hardware). MRAPI (in conjunction with other Multicore Association APIs) can serve as a valuable tool for implementing applications, as well as for implementing full-featured resource managers and other types of layered services. The MRAPI WG was chaired by Jim Holt.
In 2013, the Multicore Task Management API (MTAPI) working group released its first specification. MTAPI is a standard specification for an application program interface (API) that supports the coordination of tasks on embedded parallel systems with homogeneous and heterogeneous cores. Core features of MTAPI are runtime scheduling and mapping of tasks to processor cores. Due to its dynamic behavior, MTAPI is intended for optimizing throughput on multicore systems, while allowing the software developer to tune the task scheduling strategy for latency and fairness. This working group was chaired by Urs Gleim of Siemens.
In 2013, the Multicore Programming Practices (MPP) working group delivered a multicore software programming guide for the industry that aids in improving consistency and understanding of multicore programming issues. The MPP guide presents best practices for the C/C++ languages, aimed at engineers who are approaching multicore programming. This working group was chaired by Rob Oshana of NXP Semiconductors and David Stewart of CriticalBlue.
In 2015, the Software/Hardware Interface for Multicore/Manycore (SHIM) working group delivered a specification defining an architecture description standard useful for software design. Among the architectural features SHIM describes are the hardware topology (processor cores, accelerators, caches, and inter-core communication channels, with selected details of each element) and instruction, memory, and communication performance information. This working group was chaired by Masaki Gondo of eSOL.[1]
The OpenAMP Multicore Framework is an open source framework for developing application software for asymmetric multiprocessing (AMP) systems,[1] similar to OpenMP for symmetric multiprocessing systems.[2]
There are several implementations of OpenAMP Multicore Framework, each one intended to interoperate with all the other implementations over the OpenAMP API.
One implementation of the Multicore Framework, originally developed for the Xilinx Zynq, has been open-sourced under the OpenAMP open source project.[3][4] Mentor Embedded Multicore Framework (MEMF) is a proprietary implementation of the OpenAMP standard.[4]
The OpenAMP API standard is managed under the umbrella of the Multicore Association.[4]
|
https://en.wikipedia.org/wiki/Multicore_Association
|
The current kernel languages are OpenCL C 3.0 revision V3.0.11[6] and C++ for OpenCL 1.0 and 2021.[7]
OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators. OpenCL specifies a programming language (based on C99) for programming these devices and application programming interfaces (APIs) to control the platform and execute programs on the compute devices. OpenCL provides a standard interface for parallel computing using task- and data-based parallelism.
OpenCL is an open standard maintained by the Khronos Group, a non-profit, open standards organisation. Conformant implementations (those that have passed the Conformance Test Suite) are available from a range of companies including AMD, Arm, Cadence, Google, Imagination, Intel, Nvidia, Qualcomm, Samsung, SPI and Verisilicon.[8][9]
OpenCL views a computing system as consisting of a number of compute devices, which might be central processing units (CPUs) or "accelerators" such as graphics processing units (GPUs), attached to a host processor (a CPU). It defines a C-like language for writing programs. Functions executed on an OpenCL device are called "kernels".[10]: 17 A single compute device typically consists of several compute units, which in turn comprise multiple processing elements (PEs). A single kernel execution can run on all or many of the PEs in parallel. How a compute device is subdivided into compute units and PEs is up to the vendor; a compute unit can be thought of as a "core", but the notion of core is hard to define across all the types of devices supported by OpenCL (or even within the category of "CPUs"),[11]: 49–50 and the number of compute units may not correspond to the number of cores claimed in vendors' marketing literature (which may actually be counting SIMD lanes).[12]
In addition to its C-like programming language, OpenCL defines an application programming interface (API) that allows programs running on the host to launch kernels on the compute devices and manage device memory, which is (at least conceptually) separate from host memory. Programs in the OpenCL language are intended to be compiled at run time, so that OpenCL-using applications are portable between implementations for various host devices.[13] The OpenCL standard defines host APIs for C and C++; third-party APIs exist for other programming languages and platforms such as Python,[14] Java, Perl,[15] D[16] and .NET.[11]: 15 An implementation of the OpenCL standard consists of a library that implements the API for C and C++, and an OpenCL C compiler for the compute devices targeted.
In order to open the OpenCL programming model to other languages, or to protect the kernel source from inspection, the Standard Portable Intermediate Representation (SPIR)[17] can be used as a target-independent way to ship kernels between a front-end compiler and the OpenCL back-end.
More recently, the Khronos Group has ratified SYCL,[18] a higher-level programming model for OpenCL as a single-source eDSL based on pure C++17, to improve programming productivity. Developers interested in C++ kernels but not in the SYCL single-source programming style can use C++ features with compute kernel sources written in the "C++ for OpenCL" language.[19]
OpenCL defines a four-level memory hierarchy for the compute device:[13]
- global memory: shared by all processing elements, but with high access latency;
- read-only memory: smaller, lower latency, writable by the host CPU but not by the compute devices;
- local memory: shared by a group of processing elements;
- per-element private memory (registers).
Not every device needs to implement each level of this hierarchy in hardware. Consistency between the various levels in the hierarchy is relaxed, and only enforced by explicit synchronization constructs, notably barriers.
Devices may or may not share memory with the host CPU.[13] The host API provides handles on device memory buffers and functions to transfer data back and forth between host and devices.
The programming language used to write compute kernels is called the kernel language. OpenCL adopts C/C++-based languages to specify the kernel computations performed on the device, with some restrictions and additions to facilitate efficient mapping to the heterogeneous hardware resources of accelerators. Traditionally OpenCL C was used to program the accelerators in the OpenCL standard; later, the C++ for OpenCL kernel language was developed, which inherited all functionality from OpenCL C but allowed the use of C++ features in kernel sources.
OpenCL C[20] is a C99-based language dialect adapted to fit the device model in OpenCL. Memory buffers reside in specific levels of the memory hierarchy, and pointers are annotated with the region qualifiers __global, __local, __constant, and __private, reflecting this. Instead of a device program having a main function, OpenCL C functions are marked __kernel to signal that they are entry points into the program to be called from the host program. Function pointers, bit fields and variable-length arrays are omitted, and recursion is forbidden.[21] The C standard library is replaced by a custom set of standard functions, geared toward math programming.
OpenCL C is extended to facilitate use of parallelism with vector types and operations, synchronization, and functions to work with work-items and work-groups.[21] In particular, besides scalar types such as float and double, which behave similarly to the corresponding types in C, OpenCL provides fixed-length vector types such as float4 (a 4-vector of single-precision floats); such vector types are available in lengths two, three, four, eight and sixteen for various base types.[20]: § 6.1.2 Vectorized operations on these types are intended to map onto SIMD instruction sets, e.g., SSE or VMX, when running OpenCL programs on CPUs.[13] Other specialized types include 2-D and 3-D image types.[20]: 10–11
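As a minimal illustration of these extensions (a sketch, not taken from the specification; the kernel name saxpy4 and its parameters are invented for this example), the following kernel scales and adds float4 vectors, one element per work-item:

    // Hypothetical kernel: y = a*x + y over float4 elements.
    // __global marks buffers in device global memory; get_global_id(0)
    // gives this work-item's index in the 1-D launch range.
    __kernel void saxpy4(float a,
                         __global const float4 *x,
                         __global float4 *y)
    {
        size_t i = get_global_id(0);
        y[i] = a * x[i] + y[i];   // component-wise; may map to SIMD on CPUs
    }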
The following is a matrix–vector multiplication algorithm in OpenCL C.
The kernel function matvec computes, in each invocation, the dot product of a single row of a matrix A and a vector x:

$y_i = a_{i,:} \cdot x = \sum_j a_{i,j} x_j.$
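A sketch of such a kernel follows (closely modeled on the standard OpenCL C formulation of this example; the exact parameter order is illustrative):

    // Each work-item computes one element of y: the dot product of
    // row i of A (stored row-major, ncols columns) with the vector x.
    __kernel void matvec(__global const float *A,
                         __global const float *x,
                         uint ncols,
                         __global float *y)
    {
        size_t i = get_global_id(0);            // row index = work-item id
        __global const float *row = A + i * ncols;
        float sum = 0.0f;
        for (uint j = 0; j < ncols; j++)
            sum += row[j] * x[j];
        y[i] = sum;
    }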
To extend this into a full matrix–vector multiplication, the OpenCL runtime maps the kernel over the rows of the matrix. On the host side, the clEnqueueNDRangeKernel function does this; it takes as arguments the kernel to execute, its arguments, and a number of work-items, corresponding to the number of rows in the matrix A.
This example will load a fast Fourier transform (FFT) implementation and execute it. The implementation is shown below.[22] The code asks the OpenCL library for the first available graphics card, creates memory buffers for reading and writing (from the perspective of the graphics card), JIT-compiles the FFT kernel and then finally asynchronously runs the kernel. The result from the transform is not read in this example.
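The original listing is not reproduced here; the following condensed sketch shows the same host-side sequence using the standard C API (error handling and the kernel's local-memory arguments are omitted; identifiers other than the kernel name fft1D_1024 are invented for this sketch):

    // Condensed host-side sketch: pick the first GPU, create buffers,
    // JIT-compile the kernel source, and launch it asynchronously.
    #include <CL/cl.h>

    void run_fft(const char *kernel_src, const float *src, size_t num_entries) {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

        // Buffers for reading and writing, from the device's perspective;
        // each complex entry occupies two floats.
        cl_mem in = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof(float) * 2 * num_entries,
                                   (void *)src, NULL);
        cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                    sizeof(float) * 2 * num_entries, NULL, NULL);

        // JIT-compile the kernel source for this device.
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel kernel = clCreateKernel(prog, "fft1D_1024", NULL);

        clSetKernelArg(kernel, 0, sizeof(cl_mem), &in);
        clSetKernelArg(kernel, 1, sizeof(cl_mem), &out);

        // Launch asynchronously; the result is not read back here.
        size_t global = num_entries, local = 64;
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local,
                               0, NULL, NULL);
    }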
The actual calculation, inside the file "fft1D_1024_kernel_src.cl", is based on "Fitting FFT onto the G80 Architecture".[23]
A full, open source implementation of an OpenCL FFT can be found on Apple's website.[24]
In 2020, Khronos announced[25] the transition to the community-driven C++ for OpenCL programming language,[26] which provides features from C++17 in combination with the traditional OpenCL C features. This language allows developers to leverage a rich variety of language features from standard C++ while preserving backward compatibility with OpenCL C. It opens up a smooth transition path to C++ functionality for OpenCL kernel-code developers, who can continue using a familiar programming flow and tools, as well as existing extensions and libraries available for OpenCL C.
The semantics of the language are described in the documentation published in the releases of the OpenCL-Docs[27] repository hosted by the Khronos Group, but it is currently not ratified by the Khronos Group. The C++ for OpenCL language is not documented in a stand-alone document; it is based on the specifications of C++ and OpenCL C. The open source Clang compiler has supported C++ for OpenCL since release 9.[28]
C++ for OpenCL was originally developed as a Clang compiler extension and appeared in release 9.[29] As it was tightly coupled with OpenCL C and did not contain any Clang-specific functionality, its documentation was re-hosted to the OpenCL-Docs repository[27] of the Khronos Group, along with the sources of other specifications and reference cards. The first official release of this document, describing C++ for OpenCL version 1.0, was published in December 2020.[30] C++ for OpenCL 1.0 contains features from C++17 and is backward compatible with OpenCL C 2.0. In December 2021, a new provisional C++ for OpenCL version 2021 was released, which is fully compatible with the OpenCL 3.0 standard.[31] A work-in-progress draft of the latest C++ for OpenCL documentation can be found on the Khronos website.[32]
C++ for OpenCL supports most of the features (syntactically and semantically) of OpenCL C, except for nested parallelism and blocks.[33] However, there are minor differences in some supported features, mainly related to differences in semantics between C++ and C. For example, C++ is stricter with implicit type conversions, and it does not support the restrict type qualifier.[33] The following C++ features are not supported by C++ for OpenCL: virtual functions, the dynamic_cast operator, non-placement new/delete operators, exceptions, pointers to member functions, references to functions, and the C++ standard libraries.[33] C++ for OpenCL extends the concept of separate memory regions (address spaces) from OpenCL C to C++ features: functional casts, templates, class members, references, lambda functions, and operators. Most of the C++ features are not available for the kernel functions themselves, e.g. overloading or templating, or arbitrary class layout in parameter types.[33]
The following code snippet illustrates how kernels with complex-number arithmetic can be implemented in the C++ for OpenCL language, with convenient use of C++ features.
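A sketch in the spirit of the documented example (type and helper names such as complex_t and compute_sp are illustrative, not mandated by the language):

    // A templated complex type, usable from kernels of different precisions.
    template <typename T>
    struct complex_t {
        T re, im;
        // Complex multiplication, callable from device code.
        complex_t operator*(const complex_t &o) const {
            return {re * o.re - im * o.im, re * o.im + im * o.re};
        }
    };

    template <typename T>
    void multiply(__global const T *in, __global T *out) {
        auto i = get_global_id(0);
        complex_t<T> a{in[4 * i], in[4 * i + 1]};     // first factor
        complex_t<T> b{in[4 * i + 2], in[4 * i + 3]}; // second factor
        auto r = a * b;
        out[2 * i] = r.re;
        out[2 * i + 1] = r.im;
    }

    // Kernel entry point for single precision; other precisions can reuse
    // the same template with a different type argument.
    __kernel void compute_sp(__global const float *in, __global float *out) {
        multiply(in, out);
    }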
The C++ for OpenCL language can be used for the same applications or libraries, and in the same way, as the OpenCL C language. Due to the rich variety of C++ language features, applications written in C++ for OpenCL can express complex functionality more conveniently than applications written in OpenCL C; in particular, the generic programming paradigm of C++ is very attractive to library developers.
C++ for OpenCL sources can be compiled by OpenCL drivers that support the cl_ext_cxx_for_opencl extension.[34] Arm announced support for this extension in December 2020.[35] However, due to the increasing complexity of the algorithms accelerated on OpenCL devices, it is expected that more applications will compile C++ for OpenCL kernels offline, using stand-alone compilers such as Clang,[36] into an executable binary format or a portable binary format such as SPIR-V.[37] Such an executable can be loaded during OpenCL application execution using a dedicated OpenCL API.[38]
Binaries compiled from sources in C++ for OpenCL 1.0 can be executed on OpenCL 2.0 conformant devices. Depending on the language features used in the kernel sources, such binaries can also be executed on devices supporting earlier OpenCL versions or OpenCL 3.0.
Aside from OpenCL drivers, kernels written in C++ for OpenCL can be compiled for execution on Vulkan devices using the clspv[39] compiler and the clvk[40] runtime layer, in the same way as OpenCL C kernels.
C++ for OpenCL is an open language developed by the community of contributors listed in its documentation.[32] New contributions to the language's semantic definition or to open source tooling support are accepted from anyone interested, as long as they are aligned with the main design philosophy, and they are reviewed and approved by the experienced contributors.[19]
OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Qualcomm, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008, the Khronos Compute Working Group was formed[41] with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for five months to finish the technical details of the specification for OpenCL 1.0 by November 18, 2008.[42] This technical specification was reviewed by the Khronos members and approved for public release on December 8, 2008.[43]
OpenCL 1.0 was released with Mac OS X Snow Leopard on August 28, 2009. According to an Apple press release:[44]
Snow Leopard further extends support for modern hardware with Open Computing Language (OpenCL), which lets any application tap into the vast gigaflops of GPU computing power previously available only to graphics applications. OpenCL is based on the C programming language and has been proposed as an open standard.
AMD decided to support OpenCL instead of the now deprecated Close to Metal in its Stream framework.[45][46] RapidMind announced their adoption of OpenCL underneath their development platform to support GPUs from multiple vendors with one interface.[47] On December 9, 2008, Nvidia announced its intention to add full support for the OpenCL 1.0 specification to its GPU Computing Toolkit.[48] On October 30, 2009, IBM released its first OpenCL implementation as a part of the XL compilers.[49]
On graphics cards, OpenCL can accelerate some calculations by factors of up to 1000 compared with a normal CPU.[citation needed] Some important features of later OpenCL versions, such as double- and half-precision operations, are only optional in 1.0.[50]
OpenCL 1.1 was ratified by the Khronos Group on June 14, 2010,[51] and adds significant functionality for enhanced parallel programming flexibility and performance, including new data types such as 3-component vectors, operations on regions of a buffer, user events, and improved OpenGL interoperability.
On November 15, 2011, the Khronos Group announced the OpenCL 1.2 specification,[52] which added significant functionality over the previous versions in terms of performance and features for parallel programming. The most notable additions include device partitioning, separate compilation and linking of program objects, enhanced image support, and built-in kernels.
On November 18, 2013, the Khronos Group announced the ratification and public release of the finalized OpenCL 2.0 specification.[54] Updates and additions in OpenCL 2.0 include shared virtual memory, nested parallelism (device-side kernel enqueue), a generic address space, pipes, and C11-style atomics.
The ratification and release of the OpenCL 2.1 provisional specification was announced on March 3, 2015, at the Game Developer Conference in San Francisco. It was released on November 16, 2015.[55] It introduced the OpenCL C++ kernel language, based on a subset of C++14, while maintaining support for the preexisting OpenCL C kernel language. Vulkan and OpenCL 2.1 share SPIR-V as an intermediate representation, allowing high-level language front-ends to share a common compilation target. The OpenCL API was also updated, with additions such as subgroups promoted into the core and the ability to clone kernel objects.
AMD, ARM, Intel, HPC, and YetiWare have declared support for OpenCL 2.1.[56][57]
OpenCL 2.2 brings the OpenCL C++ kernel language into the core specification for significantly enhanced parallel programming productivity.[58][59][60] It was released on May 16, 2017.[61] A maintenance update with bug fixes was released in May 2018.[62]
The OpenCL 3.0 specification was released on September 30, 2020, after being in preview since April 2020. OpenCL 1.2 functionality has become a mandatory baseline, while all OpenCL 2.x and OpenCL 3.0 features were made optional. The specification retains the OpenCL C language and deprecates the OpenCL C++ kernel language, replacing it with the C++ for OpenCL language[19] based on a Clang/LLVM compiler which implements a subset of C++17 and SPIR-V intermediate code.[63][64][65] Version 3.0.7 of C++ for OpenCL, with some Khronos OpenCL extensions, was presented at IWOCL 21.[66] The current version is 3.0.11, with some new extensions and corrections.
NVIDIA, working closely with the Khronos OpenCL Working Group, improved Vulkan interop with semaphores and memory sharing.[67] The last minor update was 3.0.14, with bug fixes and a new extension for multiple devices.[68]
When releasing OpenCL 2.2, the Khronos Group announced that OpenCL would converge where possible with Vulkan to enable OpenCL software deployment flexibility over both APIs.[69][70] This has now been demonstrated by Adobe's Premiere Rush, which uses the clspv[39] open source compiler to compile significant amounts of OpenCL C kernel code to run on a Vulkan runtime for deployment on Android.[71] OpenCL has a forward-looking roadmap independent of Vulkan, with 'OpenCL Next' under development and targeting release in 2020. OpenCL Next may integrate extensions such as Vulkan/OpenCL interop, scratch-pad memory management, extended subgroups, SPIR-V 1.4 ingestion and SPIR-V extended debug info. OpenCL is also considering a Vulkan-like loader and layers, and a "flexible profile" for deployment flexibility on multiple accelerator types.[72]
OpenCL consists of a set of headers and a shared object that is loaded at runtime. An installable client driver (ICD) must be installed on the platform for every vendor class that the runtime needs to support. That is, for example, in order to support Nvidia devices on a Linux platform, the Nvidia ICD would need to be installed such that the OpenCL runtime (the ICD loader) would be able to locate the ICD for the vendor and redirect the calls appropriately. The standard OpenCL header is used by the consumer application; calls to each function are then proxied by the OpenCL runtime to the appropriate driver using the ICD. Each vendor must implement each OpenCL call in their driver.[73]
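As a minimal sketch of the loader in use (assuming the OpenCL headers and an ICD loader are installed; everything else here is invented for the example), the following program enumerates the platforms the loader has discovered, one per installed vendor ICD:

    // Enumerate the platforms the ICD loader has discovered; each
    // platform corresponds to one vendor's installed OpenCL driver.
    #include <cstdio>
    #include <CL/cl.h>

    int main() {
        cl_uint count = 0;
        clGetPlatformIDs(0, nullptr, &count);   // how many ICDs are installed?
        cl_platform_id platforms[16];
        if (count > 16) count = 16;
        clGetPlatformIDs(count, platforms, nullptr);
        for (cl_uint i = 0; i < count; ++i) {
            char name[256];
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                              sizeof name, name, nullptr);
            std::printf("platform %u: %s\n", i, name);
        }
    }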
The Apple,[74] Nvidia,[75] ROCm, RapidMind[76] and Gallium3D[77] implementations of OpenCL are all based on the LLVM compiler technology and use the Clang compiler as their frontend.
As of 2016, OpenCL runs on graphics processing units (GPUs), CPUs with SIMD instructions, FPGAs, Movidius Myriad 2, Adapteva Epiphany and DSPs.
To be officially conformant, an implementation must pass the Khronos Conformance Test Suite (CTS), with results being submitted to the Khronos Adopters Program.[174]The Khronos CTS code for all OpenCL versions has been available in open source since 2017.[175]
The Khronos Group maintains an extended list of OpenCL-conformant products.[4]
All standard-conformant implementations can be queried using one of the clinfo tools (there are multiple tools with the same name and similar feature set).[186][187][188]
Products and their version of OpenCL support include the following.[189] For OpenCL 3.0, all hardware with OpenCL 1.2+ is eligible, with OpenCL 2.x features being optional; a Khronos test suite has been available since 2020-10.[190][191] For OpenCL 2.2, no conformant implementations exist yet, although the Khronos test suite is ready, and with a driver update all hardware with 2.0 and 2.1 support could qualify.
A key feature of OpenCL is portability, via its abstracted memory and execution model; the programmer is not able to directly use hardware-specific technologies such as inline Parallel Thread Execution (PTX) for Nvidia GPUs unless they are willing to give up direct portability on other platforms. It is possible to run any OpenCL kernel on any conformant implementation.
However, performance of the kernel is not necessarily portable across platforms. Existing implementations have been shown to be competitive when kernel code is properly tuned, though, and auto-tuning has been suggested as a solution to the performance portability problem,[194] yielding "acceptable levels of performance" in experimental linear algebra kernels.[195] Portability of an entire application containing multiple kernels with differing behaviors was also studied, and shows that portability only required limited tradeoffs.[196]
A study at Delft University from 2011 that compared CUDA programs and their straightforward translation into OpenCL C found CUDA to outperform OpenCL by at most 30% on the Nvidia implementation. The researchers noted that their comparison could be made fairer by applying manual optimizations to the OpenCL programs, in which case there was "no reason for OpenCL to obtain worse performance than CUDA". The performance differences could mostly be attributed to differences in the programming model (especially the memory model) and to NVIDIA's compiler optimizations for CUDA compared to those for OpenCL.[194]
Another study at D-Wave Systems Inc. found that "The OpenCL kernel’s performance is between about 13% and 63% slower, and the end-to-end time is between about 16% and 67% slower" than CUDA's performance.[197]
The fact that OpenCL allows workloads to be shared by CPU and GPU, executing the same programs, means that programmers can exploit both by dividing work among the devices.[198] This leads to the problem of deciding how to partition the work, because the relative speeds of operations differ among the devices. Machine learning has been suggested to solve this problem: Grewe and O'Boyle describe a system of support-vector machines trained on compile-time features of the program that can decide the device partitioning problem statically, without actually running the programs to measure their performance.[199]
In a comparison of contemporary graphics cards from AMD's RDNA 2 and Nvidia's RTX series, OpenCL test results were inconclusive. Possible performance increases from the use of Nvidia's CUDA or OptiX were not tested.[200]
|
https://en.wikipedia.org/wiki/OpenCL
|
In computer science, a parallel random-access machine (parallel RAM or PRAM) is a shared-memory abstract machine. As its name indicates, the PRAM is intended as the parallel-computing analogy to the random-access machine (RAM) (not to be confused with random-access memory).[1] In the same way that the RAM is used by sequential-algorithm designers to model algorithmic performance (such as time complexity), the PRAM is used by parallel-algorithm designers to model parallel algorithmic performance (such as time complexity, where the number of processors assumed is typically also stated). Similar to the way in which the RAM model neglects practical issues such as access time to cache memory versus main memory, the PRAM model neglects such issues as synchronization and communication, but provides any (problem-size-dependent) number of processors. Algorithm cost, for instance, is estimated using two parameters, O(time) and O(time × processor_number).
Read/write conflicts, commonly termed interlocking, in accessing the same shared memory location simultaneously are resolved by one of the following strategies:
1. Exclusive read exclusive write (EREW): every memory cell can be read or written to by only one processor at a time.
2. Concurrent read exclusive write (CREW): multiple processors can read a memory cell, but only one can write at a time.
3. Exclusive read concurrent write (ERCW): rarely considered.
4. Concurrent read concurrent write (CRCW): multiple processors can read and write.
Here, E and C stand for 'exclusive' and 'concurrent' respectively. The read causes no discrepancies, while the concurrent write is further defined as:
- Common: all processors write the same value; otherwise the write is illegal.
- Arbitrary: only one arbitrary attempt succeeds, and the others retire.
- Priority: the processor with the highest rank wins.
- Reduction: a reduction operation, such as a sum, is applied to the written values.
Several simplifying assumptions are made while considering the development of algorithms for PRAM. They are:
- There is no limit on the number of processors in the machine.
- Any memory location is uniformly accessible from any processor.
- There is no limit on the amount of shared memory in the system.
- Resource contention is absent.
- The programs written on these machines are, in general, of type SIMD.
These kinds of algorithms are useful for understanding the exploitation of concurrency, dividing the original problem into similar sub-problems and solving them in parallel. The introduction of the formal 'P-RAM' model in Wyllie's 1979 thesis[4] had the aim of quantifying analysis of parallel algorithms in a way analogous to the Turing machine. The analysis focused on a MIMD model of programming using a CREW model, but showed that many variants, including implementing a CRCW model and implementing on an SIMD machine, were possible with only constant overhead.
PRAM algorithms cannot be parallelized on the conventional combination of CPU and dynamic random-access memory (DRAM), because DRAM does not allow concurrent access to a single bank (not even to different addresses in the bank). They can, however, be implemented in hardware that reads from and writes to the internal static random-access memory (SRAM) blocks of a field-programmable gate array (FPGA), using a CRCW algorithm.
However, the test for practical relevance of PRAM (or RAM) algorithms depends on whether their cost model provides an effective abstraction of some computer; the structure of that computer can be quite different from the abstract model. The knowledge of the layers of software and hardware that need to be inserted is beyond the scope of this article. But articles such as Vishkin (2011) demonstrate how a PRAM-like abstraction can be supported by the explicit multi-threading (XMT) paradigm, and articles such as Caragea & Vishkin (2011) demonstrate that a PRAM algorithm for the maximum flow problem can provide strong speedups relative to the fastest serial program for the same problem. The article Ghanim, Vishkin & Barua (2018) demonstrated that PRAM algorithms as-is can achieve competitive performance even without any additional effort to cast them as multi-threaded programs on XMT.
This is an example of SystemVerilog code which finds the maximum value in an array in only 2 clock cycles. It compares all the combinations of the elements in the array at the first clock, and merges the result at the second clock. It uses CRCW memory; m[i] <= 1 and maxNo <= data[i] are written concurrently. The concurrency causes no conflicts because the algorithm guarantees that the same value is written to the same memory. This code can be run on FPGA hardware.
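The SystemVerilog source is not reproduced here, but the underlying constant-time CRCW algorithm can be sketched sequentially. In the following C++ sketch (function and variable names invented for the example), each of the two loop nests corresponds to one parallel PRAM step:

    // Sequential sketch of the O(1)-time CRCW PRAM maximum algorithm.
    // Step 1: all n*n comparisons happen in one parallel step on a PRAM;
    // concurrent writes to m[i] are safe because every writer stores 0.
    // Step 2: the surviving index writes the maximum (all writers agree).
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int pram_max(const std::vector<int>& data) {
        const std::size_t n = data.size();
        std::vector<int> m(n, 1);   // m[i] == 1: data[i] may still be the max
        for (std::size_t i = 0; i < n; ++i)      // PRAM step 1
            for (std::size_t j = 0; j < n; ++j)
                if (data[i] < data[j])
                    m[i] = 0;       // "common" CRCW write: all writers write 0
        int maxNo = data.empty() ? 0 : data[0];
        for (std::size_t i = 0; i < n; ++i)      // PRAM step 2
            if (m[i])
                maxNo = data[i];    // concurrent writers all store the same value
        return maxNo;
    }

    int main() {
        std::cout << pram_max({3, 1, 4, 1, 5, 9, 2, 6}) << '\n'; // prints 9
    }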
|
https://en.wikipedia.org/wiki/Parallel_random_access_machine
|
A race condition or race hazard is the condition of an electronics, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events, leading to unexpected or inconsistent results. It becomes a bug when one or more of the possible behaviors is undesirable.
The term race condition was already in use by 1954, for example in David A. Huffman's doctoral thesis "The synthesis of sequential switching circuits".[1]
Race conditions can occur especially in logic circuits or multithreaded or distributed software programs. Using mutual exclusion can prevent race conditions in distributed software systems.
A typical example of a race condition may occur when a logic gate combines signals that have traveled along different paths from the same source. The inputs to the gate can change at slightly different times in response to a change in the source signal. The output may, for a brief period, change to an unwanted state before settling back to the designed state. Certain systems can tolerate such glitches, but if this output functions as a clock signal for further systems that contain memory, for example, the system can rapidly depart from its designed behaviour (in effect, the temporary glitch becomes a permanent glitch).
Consider, for example, a two-input AND gate fed with the logic $\text{output} = A \wedge \overline{A}$: a logic signal $A$ on one input and its negation, $\neg A$ (the ¬ is a Boolean negation), on the other. In theory, such a gate never outputs a true value: $A \wedge \overline{A} \neq 1$. If, however, changes in the value of $A$ take longer to propagate to the second input than to the first when $A$ changes from false to true, then a brief period will ensue during which both inputs are true, and so the gate's output will also be true.[2]
A practical example of a race condition can occur when logic circuitry is used to detect certain outputs of a counter. If all the bits of the counter do not change exactly simultaneously, there will be intermediate patterns that can trigger false matches.
A critical race condition occurs when the order in which internal variables are changed determines the eventual state that the state machine will end up in.
A non-critical race condition occurs when the order in which internal variables are changed does not determine the eventual state that the state machine will end up in.
A static race condition occurs when a signal and its complement are combined.
A dynamic race condition occurs when a signal change results in multiple transitions when only one is intended. It is due to interaction between gates, and can be eliminated by using no more than two levels of gating.
An essential race condition occurs when an input has two transitions in less than the total feedback propagation time. Sometimes these are cured using inductive delay line elements to effectively increase the time duration of an input signal.
Design techniques such as Karnaugh maps encourage designers to recognize and eliminate race conditions before they cause problems. Often logic redundancy can be added to eliminate some kinds of races.
As well as these problems, some logic elements can enter metastable states, which create further problems for circuit designers.
A race condition can arise in software when a computer program has multiple code paths that are executing at the same time. If the multiple code paths take a different amount of time than expected, they can finish in a different order than expected, which can cause software bugs due to unanticipated behavior. A race can also occur between two programs, resulting in security issues.
Critical race conditions cause invalid execution and software bugs. Critical race conditions often happen when the processes or threads depend on some shared state. Operations upon shared states are done in critical sections that must be mutually exclusive. Failure to obey this rule can corrupt the shared state.
A data race is a type of race condition. Data races are important parts of various formal memory models. The memory model defined in the C11 and C++11 standards specifies that a C or C++ program containing a data race has undefined behavior.[3][4]
A race condition can be difficult to reproduce and debug because the end result is nondeterministic and depends on the relative timing between interfering threads. Problems of this nature can therefore disappear when running in debug mode, adding extra logging, or attaching a debugger. A bug that disappears like this during debugging attempts is often referred to as a "Heisenbug". It is therefore better to avoid race conditions by careful software design.
Assume that two threads each increment the value of a global integer variable by 1. Ideally, each thread would read the variable, increment it, and write the new value back, with the two read-increment-write sequences executing one after the other; the final value would then be 2, as expected. However, if the two threads run simultaneously without locking or synchronization (via semaphores), the outcome of the operation could be wrong: both threads may read the old value 0 before either has written its result, in which case each writes back 1.
In this case, the final value is 1 instead of the expected result of 2. This occurs because here the increment operations are not mutually exclusive. Mutually exclusive operations are those that cannot be interrupted while accessing some resource such as a memory location.
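A minimal C++11 sketch of this race follows (the variable and function names are invented for the example); with a plain int the program has a data race and may print 1, while the std::atomic counter always reaches 2:

    // With a plain int this program has a data race (undefined behavior
    // in C++); with std::atomic<int> the increments are indivisible.
    #include <atomic>
    #include <iostream>
    #include <thread>

    int plain_counter = 0;               // unsynchronized shared state
    std::atomic<int> atomic_counter{0};  // race-free shared state

    void increment_both() {
        plain_counter += 1;                  // read-modify-write: racy
        atomic_counter.fetch_add(1);         // atomic read-modify-write
    }

    int main() {
        std::thread t1(increment_both), t2(increment_both);
        t1.join();
        t2.join();
        std::cout << "plain: " << plain_counter          // may print 1 or 2
                  << " atomic: " << atomic_counter << '\n'; // always 2
    }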
Not everyone regards data races as a subset of race conditions.[5] The precise definition of data race is specific to the formal concurrency model being used, but typically it refers to a situation where a memory operation in one thread could potentially attempt to access a memory location at the same time that a memory operation in another thread is writing to that memory location, in a context where this is dangerous. This implies that a data race is different from a race condition, as it is possible to have nondeterminism due to timing even in a program without data races, for example, in a program in which all memory accesses use only atomic operations.
This can be dangerous because on many platforms, if two threads write to a memory location at the same time, it may be possible for the memory location to end up holding a value that is some arbitrary and meaningless combination of the bits representing the values that each thread was attempting to write; this could result in memory corruption if the resulting value is one that neither thread attempted to write (sometimes this is called a 'torn write'). Similarly, if one thread reads from a location while another thread is writing to it, it may be possible for the read to return a value that is some arbitrary and meaningless combination of the bits representing the value that the memory location held before the write, and of the bits representing the value being written.
On many platforms, special memory operations are provided for simultaneous access; in such cases, typically simultaneous access using these special operations is safe, but simultaneous access using other memory operations is dangerous. Sometimes such special operations (which are safe for simultaneous access) are called atomic or synchronization operations, whereas the ordinary operations (which are unsafe for simultaneous access) are called data operations. This is probably why the term is data race; on many platforms, where there is a race condition involving only synchronization operations, such a race may be nondeterministic but otherwise safe; but a data race could lead to memory corruption or undefined behavior.
The precise definition of data race differs across formal concurrency models. This matters because concurrent behavior is often non-intuitive and so formal reasoning is sometimes applied.
The C++ standard, in draft N4296 (2014-11-19), defines data race as follows in section 1.10.23 (page 14):[6]
Two actions are potentially concurrent if
- they are performed by different threads, or
- they are unsequenced, and at least one is performed by a signal handler.
The execution of a program contains adata raceif it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other, except for the special case for signal handlers described below [omitted]. Any such data race results in undefined behavior.
The parts of this definition relating to signal handlers are idiosyncratic to C++ and are not typical of definitions ofdata race.
The paper Detecting Data Races on Weak Memory Systems[7] provides a different definition:
"two memory operationsconflictif they access the same location and at least one of them is a write operation ...
"Two memory operations, x and y, in a sequentially consistent execution form a race 〈x,y〉,iffx and y conflict, and they are not ordered by the hb1 relation of the execution. The race 〈x,y〉, is adata raceiff at least one of x or y is a data operation.
Here we have two memory operations accessing the same location, one of which is a write.
The hb1 relation is defined elsewhere in the paper, and is an example of a typical "happens-before" relation; intuitively, if we can prove that we are in a situation where one memory operation X is guaranteed to be executed to completion before another memory operation Y begins, then we say that "X happens-before Y". If neither "X happens-before Y" nor "Y happens-before X", then we say that X and Y are "not ordered by the hb1 relation". So, the clause "... and they are not ordered by the hb1 relation of the execution" can be intuitively translated as "... and X and Y are potentially concurrent".
The paper considers dangerous only those situations in which at least one of the memory operations is a "data operation"; in other parts of this paper, the paper also defines a class of "synchronization operations" which are safe for potentially simultaneous use, in contrast to "data operations".
TheJava Language Specification[8]provides a different definition:
Two accesses to (reads of or writes to) the same variable are said to be conflicting if at least one of the accesses is a write ... When a program contains two conflicting accesses (§17.4.1) that are not ordered by a happens-before relationship, it is said to contain a data race ... a data race cannot cause incorrect behavior such as returning the wrong length for an array.
A critical difference between the C++ approach and the Java approach is that in C++, a data race is undefined behavior, whereas in Java, a data race merely affects "inter-thread actions".[8]This means that in C++, an attempt to execute a program containing a data race could (while still adhering to the spec) crash or could exhibit insecure or bizarre behavior, whereas in Java, an attempt to execute a program containing a data race may produce undesired concurrency behavior but is otherwise (assuming that the implementation adheres to the spec) safe.
An important facet of data races is that in some contexts, a program that is free of data races is guaranteed to execute in asequentially consistentmanner, greatly easing reasoning about the concurrent behavior of the program. Formal memory models that provide such a guarantee are said to exhibit an "SC for DRF" (Sequential Consistency for Data Race Freedom) property. This approach has been said to have achieved recent consensus (presumably compared to approaches which guarantee sequential consistency in all cases, or approaches which do not guarantee it at all).[9]
For example, in Java, this guarantee is directly specified:[8]
A program is correctly synchronized if and only if all sequentially consistent executions are free of data races.
If a program is correctly synchronized, then all executions of the program will appear to be sequentially consistent (§17.4.3).
This is an extremely strong guarantee for programmers. Programmers do not need to reason about reorderings to determine that their code contains data races. Therefore they do not need to reason about reorderings when determining whether their code is correctly synchronized. Once the determination that the code is correctly synchronized is made, the programmer does not need to worry that reorderings will affect his or her code.
A program must be correctly synchronized to avoid the kinds of counterintuitive behaviors that can be observed when code is reordered. The use of correct synchronization does not ensure that the overall behavior of a program is correct. However, its use does allow a programmer to reason about the possible behaviors of a program in a simple way; the behavior of a correctly synchronized program is much less dependent on possible reorderings. Without correct synchronization, very strange, confusing and counterintuitive behaviors are possible.
By contrast, a draft C++ specification does not directly require an SC for DRF property, but merely observes that there exists a theorem providing it:
[Note:It can be shown that programs that correctly use mutexes and memory_order_seq_cst operations to prevent all data races and use no other synchronization operations behave as if the operations executed by their constituent threads were simply interleaved, with each value computation of an object being taken from the last side effect on that object in that interleaving. This is normally referred to as “sequential consistency”. However, this applies only to data-race-free programs, and data-race-free programs cannot observe most program transformations that do not change single-threaded program semantics. In fact, most single-threaded program transformations continue to be allowed, since any program that behaves differently as a result must perform an undefined operation.— end note
Note that the C++ draft specification admits the possibility of programs that are valid but use synchronization operations with a memory_order other than memory_order_seq_cst, in which case the result may be a program which is correct but for which no guarantee of sequential consistency is provided. In other words, in C++, some correct programs are not sequentially consistent. This approach is thought to give C++ programmers the freedom to choose faster program execution at the cost of giving up ease of reasoning about their program.[9]
There are various theorems, often provided in the form of memory models, that provide SC for DRF guarantees given various contexts. The premises of these theorems typically place constraints upon both the memory model (and therefore upon the implementation), and also upon the programmer; that is to say, typically it is the case that there are programs which do not meet the premises of the theorem and which could not be guaranteed to execute in a sequentially consistent manner.
The DRF1 memory model[10] provides SC for DRF and allows the optimizations of the WO (weak ordering), RCsc (release consistency with sequentially consistent special operations), VAX memory model, and data-race-free-0 memory models. The PLpc memory model[11] provides SC for DRF and allows the optimizations of the TSO (total store order), PSO, PC (processor consistency), and RCpc (release consistency with processor consistency special operations) models. DRFrlx[12] provides a sketch of an SC for DRF theorem in the presence of relaxed atomics.
Many software race conditions have associated computer security implications. A race condition allows an attacker with access to a shared resource to cause other actors that utilize that resource to malfunction, resulting in effects including denial of service[13] and privilege escalation.[14][15]
A specific kind of race condition involves checking for a predicate (e.g. for authentication), then acting on the predicate, while the state can change between the time-of-check and the time-of-use. When this kind of bug exists in security-sensitive code, a security vulnerability called a time-of-check-to-time-of-use (TOCTTOU) bug is created.
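A classic POSIX sketch of such a bug follows (the function name open_if_allowed is invented for the example; access() and open() are standard calls):

    // TOCTTOU sketch: the check and the use are separate system calls,
    // so the file at `path` can be swapped (e.g. for a symlink to a
    // privileged file) between them.
    #include <fcntl.h>
    #include <unistd.h>

    int open_if_allowed(const char *path) {
        if (access(path, R_OK) == 0) {    // time-of-check (real uid)
            // window: another process may replace `path` here
            return open(path, O_RDONLY);  // time-of-use: may open another file
        }
        return -1;
    }
    // Mitigation: drop the separate check and rely on open()'s own
    // permission check, or constrain resolution with openat()/O_NOFOLLOW.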
Race conditions are also intentionally used to create hardware random number generators and physically unclonable functions.[16][citation needed] PUFs can be created by designing circuit topologies with identical paths to a node and relying on manufacturing variations to randomly determine which paths will complete first. By measuring each manufactured circuit's specific set of race condition outcomes, a profile can be collected for each circuit and kept secret in order to later verify a circuit's identity.
Two or more programs may collide in their attempts to modify or access a file system, which can result in data corruption or privilege escalation.[14] File locking provides a commonly used solution. A more cumbersome remedy involves organizing the system in such a way that one unique process (running a daemon or the like) has exclusive access to the file, and all other processes that need to access the data in that file do so only via interprocess communication with that one process. This requires synchronization at the process level.
A different form of race condition exists in file systems where unrelated programs may affect each other by suddenly using up available resources such as disk space, memory space, or processor cycles. Software not carefully designed to anticipate and handle this race situation may then become unpredictable. Such a risk may be overlooked for a long time in a system that seems very reliable. But eventually enough data may accumulate or enough other software may be added to critically destabilize many parts of a system. An example of this occurred with the near loss of the Mars rover "Spirit" not long after landing, when deleted file entries caused the file system library to consume all available memory space.[17] A solution is for software to request and reserve all the resources it will need before beginning a task; if this request fails then the task is postponed, avoiding the many points where failure could have occurred. Alternatively, each of those points can be equipped with error handling, or the success of the entire task can be verified afterwards, before continuing. A more common approach is to simply verify that enough system resources are available before starting a task; however, this may not be adequate because in complex systems the actions of other running programs can be unpredictable.
In networking, consider a distributed chat network like IRC, where a user who starts a channel automatically acquires channel-operator privileges. If two users on different servers, on different ends of the same network, try to start the same-named channel at the same time, each user's respective server will grant channel-operator privileges to each user, since neither server will yet have received the other server's signal that it has allocated that channel. (This problem has been largely solved by various IRC server implementations.)
In this case of a race condition, the concept of the "shared resource" covers the state of the network (what channels exist, as well as what users started them and therefore have what privileges), which each server can freely change as long as it signals the other servers on the network about the changes so that they can update their conception of the state of the network. However, the latency across the network makes possible the kind of race condition described. In this case, heading off race conditions by imposing a form of control over access to the shared resource—say, appointing one server to control who holds what privileges—would mean turning the distributed network into a centralized one (at least for that one part of the network operation).
Race conditions can also exist when a computer program is written with non-blocking sockets, in which case the performance of the program can be dependent on the speed of the network link.
Software flaws in life-critical systems can be disastrous. Race conditions were among the flaws in the Therac-25 radiation therapy machine, which led to the death of at least three patients and injuries to several more.[18]
Another example is the energy management system provided by GE Energy and used by Ohio-based FirstEnergy Corp (among other power facilities). A race condition existed in the alarm subsystem; when three sagging power lines were tripped simultaneously, the condition prevented alerts from being raised to the monitoring technicians, delaying their awareness of the problem. This software flaw eventually led to the North American Blackout of 2003.[19] GE Energy later developed a software patch to correct the previously undiscovered error.
Many software tools exist to help detect race conditions in software. They can be largely categorized into two groups: static analysis tools and dynamic analysis tools.
Thread Safety Analysis is an annotation-based intra-procedural static analysis tool, originally implemented as a branch of gcc and now reimplemented in Clang, supporting PThreads.[20][non-primary source needed]
Dynamic analysis tools, which observe a program as it executes, include detectors such as ThreadSanitizer and Helgrind.
There are several benchmarks designed to evaluate the effectiveness of data race detection tools.
Race conditions are a common concern in human-computer interaction design and software usability. An intuitively designed human-machine interface gives the user feedback on their actions that aligns with their expectations, but system-generated actions can interrupt a user's current action or workflow in unexpected ways, such as inadvertently answering or rejecting an incoming call on a smartphone while performing a different task.[citation needed]
In UK railway signalling, a race condition would arise in the carrying out of Rule 55. According to this rule, if a train was stopped on a running line by a signal, the locomotive fireman would walk to the signal box in order to remind the signalman that the train was present. In at least one case, at Winwick in 1934, an accident occurred because the signalman accepted another train before the fireman arrived. Modern signalling practice removes the race condition by making it possible for the driver to instantaneously contact the signal box by radio.
Race conditions are not confined to digital systems. Neuroscience has demonstrated that race conditions can occur in mammalian brains as well.[25][26]
|
https://en.wikipedia.org/wiki/Race_condition
|
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.[1] In many cases, a thread is a component of a process.
The multiple threads of a given process may be executed concurrently (via multithreading capabilities), sharing resources such as memory, while different processes do not share these resources. In particular, the threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time.
The implementation of threads and processes differs between operating systems.[2][page needed]
Threads made an early appearance under the name of "tasks" in IBM's batch processing operating system, OS/360, in 1967. It provided users with three available configurations of the OS/360 control system, of which Multiprogramming with a Variable Number of Tasks (MVT) was one. Saltzer (1966) credits Victor A. Vyssotsky with the term "thread".[3]
The use of threads in software applications became more common in the early 2000s as CPUs began to utilize multiple cores. Applications wishing to exploit multiple cores for performance had to employ concurrency.[4]
Scheduling can be done at the kernel level or user level, and multitasking can be done preemptively or cooperatively. This yields a variety of related concepts.
At the kernel level, a process contains one or more kernel threads, which share the process's resources, such as memory and file handles – a process is a unit of resources, while a thread is a unit of scheduling and execution. Kernel scheduling is typically uniformly done preemptively or, less commonly, cooperatively. At the user level a process such as a runtime system can itself schedule multiple threads of execution. If these do not share data, as in Erlang, they are usually analogously called processes,[5] while if they share data they are usually called (user) threads, particularly if preemptively scheduled. Cooperatively scheduled user threads are known as fibers; different processes may schedule user threads differently. User threads may be executed by kernel threads in various ways (one-to-one, many-to-one, many-to-many). The term "light-weight process" variously refers to user threads or to kernel mechanisms for scheduling user threads onto kernel threads.
A process is a "heavyweight" unit of kernel scheduling, as creating, destroying, and switching processes is relatively expensive. Processes own resources allocated by the operating system. Resources include memory (for both code and data), file handles, sockets, device handles, windows, and a process control block. Processes are isolated by process isolation, and do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way – see interprocess communication. Creating or destroying a process is relatively expensive, as resources must be acquired or released. Processes are typically preemptively multitasked, and process switching is relatively expensive, beyond the basic cost of context switching, due to issues such as cache flushing (in particular, process switching changes virtual memory addressing, causing invalidation and thus flushing of an untagged translation lookaside buffer (TLB), notably on x86).
A kernel thread is a "lightweight" unit of kernel scheduling. At least one kernel thread exists within each process. If multiple kernel threads exist within a process, then they share the same memory and file resources. Kernel threads are preemptively multitasked if the operating system's process scheduler is preemptive. Kernel threads do not own resources except for a stack, a copy of the registers including the program counter, and thread-local storage (if any), and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly (leaving the TLB valid). The kernel can assign one or more software threads to each core in a CPU (it being able to assign itself multiple software threads depending on its support for multithreading), and can swap out threads that get blocked. However, kernel threads take much longer than user threads to be swapped.
Threads are sometimes implemented in userspace libraries, thus called user threads. The kernel is unaware of them, so they are managed and scheduled in userspace. Some implementations base their user threads on top of several kernel threads, to benefit from multi-processor machines (the M:N model). User threads as implemented by virtual machines are also called green threads.
As user thread implementations are typically entirely in userspace, context switching between user threads within the same process is extremely efficient because it does not require any interaction with the kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since scheduling occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the program's workload.
However, the use of blocking system calls in user threads (as opposed to kernel threads) can be problematic. If a user thread or a fiber performs a system call that blocks, the other user threads and fibers in the process are unable to run until the system call returns. A typical example of this problem is when performing I/O: most programs are written to perform I/O synchronously. When an I/O operation is initiated, a system call is made, and does not return until the I/O operation has been completed. In the intervening period, the entire process is "blocked" by the kernel and cannot run, which starves other user threads and fibers in the same process from executing.
A common solution to this problem (used, in particular, by many green threads implementations) is providing an I/O API that implements an interface that blocks the calling thread, rather than the entire process, by using non-blocking I/O internally, and scheduling another user thread or fiber while the I/O operation is in progress. Similar solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other blocking system calls (in particular, using non-blocking I/O, including lambda continuations and/or async/await primitives[6]).
Fibers are an even lighter unit of scheduling which are cooperatively scheduled: a running fiber must explicitly "yield" to allow another fiber to run, which makes their implementation much easier than that of kernel or user threads. A fiber can be scheduled to run in any thread in the same process. This permits applications to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Some research implementations of the OpenMP parallel programming model implement their tasks through fibers.[7][8] Closely related to fibers are coroutines, with the distinction being that coroutines are a language-level construct, while fibers are a system-level construct.
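The yield-to-continue discipline can be sketched with Python generators, where each generator plays the role of a fiber and a hypothetical round-robin loop plays the role of the userspace scheduler (an illustration of cooperative scheduling, not a production fiber library):

    # Each "fiber" runs until it explicitly yields; a round-robin loop
    # decides who runs next, entirely in userspace.
    def fiber(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # explicit yield point; control returns to the scheduler

    ready = [fiber("A", 2), fiber("B", 3)]
    while ready:
        f = ready.pop(0)
        try:
            next(f)          # resume the fiber until its next yield
            ready.append(f)  # still alive: requeue it
        except StopIteration:
            pass             # fiber finished; drop it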
Threads differ from traditional multitasking operating-system processes in several ways:
Systems such as Windows NT and OS/2 are said to have cheap threads and expensive processes; in other operating systems there is not so great a difference except in the cost of an address-space switch, which on some architectures (notably x86) results in a translation lookaside buffer (TLB) flush.
Advantages and disadvantages of threads vs processes include:
Operating systems schedule threads either preemptively or cooperatively. Multi-user operating systems generally favor preemptive multithreading for its finer-grained control over execution time via context switching. However, preemptive scheduling may context-switch threads at moments unanticipated by programmers, thus causing lock convoy, priority inversion, or other side-effects. In contrast, cooperative multithreading relies on threads to relinquish control of execution, thus ensuring that threads run to completion. This can cause problems if a cooperatively multitasked thread blocks by waiting on a resource or if it starves other threads by not yielding control of execution during intensive computation.
Until the early 2000s, most desktop computers had only one single-core CPU, with no support for hardware threads, although threads were still used on such computers because switching between threads was generally still quicker than full-process context switches. In 2002, Intel added support for simultaneous multithreading to the Pentium 4 processor, under the name hyper-threading; in 2005, they introduced the dual-core Pentium D processor and AMD introduced the dual-core Athlon 64 X2 processor.
Systems with a single processor generally implement multithreading by time slicing: the central processing unit (CPU) switches between different software threads. This context switching usually occurs frequently enough that users perceive the threads or tasks as running in parallel (for popular server/desktop operating systems, the maximum time slice of a thread, when other threads are waiting, is often limited to 100–200 ms). On a multiprocessor or multi-core system, multiple threads can execute in parallel, with every processor or core executing a separate thread simultaneously; on a processor or core with hardware threads, separate software threads can also be executed concurrently by separate hardware threads.
Threads created by the user in a 1:1 correspondence with schedulable entities in the kernel[9] are the simplest possible threading implementation. OS/2 and Win32 used this approach from the start, while on Linux the GNU C Library implements this approach (via the NPTL or older LinuxThreads). This approach is also used by Solaris, NetBSD, FreeBSD, macOS, and iOS.
An M:1 model implies that all application-level threads map to one kernel-level scheduled entity;[9] the kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. One of the major drawbacks, however, is that it cannot benefit from the hardware acceleration on multithreaded processors or multi-processor computers: there is never more than one thread being scheduled at the same time.[9] For example: if one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be used. GNU Portable Threads uses user-level threading, as does State Threads.
M:N maps some number M of application threads onto some number N of kernel entities,[9] or "virtual processors." This is a compromise between kernel-level ("1:1") and user-level ("N:1") threading. In general, "M:N" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required[clarification needed]. In the M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.
SunOS 4.x implemented light-weight processes or LWPs. NetBSD 2.x+ and DragonFly BSD implement LWPs as kernel threads (1:1 model). SunOS 5.2 through SunOS 5.8 as well as NetBSD 2 to NetBSD 4 implemented a two-level model, multiplexing one or more user-level threads on each kernel thread (M:N model). SunOS 5.9 and later, as well as NetBSD 5, eliminated user threads support, returning to a 1:1 model.[10] FreeBSD 5 implemented the M:N model. FreeBSD 6 supported both 1:1 and M:N; users could choose which one should be used with a given program using /etc/libmap.conf. Starting with FreeBSD 7, the 1:1 model became the default. FreeBSD 8 no longer supports the M:N model.
In computer programming, single-threading is the processing of one instruction at a time.[11] In the formal analysis of the variables' semantics and process state, the term single threading can be used differently to mean "backtracking within a single thread", which is common in the functional programming community.[12]
Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multithreading can also be applied to one process to enable parallel execution on a multiprocessing system.
Multithreading libraries tend to provide a function call to create a new thread, which takes a function as a parameter. A concurrent thread is then created which starts running the passed function and ends when the function returns. The thread libraries also offer data synchronization functions.
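For instance, with Python's standard threading library (one plausible instantiation of such an API), creating a thread amounts to passing a function to a constructor:

    import threading

    def worker(name):
        # The new thread starts running this function and ends when it returns.
        print(f"hello from {name}")

    t = threading.Thread(target=worker, args=("worker-1",))
    t.start()   # create and run the concurrent thread
    t.join()    # wait for it to finish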
Threads in the same process share the same address space. This allows concurrently running code to couple tightly and conveniently exchange data without the overhead or complexity of IPC. When shared between threads, however, even simple data structures become prone to race conditions if they require more than one CPU instruction to update: two threads may end up attempting to update the data structure at the same time and find it unexpectedly changing underfoot. Bugs caused by race conditions can be very difficult to reproduce and isolate.
To prevent this, threading application programming interfaces (APIs) offer synchronization primitives such as mutexes to lock data structures against concurrent access. On uniprocessor systems, a thread running into a locked mutex must sleep and hence trigger a context switch. On multi-processor systems, the thread may instead poll the mutex in a spinlock. Both of these may sap performance and force processors in symmetric multiprocessing (SMP) systems to contend for the memory bus, especially if the granularity of the locking is too fine.
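A minimal sketch in Python shows the idea: the lock serializes the read-modify-write on a shared counter, which is exactly the multi-instruction update that is otherwise race-prone:

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            with lock:        # only one thread may run this block at a time
                counter += 1  # a read-modify-write, not atomic on its own

    threads = [threading.Thread(target=increment, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 400000; without the lock the result may fall short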
Other synchronization APIs include condition variables, critical sections, semaphores, and monitors.
A popular programming pattern involving threads is that of thread pools, where a set number of threads are created at startup and then wait for tasks to be assigned. When a new task arrives, a thread wakes up, completes the task, and goes back to waiting. This avoids the relatively expensive thread creation and destruction functions for every task performed and takes thread management out of the application developer's hands, leaving it to a library or the operating system that is better suited to optimize thread management.
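Python's concurrent.futures provides one such pool; this sketch submits ten small tasks to four long-lived worker threads:

    from concurrent.futures import ThreadPoolExecutor

    def task(x):
        return x * x

    # Four threads are created once and reused for all ten tasks.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(task, range(10)))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]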
Multithreaded applications have the following advantages vs single-threaded ones:
Multithreaded applications have the following drawbacks:
Many programming languages support threading in some capacity.
|
https://en.wikipedia.org/wiki/Thread_(computer_science)
|
In computing, dataflow is a broad concept, which has various meanings depending on the application and context. In the context of software architecture, data flow relates to stream processing or reactive programming.
Dataflow computing is a software paradigm based on the idea of representing computations as a directed graph, where nodes are computations and data flows along the edges.[1] Dataflow can also be called stream processing or reactive programming.[2]
There have been multiple data-flow/stream processing languages of various forms (see Stream processing). Data-flow hardware (see Dataflow architecture) is an alternative to the classic von Neumann architecture. The most obvious example of data-flow programming is the subset known as reactive programming with spreadsheets. As a user enters new values, they are instantly transmitted to the next logical "actor" or formula for calculation.
Distributed data flows have also been proposed as a programming abstraction that captures the dynamics of distributed multi-protocols. The data-centric perspective characteristic of data flow programming promotes high-level functional specifications and simplifies formal reasoning about system components.
Hardware architectures for dataflow were a major topic in computer architecture research in the 1970s and early 1980s. Jack Dennis of the Massachusetts Institute of Technology (MIT) pioneered the field of static dataflow architectures. Designs that use conventional memory addresses as data dependency tags are called static dataflow machines. These machines did not allow multiple instances of the same routines to be executed simultaneously because the simple tags could not differentiate between them. Designs that use content-addressable memory are called dynamic dataflow machines by Arvind. They use tags in memory to facilitate parallelism.
Data flows through the components of a computer: it enters via input devices and can leave through output devices such as printers.
A dataflow network is a network of concurrently executing processes or automata that can communicate by sending data over channels (see message passing).
In Kahn process networks, named after Gilles Kahn, the processes are determinate. This implies that each determinate process computes a continuous function from input streams to output streams, and that a network of determinate processes is itself determinate, thus computing a continuous function. This implies that the behavior of such networks can be described by a set of recursive equations, which can be solved using fixed point theory. The movement and transformation of the data is represented by a series of shapes and lines.
Dataflow can also refer to:
|
https://en.wikipedia.org/wiki/Dataflow
|
In computer architecture, Amdahl's law (or Amdahl's argument[1]) is a formula that shows how much faster a task can be completed when more resources are added to the system.
The law can be stated as:
"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used".[2]
It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967.
Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors.
In the context of Amdahl's law, speedup can be defined as:[3]
$$\text{Speedup} = \frac{\text{Performance for the entire task when enhancements are applied}}{\text{Performance for the same task without those enhancements}}$$
or
$$\text{Speedup} = \frac{\text{Execution time for the entire task without enhancements}}{\text{Execution time for the same task when enhancements are applied}}$$
Amdahl's law can be formulated in the following way:[4]

$$\text{Speedup}_{\text{overall}} = \frac{1}{(1 - \text{Time}_{\text{optimized}}) + \dfrac{\text{Time}_{\text{optimized}}}{\text{Speedup}_{\text{optimized}}}}$$

where Time_optimized is the fraction of the original execution time spent in the part that is improved, and Speedup_optimized is the factor by which that part is sped up.
The overall speedup is frequently much lower than one might expect. For instance, if a programmer enhances a part of the code that represents 10% of the total execution time (i.e. Time_optimized of 0.10) and achieves a Speedup_optimized of 10,000, then Speedup_overall becomes 1.11, which means only an 11% improvement in the total speedup of the program. So, despite a massive improvement in one section, the overall benefit is quite small. In another example, if the programmer optimizes a section that accounts for 99% of the execution time (i.e. Time_optimized of 0.99) with a speedup factor of 100 (i.e. Speedup_optimized of 100), the Speedup_overall only reaches 50. This indicates that half of the potential performance gain (Speedup_overall would reach 100 if 100% of the execution time were covered) is lost due to the remaining 1% of execution time that was not improved.[4]
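The arithmetic is easy to check; a small Python helper (an illustrative function, not from the article) reproduces both figures:

    def amdahl_overall(time_optimized, speedup_optimized):
        # Overall speedup when a fraction `time_optimized` of the runtime
        # is accelerated by a factor `speedup_optimized`.
        return 1.0 / ((1.0 - time_optimized) + time_optimized / speedup_optimized)

    print(amdahl_overall(0.10, 10_000))  # ~1.11
    print(amdahl_overall(0.99, 100))     # ~50.25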
The following are implications of Amdahl's law:[5][6]
The following are limitations of Amdahl's law:[7][3][8]
Amdahl's law applies only to the cases where the problem size is fixed. In practice, as more computing resources become available, they tend to get used on larger problems (larger datasets), and the time spent in the parallelizable part often grows much faster than the inherently serial work. In this case, Gustafson's law gives a less pessimistic and more realistic assessment of the parallel performance.[10]
The Universal Scalability Law (USL), developed by Neil J. Gunther, extends Amdahl's law and accounts for the additional overhead due to inter-process communication. USL quantifies scalability based on parameters such as contention and coherency.[11]
A task executed by a system whose resources are improved compared to an initial similar system can be split up into two parts: a part that does not benefit from the improvement of the resources, and a part that does.
An example is a computer program that processes files. A part of that program may scan the directory of the disk and create a list of files internally in memory. After that, another part of the program passes each file to a separate thread for processing. The part that scans the directory and creates the file list cannot be sped up on a parallel computer, but the part that processes the files can.
The execution time of the whole task before the improvement of the resources of the system is denoted as T. It includes the execution time of the part that would not benefit from the improvement of the resources and the execution time of the one that would benefit from it. The fraction of the execution time of the task that would benefit from the improvement of the resources is denoted by p. The one concerning the part that would not benefit from it is therefore 1 − p. Then:

$$T = (1 - p)T + pT.$$
It is the execution of the part that benefits from the improvement of the resources that is accelerated by the factor s after the improvement of the resources. Consequently, the execution time of the part that does not benefit from it remains the same, while the part that benefits from it becomes:

$$\frac{p}{s}T.$$
The theoretical execution time T(s) of the whole task after the improvement of the resources is then:

$$T(s) = (1 - p)T + \frac{p}{s}T.$$
Amdahl's law gives the theoretical speedup in latency of the execution of the whole task at fixed workload W, which yields

$$S_{\text{latency}}(s) = \frac{T}{T(s)} = \frac{1}{(1 - p) + \dfrac{p}{s}}.$$
If 30% of the execution time may be the subject of a speedup, p will be 0.3; if the improvement makes the affected part twice as fast, s will be 2. Amdahl's law states that the overall speedup of applying the improvement will be:

$$S_{\text{latency}}(s) = \frac{1}{(1 - 0.3) + \dfrac{0.3}{2}} \approx 1.18.$$
For example, assume that we are given a serial task which is split into four consecutive parts, whose percentages of execution time are p1 = 0.11, p2 = 0.18, p3 = 0.23, and p4 = 0.48 respectively. Then we are told that the 1st part is not sped up, so s1 = 1, while the 2nd part is sped up 5 times, so s2 = 5, the 3rd part is sped up 20 times, so s3 = 20, and the 4th part is sped up 1.6 times, so s4 = 1.6. By using Amdahl's law, the overall speedup is

$$S_{\text{latency}} = \frac{1}{\dfrac{p_1}{s_1} + \dfrac{p_2}{s_2} + \dfrac{p_3}{s_3} + \dfrac{p_4}{s_4}} = \frac{1}{\dfrac{0.11}{1} + \dfrac{0.18}{5} + \dfrac{0.23}{20} + \dfrac{0.48}{1.6}} \approx 2.19.$$
Notice how the 5 times and 20 times speedup on the 2nd and 3rd parts respectively don't have much effect on the overall speedup when the 4th part (48% of the execution time) is accelerated by only 1.6 times.
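The same per-part calculation generalizes to any split; a short Python sketch (the function name is illustrative, not from the article) makes this explicit:

    def amdahl_parts(parts):
        # `parts` is a list of (fraction, speedup) pairs; fractions sum to 1.
        return 1.0 / sum(p / s for p, s in parts)

    print(amdahl_parts([(0.11, 1), (0.18, 5), (0.23, 20), (0.48, 1.6)]))  # ~2.19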
For example, with a serial program in two parts A and B for which TA = 3 s and TB = 1 s: if part B is made to run 5 times faster, the total time becomes 3 s + 0.2 s = 3.2 s, a speedup of 4/3.2 = 1.25; if part A is made to run 2 times faster, the total time becomes 1.5 s + 1 s = 2.5 s, a speedup of 4/2.5 = 1.6.
Therefore, making part A run 2 times faster is better than making part B run 5 times faster. The percentage improvement in speed can be calculated as

$$\text{percentage improvement} = 100\left(1 - \frac{1}{\text{speedup}}\right).$$

Improving part A by a factor of 2 thus makes the program 37.5% faster, while improving part B by a factor of 5 makes it only 20% faster.
If the non-parallelizable part is optimized by a factor of O, then

$$T(O, s) = (1 - p)\frac{T}{O} + \frac{p}{s}T.$$
It follows from Amdahl's law that the speedup due to parallelism is given by

$$S_{\text{latency}}(O, s) = \frac{T(O, 1)}{T(O, s)} = \frac{\dfrac{1 - p}{O} + p}{\dfrac{1 - p}{O} + \dfrac{p}{s}}.$$
When s = 1, we have S_latency(O, s) = 1, meaning that the speedup is measured with respect to the execution time after the non-parallelizable part is optimized.
When s = ∞,

$$S_{\text{latency}}(O, \infty) = \frac{\dfrac{1 - p}{O} + p}{\dfrac{1 - p}{O}} = 1 + \frac{p}{1 - p}O.$$
If 1 − p = 0.4, O = 2 and s = 5, then:

$$S_{\text{latency}}(O, s) = \frac{\dfrac{0.4}{2} + 0.6}{\dfrac{0.4}{2} + \dfrac{0.6}{5}} = 2.5.$$
Next, we consider the case wherein the non-parallelizable part is reduced by a factor of O′, and the parallelizable part is correspondingly increased. Then

$$T'(O', s) = \frac{1 - p}{O'}T + \left(1 - \frac{1 - p}{O'}\right)\frac{T}{s}.$$
It follows from Amdahl's law that the speedup due to parallelism is given by

$$S'_{\text{latency}}(O', s) = \frac{T'(O', 1)}{T'(O', s)} = \frac{1}{\dfrac{1 - p}{O'} + \left(1 - \dfrac{1 - p}{O'}\right)\dfrac{1}{s}}.$$
Amdahl's law is often conflated with the law of diminishing returns, whereas only a special case of applying Amdahl's law demonstrates the law of diminishing returns. If one picks optimally (in terms of the achieved speedup) what is to be improved, then one will see monotonically decreasing improvements as one improves. If, however, one picks non-optimally, after improving a sub-optimal component and moving on to improve a more optimal component, one can see an increase in the return. Note that it is often rational to improve a system in an order that is "non-optimal" in this sense, given that some improvements are more difficult or require larger development time than others.
Amdahl's law does represent the law of diminishing returns if one is considering what sort of return one gets by adding more processors to a machine, if one is running a fixed-size computation that will use all available processors to their capacity. Each new processor added to the system will add less usable power than the previous one. Each time one doubles the number of processors the speedup ratio will diminish, as the total throughput heads toward the limit of 1/(1 − p).
This analysis neglects other potential bottlenecks such as memory bandwidth and I/O bandwidth. If these resources do not scale with the number of processors, then merely adding processors provides even lower returns.
An implication of Amdahl's law is that to speed up real applications which have both serial and parallel portions, heterogeneous computing techniques are required.[12] There are novel speedup and energy consumption models based on a more general representation of heterogeneity, referred to as the normal form heterogeneity, that support a wide range of heterogeneous many-core architectures. These modelling methods aim to predict system power efficiency and performance ranges, and facilitate research and development at the hardware and system software levels.[13][14]
|
https://en.wikipedia.org/wiki/Amdahl%27s_law
|
A superscalar processor (or multiple-issue processor[1]) is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor.[2] In contrast to a scalar processor, which can execute at most one single instruction per clock cycle, a superscalar processor can execute or start executing more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor. It therefore allows more throughput (the number of instructions that can be executed in a unit of time, which can even be less than one per cycle) than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor (or a core if the processor is a multi-core processor), but an execution resource within a single CPU such as an arithmetic logic unit.
While a superscalar CPU is typically also pipelined, superscalar and pipelining execution are considered different performance enhancement techniques. The former (superscalar) executes multiple instructions in parallel by using multiple execution units, whereas the latter (pipelining) executes multiple instructions in the same execution unit in parallel by dividing the execution unit into different phases. In the "Simple superscalar pipeline" figure, fetching two instructions at the same time is superscaling, and fetching the next two before the first pair has been written back is pipelining.
The superscalar technique is traditionally associated with several identifying characteristics (within a given CPU):
Seymour Cray's CDC 6600 from 1964, while not capable of issuing multiple instructions per cycle, is often cited as an early influence on modern superscalar processors for its ability to execute instructions simultaneously through multiple functional units. The 1967 IBM System/360 Model 91 was another early influence; it introduced out-of-order execution, pioneering the use of Tomasulo's algorithm.[3] The Intel i960CA (1989),[4] the AMD 29000-series 29050 (1990), and the Motorola MC88110 (1991)[5] were the first commercial single-chip superscalar microprocessors. RISC microprocessors like these were the first to have superscalar execution, because RISC architectures free transistors and die area which can be used to include multiple execution units, and the traditional uniformity of the instruction set favors superscalar dispatch (this was why RISC designs were faster than CISC designs through the 1980s and into the 1990s, and it is far more complicated to do multiple dispatch when instructions have variable bit length).
Except for CPUs used in low-power applications, embedded systems, and battery-powered devices, essentially all general-purpose CPUs developed since about 1998 are superscalar.
The P5 Pentium was the first superscalar x86 processor; the Nx586, P6 Pentium Pro and AMD K5 were among the first designs which decode x86 instructions asynchronously into dynamic microcode-like micro-op sequences prior to actual execution on a superscalar microarchitecture. This opened the way for dynamic scheduling of buffered partial instructions and enabled more parallelism to be extracted compared to the more rigid methods used in the simpler P5 Pentium; it also simplified speculative execution and allowed higher clock frequencies compared to designs such as the advanced Cyrix 6x86.
The simplest processors are scalar processors. Each instruction executed by a scalar processor typically manipulates one or two data items at a time. By contrast, each instruction executed by a vector processor operates simultaneously on many data items. An analogy is the difference between scalar and vector arithmetic. A superscalar processor is a mixture of the two. Each instruction processes one data item, but there are multiple execution units within each CPU, thus multiple instructions can process separate data items concurrently.
Superscalar CPU design emphasizes improving the instruction dispatcher accuracy and allowing it to keep the multiple execution units in use at all times. This has become increasingly important as the number of units has increased. While early superscalar CPUs would have two ALUs and a single FPU, a later design such as the PowerPC 970 includes four ALUs, two FPUs, and two SIMD units. If the dispatcher is ineffective at keeping all of these units fed with instructions, the performance of the system will be no better than that of a simpler, cheaper design.
A superscalar processor usually sustains an execution rate in excess of one instruction per machine cycle. But merely processing multiple instructions concurrently does not make an architecture superscalar, since pipelined, multiprocessor or multi-core architectures also achieve that, but with different methods.
In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching each to one of the several execution units contained inside a single CPU. Therefore, a superscalar processor can be envisioned as having multiple parallel pipelines, each of which is processing instructions simultaneously from a single instruction thread.
Most modern superscalar CPUs also have logic to reorder the instructions to try to avoid pipeline stalls and increase parallel execution.
Available performance improvement from superscalar techniques is limited by three key areas:
Existing binary executable programs have varying degrees of intrinsic parallelism. In some cases instructions are not dependent on each other and can be executed simultaneously. In other cases they are inter-dependent: one instruction impacts either resources or results of the other. The instructions a = b + c; d = e + f can be run in parallel because none of the results depend on other calculations. However, the instructions a = b + c; b = e + f might not be runnable in parallel, depending on the order in which the instructions complete while they move through the units.
Although the instruction stream may contain no inter-instruction dependencies, a superscalar CPU must nonetheless check for that possibility, since there is no assurance otherwise and failure to detect a dependency would produce incorrect results.
No matter how advanced the semiconductor process or how fast the switching speed, this places a practical limit on how many instructions can be simultaneously dispatched. While process advances will allow ever greater numbers of execution units (e.g. ALUs), the burden of checking instruction dependencies grows rapidly, as does the complexity of register renaming circuitry to mitigate some dependencies. Collectively the power consumption, complexity and gate delay costs limit the achievable superscalar speedup.
However, even given infinitely fast dependency checking logic on an otherwise conventional superscalar CPU, if the instruction stream itself has many dependencies, this would also limit the possible speedup. Thus the degree of intrinsic parallelism in the code stream forms a second limitation.
Collectively, these limits drive investigation into alternative architectural changes such as very long instruction word (VLIW), explicitly parallel instruction computing (EPIC), simultaneous multithreading (SMT), and multi-core computing.
With VLIW, the burdensome task of dependency checking by hardware logic at run time is removed and delegated to the compiler. Explicitly parallel instruction computing (EPIC) is like VLIW with extra cache prefetching instructions.
Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar processors. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures. Because the threads are independent, an instruction of one thread can be executed out of order and/or in parallel with an instruction of a different one. Also, one independent thread will not produce a pipeline bubble in the code stream of a different one, for example, due to a branch.
Superscalar processors differ from multi-core processors in that the several execution units are not entire processors. A single processor is composed of finer-grained execution units such as the ALU, integer multiplier, integer shifter, FPU, etc. There may be multiple versions of each execution unit to enable the execution of many instructions in parallel. This differs from a multi-core processor that concurrently processes instructions from multiple threads, one thread per processing unit (called a "core"). It also differs from a pipelined processor, where the multiple instructions can concurrently be in various stages of execution, assembly-line fashion.
The various alternative techniques are not mutually exclusive—they can be (and frequently are) combined in a single processor. Thus a multicore CPU is possible where each core is an independent processor containing multiple parallel pipelines, each pipeline being superscalar. Some processors also include vector capability.
|
https://en.wikipedia.org/wiki/Superscalar
|
In information technology and computer science, especially in the fields of computer programming, operating systems, multiprocessors, and databases, concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible.
Computer systems, both software and hardware, consist of modules, or components. Each component is designed to operate correctly, i.e., to obey or to meet certain consistency rules. When components that operate concurrently interact by messaging or by sharing accessed data (in memory or storage), a certain component's consistency may be violated by another component. The general area of concurrency control provides rules, methods, design methodologies, and theories to maintain the consistency of components operating concurrently while interacting, and thus the consistency and correctness of the whole system. Introducing concurrency control into a system means applying operation constraints which typically result in some performance reduction. Operation consistency and correctness should be achieved with the best possible efficiency, without reducing performance below reasonable levels. Concurrency control can require significant additional complexity and overhead in a concurrent algorithm compared to the simpler sequential algorithm.
For example, a failure in concurrency control can result in data corruption from torn read or write operations.
Concurrency control in database management systems (DBMS; e.g., Bernstein et al. 1987, Weikum and Vossen 2001), other transactional objects, and related distributed applications (e.g., grid computing and cloud computing) ensures that database transactions are performed concurrently without violating the data integrity of the respective databases. Thus concurrency control is an essential element for correctness in any system where two or more database transactions, executed with time overlap, can access the same data, e.g., virtually any general-purpose database system. Consequently, a vast body of related research has been accumulated since database systems emerged in the early 1970s. A well established concurrency control theory for database systems is outlined in the references mentioned above: serializability theory, which allows one to effectively design and analyze concurrency control methods and mechanisms. An alternative theory for concurrency control of atomic transactions over abstract data types is presented in (Lynch et al. 1993), and is not utilized below. This theory is more refined and complex, with a wider scope, and has been less utilized in the database literature than the classical theory above. Each theory has its pros and cons, emphasis and insight. To some extent they are complementary, and their merging may be useful.
To ensure correctness, a DBMS usually guarantees that only serializable transaction schedules are generated, unless serializability is intentionally relaxed to increase performance, but only in cases where application correctness is not harmed. For maintaining correctness in cases of failed (aborted) transactions (which can always happen for many reasons), schedules also need to have the recoverability (from abort) property. A DBMS also guarantees that no effect of committed transactions is lost, and no effect of aborted (rolled back) transactions remains in the related database. Overall transaction characterization is usually summarized by the ACID rules below. As databases have become distributed, or needed to cooperate in distributed environments (e.g., federated databases in the early 1990s, and cloud computing currently), the effective distribution of concurrency control mechanisms has received special attention.
The concept of a database transaction (or atomic transaction) has evolved in order to enable both a well understood database system behavior in a faulty environment where crashes can happen any time, and recovery from a crash to a well understood database state. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring a lock, etc.), an abstraction supported in databases and also other systems. Each transaction has well defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands). Every database transaction obeys the following rules (by support in the database system; i.e., a database system is designed to guarantee them for the transactions it runs):
The concept of atomic transaction has been extended over the years to what have become business transactions, which actually implement types of workflow and are not atomic. However, such enhanced transactions also typically utilize atomic transactions as components.
If transactions are executed serially, i.e., sequentially with no overlap in time, no transaction concurrency exists. However, if concurrent transactions with interleaving operations are allowed in an uncontrolled manner, some unexpected, undesirable results may occur, such as:
Most high-performance transactional systems need to run transactions concurrently to meet their performance requirements. Thus, without concurrency control such systems can neither provide correct results nor maintain their databases consistently.
The main categories of concurrency control mechanisms are:
Different categories provide different performance, i.e., different average transaction completion rates (throughput), depending on the mix of transaction types, the level of parallelism, and other factors. If selection and knowledge about trade-offs are available, then the category and method should be chosen to provide the highest performance.
The mutual blocking between two or more transactions (where each one blocks the other) results in a deadlock, where the transactions involved are stalled and cannot reach completion. Most non-optimistic mechanisms (with blocking) are prone to deadlocks, which are resolved by an intentional abort of a stalled transaction (which releases the other transactions in that deadlock), and its immediate restart and re-execution. The likelihood of a deadlock is typically low.
Blocking, deadlocks, and aborts all result in performance reduction, and hence the trade-offs between the categories.
Many methods for concurrency control exist. Most of them can be implemented within either main category above. The major methods,[1] each of which has many variants, and which in some cases may overlap or be combined, are:
Other major concurrency control types that are utilized in conjunction with the methods above include:
The most common mechanism type in database systems since their early days in the 1970s has been strong strict two-phase locking (SS2PL; also called rigorous scheduling or rigorous 2PL), which is a special case (variant) of two-phase locking (2PL). It is pessimistic. In spite of its long name (for historical reasons) the idea of the SS2PL mechanism is simple: "Release all locks applied by a transaction only after the transaction has ended." SS2PL (or Rigorousness) is also the name of the set of all schedules that can be generated by this mechanism, i.e., these SS2PL (or Rigorous) schedules have the SS2PL (or Rigorousness) property.
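The rule can be sketched in a few lines of Python (a toy illustration under simplifying assumptions, not a real lock manager): a transaction acquires a lock on each object before touching it and releases everything only when it ends:

    import threading

    locks = {"X": threading.Lock(), "Y": threading.Lock()}  # one lock per object

    def transaction():
        held = []
        try:
            for obj in ("X", "Y"):   # acquire before each access (growing phase)
                locks[obj].acquire()
                held.append(obj)
                # ... read/write obj here ...
            # commit point
        finally:
            for obj in reversed(held):  # release all locks only after the end
                locks[obj].release()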
Concurrency control mechanisms firstly need to operate correctly, i.e., to maintain each transaction's integrity rules (as related to concurrency; application-specific integrity rules are out of scope here) while transactions are running concurrently, and thus the integrity of the entire transactional system. Correctness needs to be achieved with as good performance as possible. In addition, increasingly a need exists to operate effectively while transactions are distributed over processes, computers, and computer networks. Other subjects that may affect concurrency control are recovery and replication.
For correctness, a common major goal of most concurrency control mechanisms is generating schedules with the serializability property. Without serializability undesirable phenomena may occur, e.g., money may disappear from accounts, or be generated from nowhere. Serializability of a schedule means equivalence (in the resulting database values) to some serial schedule with the same transactions (i.e., in which transactions are sequential with no overlap in time, and thus completely isolated from each other: no concurrent access by any two transactions to the same data is possible). Serializability is considered the highest level of isolation among database transactions, and the major correctness criterion for concurrent transactions. In some cases compromised, relaxed forms of serializability are allowed for better performance (e.g., the popular snapshot isolation mechanism) or to meet availability requirements in highly distributed systems (see eventual consistency), but only if the application's correctness is not violated by the relaxation (e.g., no relaxation is allowed for money transactions, since by relaxation money can disappear, or appear from nowhere).
Almost all implemented concurrency control mechanisms achieve serializability by providing conflict serializability, a broad special case of serializability (i.e., it covers, enables most serializable schedules, and does not impose significant additional delay-causing constraints) which can be implemented efficiently.
Concurrency control typically also ensures the recoverability property of schedules, for maintaining correctness in cases of aborted transactions (which can always happen for many reasons). Recoverability (from abort) means that no committed transaction in a schedule has read data written by an aborted transaction. Such data disappear from the database (upon the abort) and are parts of an incorrect database state. Reading such data violates the consistency rule of ACID. Unlike serializability, recoverability cannot be compromised or relaxed in any case, since any relaxation results in quick database integrity violation upon aborts. The major methods listed above provide serializability mechanisms. None of them in its general form automatically provides recoverability, and special considerations and mechanism enhancements are needed to support recoverability. A commonly utilized special case of recoverability is strictness, which allows efficient database recovery from failure (but excludes optimistic implementations).
With the fast technological development of computing, the difference between local and distributed computing over low-latency networks or buses is blurring. Thus the quite effective utilization of local techniques in such distributed environments is common, e.g., in computer clusters and multi-core processors. However, the local techniques have their limitations and use multi-processes (or threads) supported by multi-processors (or multi-cores) to scale. This often turns transactions into distributed ones, if they themselves need to span multi-processes. In these cases most local concurrency control techniques do not scale well.
All systems are prone to failures, and handling recovery from failure is a must. The properties of the generated schedules, which are dictated by the concurrency control mechanism, may affect the effectiveness and efficiency of recovery. For example, the strictness property (mentioned in the section Recoverability above) is often desirable for an efficient recovery.
For high availability, database objects are often replicated. Updates of replicas of a same database object need to be kept synchronized. This may affect the way concurrency control is done (e.g., Gray et al. 1996[2]).
Multitasking operating systems, especially real-time operating systems, need to maintain the illusion that all tasks running on top of them are all running at the same time, even though only one or a few tasks really are running at any given moment due to the limitations of the hardware the operating system is running on. Such multitasking is fairly simple when all tasks are independent from each other. However, when several tasks try to use the same resource, or when tasks try to share information, it can lead to confusion and inconsistency. The task of concurrent computing is to solve that problem. Some solutions involve "locks" similar to the locks used in databases, but they risk causing problems of their own such as deadlock. Other solutions are non-blocking algorithms and read-copy-update.
|
https://en.wikipedia.org/wiki/Concurrency_control#Concurrency_control_mechanisms
|
In the fields of databases and transaction processing (transaction management), a schedule (or history) of a system is an abstract model to describe the order of executions in a set of transactions running in the system. Often it is a list of operations (actions) ordered by time, performed by a set of transactions that are executed together in the system. If the order in time between certain operations is not determined by the system, then a partial order is used. Examples of such operations are requesting a read operation, reading, writing, aborting, committing, requesting a lock, locking, etc. Often, only a subset of the transaction operation types are included in a schedule.
Schedules are fundamental concepts in database concurrency control theory. In practice, most general purpose database systems employ conflict-serializable and strict recoverable schedules.
Grid notation: each column holds the operations of a single transaction, and rows follow the order of execution in time.
Operations (a.k.a. actions): Ri(X) denotes a read of object X by transaction Ti, Wi(X) a write of X by Ti, Comi a commit by Ti, and Ai an abort by Ti.
Alternatively, a schedule can be represented with a directed acyclic graph (DAG) in which there is an arc (i.e., directed edge) between each ordered pair of operations.
The following is an example of a schedule:
In this example, the columns represent the different transactions in the schedule D. Schedule D consists of three transactions T1, T2, T3. First T1 Reads and Writes to object X, and then Commits. Then T2 Reads and Writes to object Y and Commits, and finally, T3 Reads and Writes to object Z and Commits.
The schedule D above can be represented as list in the following way:
D = R1(X) W1(X) Com1 R2(Y) W2(Y) Com2 R3(Z) W3(Z) Com3
Usually, for the purpose of reasoning about concurrency control in databases, an operation is modelled as atomic, occurring at a point in time, without duration. Real executed operations always have some duration.
Operations of transactions in a schedule can interleave (i.e., transactions can be executed concurrently), but time orders between operations in each transaction must remain unchanged. The schedule is in partial order when the operations of transactions in a schedule interleave (i.e., when the schedule is conflict-serializable but not serial). The schedule is in total order when the operations of transactions in a schedule do not interleave (i.e., when the schedule is serial).
A complete schedule is one that contains either an abort (a.k.a. rollback) or commit action for each of its transactions. A transaction's last action is either to commit or abort. To maintain atomicity, a transaction must undo all its actions if it is aborted.
A schedule is serial if the executed transactions are non-interleaved (i.e., a serial schedule is one in which no transaction starts until a running transaction has ended).
Schedule D is an example of a serial schedule:
A schedule is serializable if it is equivalent (in its outcome) to a serial schedule.
In schedule E, the order in which the actions of the transactions are executed is not the same as in D, but in the end, E gives the same result as D.
Serializability is used to keep the data in the data item in a consistent state. It is the major criterion for the correctness of concurrent transactions' schedules, and thus supported in all general purpose database systems. Schedules that are not serializable are likely to generate erroneous outcomes, which can be extremely harmful (e.g., when dealing with money within banks).[1][2][3]
If any specific order between some transactions is requested by an application, then it is enforced independently of the underlying serializability mechanisms. These mechanisms are typically indifferent to any specific order, and generate some unpredictable partial order that is typically compatible with multiple serial orders of these transactions.
Two actions are said to be in conflict (conflicting pair) if and only if all of the 3 following conditions are satisfied:
Equivalently, two actions are considered conflicting if and only if they are noncommutative. Equivalently, two actions are considered conflicting if and only if they are a read-write, write-read, or write-write conflict.
The following pairs of actions are conflicting (same object, different transactions, at least one write): R1(X) and W2(X); W1(X) and R2(X); W1(X) and W2(X).
The following pairs of actions are not conflicting: R1(X) and R2(X) (no write involved); R1(X) and W2(Y) (different objects); R1(X) and W1(X) (same transaction).
Reducing conflicts, such as through commutativity, enhances performance because conflicts are the fundamental cause of delays and aborts.
The conflict is materialized if the requested conflicting operation is actually executed. In many cases a requested/issued conflicting operation by a transaction is delayed and even never executed, typically by a lock on the operation's object, held by another transaction, or when writing to a transaction's temporary private workspace and materializing, copying to the database itself, upon commit. As long as a requested/issued conflicting operation is not executed upon the database itself, the conflict is non-materialized; non-materialized conflicts are not represented by an edge in the precedence graph.
The schedules S1 and S2 are said to be conflict-equivalent if and only if both of the following two conditions are satisfied: both schedules involve the same set of transactions with the same operations, and every pair of conflicting operations is ordered the same way in both schedules.
Equivalently, two schedules are said to be conflict-equivalent if and only if one can be transformed into the other by swapping pairs of non-conflicting operations (whether adjacent or not) while maintaining the order of actions for each transaction.[4]
Equivalently, two schedules are said to be conflict-equivalent if and only if one can be transformed into the other by swapping pairs of adjacent non-conflicting operations of different transactions.[7]
A schedule is said to be conflict-serializable when the schedule is conflict-equivalent to one or more serial schedules.
Equivalently, a schedule is conflict-serializable if and only if its precedence graph is acyclic when only committed transactions are considered. Note that if the graph is defined to also include uncommitted transactions, then cycles involving uncommitted transactions may occur without a conflict serializability violation.
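This suggests a simple test: build the precedence graph with an edge Ti -> Tj for every conflicting pair in which Ti's operation occurs first, then search for a cycle. A minimal C sketch, reusing the same hypothetical operation encoding as above:

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_TXN 8

typedef enum { READ, WRITE } OpType;
typedef struct { int txn; OpType type; char obj; } Op;

static bool edge[MAX_TXN][MAX_TXN]; /* precedence graph: edge[i][j] means Ti -> Tj */

static bool conflicts(Op a, Op b) {
    return a.txn != b.txn && a.obj == b.obj
        && (a.type == WRITE || b.type == WRITE);
}

/* DFS cycle detection: 0 = unvisited, 1 = on current path, 2 = done. */
static bool has_cycle(int v, int color[]) {
    color[v] = 1;
    for (int w = 0; w < MAX_TXN; w++) {
        if (!edge[v][w]) continue;
        if (color[w] == 1) return true;
        if (color[w] == 0 && has_cycle(w, color)) return true;
    }
    color[v] = 2;
    return false;
}

bool conflict_serializable(const Op *s, int n) {
    for (int i = 0; i < n; i++)          /* earlier operation ... */
        for (int j = i + 1; j < n; j++)  /* ... conflicting with a later one */
            if (conflicts(s[i], s[j]))
                edge[s[i].txn][s[j].txn] = true;
    int color[MAX_TXN] = {0};
    for (int v = 0; v < MAX_TXN; v++)
        if (color[v] == 0 && has_cycle(v, color)) return false;
    return true;
}

int main(void) {
    /* R1(X) W2(X) W1(X): edges T1->T2 and T2->T1 form a cycle. */
    Op s[] = {{1, READ, 'X'}, {2, WRITE, 'X'}, {1, WRITE, 'X'}};
    printf("conflict-serializable: %d\n", conflict_serializable(s, 3)); /* 0 */
    return 0;
}
```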
The schedule K is conflict-equivalent to the serial schedule <T1, T2>, but not to <T2, T1>.
Conflict serializability can be enforced by restarting any transaction within the cycle in the precedence graph, or by implementing two-phase locking, timestamp ordering, or serializable snapshot isolation.[8]
Two schedules S1 and S2 are said to be view-equivalent when the following conditions are satisfied: if a transaction reads the initial value of an object in S1, it also does so in S2; if a transaction reads a value written by another transaction in S1, it also does so in S2; and if a transaction writes the final value of an object in S1, it also does so in S2.
Additionally, two view-equivalent schedules must involve the same set of transactions such that each transaction has the same actions in the same order.
In the example below, the schedules S1 and S2 are view-equivalent, but neither S1 nor S2 is view-equivalent to the schedule S3.
The conditions for S3 to be view-equivalent to S1 and S2 are not satisfied at the corresponding subscripts.
To quickly analyze whether two schedules are view-equivalent, write both schedules as a list with each action's subscript indicating which view-equivalence condition it matches. The schedules are view-equivalent if and only if all the actions have the same subscript (or lack thereof) in both schedules.
A schedule is view-serializable if it is view-equivalent to some serial schedule. Note that by definition, all conflict-serializable schedules are view-serializable.
Notice that the above example (which is the same as the example in the discussion of conflict-serializability) is both view-serializable and conflict-serializable. There are, however, view-serializable schedules that are not conflict-serializable: those schedules with a transaction performing a blind write.
The above example is not conflict-serializable, but it is view-serializable since it has a view-equivalent serial schedule <T1, T2, T3>.
Since determining whether a schedule is view-serializable is NP-complete, view-serializability has little practical interest.
In a recoverable schedule, transactions only commit after all transactions whose changes they read have committed. A schedule becomes unrecoverable if a transaction Ti reads and relies on changes from another transaction Tj, and then Ti commits and Tj aborts.
These schedules are recoverable. The schedule F is recoverable because T1 commits before T2, which makes the value read by T2 correct; then T2 can commit itself. In the schedule F2, if T1 aborted, T2 would have to abort because the value of A it read is incorrect. In both cases, the database is left in a consistent state.
Schedule J is unrecoverable because T2 committed before T1 despite previously reading the value written by T1. Because T1 aborted after T2 committed, the value read by T2 is wrong. Because a transaction cannot be rolled back after it commits, the schedule is unrecoverable.
Cascadeless schedules (a.k.a. "avoiding cascading aborts (ACA)" schedules) are schedules which avoid cascading aborts by disallowing dirty reads. Cascading aborts occur when one transaction's abort causes another transaction to abort because it read and relied on the first transaction's changes to an object. A dirty read occurs when a transaction reads data from an uncommitted write in another transaction.[9]
The following examples are the same as the ones in the discussion on recoverable:
In this example, although F2 is recoverable, it does not avoid cascading aborts. It can be seen that if T1 aborts, T2 will have to be aborted too in order to maintain the correctness of the schedule, as T2 has already read the uncommitted value written by T1.
The following is a recoverable schedule which avoids cascading aborts. Note, however, that the update of A by T1 is always lost (since T1 is aborted).
Note also that this schedule would not be serializable if T1 were committed.
Avoidance of cascading aborts is sufficient but not necessary for a schedule to be recoverable.
A schedule is strict if for any two transactions T1, T2, whenever a write operation of T1 precedes a conflicting operation of T2 (either read or write), the commit or abort event of T1 also precedes that conflicting operation of T2. For example, the schedule F3 above is strict.
Any strict schedule is cascadeless, but the converse does not hold. Strictness allows efficient recovery of databases from failure.
The following expressions illustrate the hierarchical (containment) relationships between serializability and recoverability classes: serial ⊂ conflict-serializable ⊂ view-serializable ⊂ all schedules, and strict ⊂ cascadeless (ACA) ⊂ recoverable ⊂ all schedules.
A Venn diagram can illustrate the above containment relationships graphically.
|
https://en.wikipedia.org/wiki/Database_transaction_schedule#Serializable
|
OpenACC (for open accelerators) is a programming standard for parallel computing developed by Cray, CAPS, Nvidia and PGI. The standard is designed to simplify parallel programming of heterogeneous CPU/GPU systems.[1]
As in OpenMP, the programmer can annotate C, C++ and Fortran source code to identify the areas that should be accelerated using compiler directives and additional functions.[2] Like OpenMP 4.0 and newer, OpenACC can target both the CPU and GPU architectures and launch computational code on them.
OpenACC members have worked as members of the OpenMP standard group to merge OpenACC into the OpenMP specification, creating a common specification which extends OpenMP to support accelerators in a future release of OpenMP.[3][4] These efforts resulted in a technical report[5] for comment and discussion timed to coincide with the annual Supercomputing Conference (November 2012, Salt Lake City) and to address non-Nvidia accelerator support with input from hardware vendors who participate in OpenMP.[6]
At ISC'12, OpenACC was demonstrated to work on Nvidia, AMD and Intel accelerators, although without performance data.[7]
On November 12, 2012, at the SC12 conference, a draft of the OpenACC version 2.0 specification was presented.[8] New suggested capabilities include new controls over data movement (such as better handling of unstructured data and improvements in support for non-contiguous memory), and support for explicit function calls and separate compilation (allowing the creation and reuse of libraries of accelerated code). OpenACC 2.0 was officially released in June 2013.[9]
Version 2.5 of the specification was released in October 2015,[10]while version 2.6 was released in November 2017.[11]Subsequently, version 2.7 was released in November 2018.[12]
The latest version is version 3.3, which was released in November 2022.[13]
Support of OpenACC is available in commercial compilers from PGI (from version 12.6), and (for Cray hardware only) Cray.[7][14]
OpenUH[15] is an Open64-based open source OpenACC compiler supporting C and FORTRAN, developed by the HPCTools group at the University of Houston.
OpenARC[16] is an open source C compiler developed at Oak Ridge National Laboratory to support all features in the OpenACC 1.0 specification. An experimental[17] open source compiler, accULL, is developed by the University of La Laguna (C language only).[18]
Omni Compiler[19][20] is an open source compiler developed at the HPCS Laboratory of the University of Tsukuba and the Programming Environment Research Team of the RIKEN Center for Computational Science, Japan. It supports OpenACC, XcalableMP, and XcalableACC, which combines XcalableMP and OpenACC.
IPMACC[21] is an open source C compiler developed by the University of Victoria that translates OpenACC to CUDA, OpenCL, and ISPC. Currently, only the following directives are supported: data, kernels, loop, and cache.
GCC support for OpenACC was slow in coming.[22] A GPU-targeting implementation from Samsung was announced in September 2013; this translated OpenACC 1.1-annotated code to OpenCL.[17] The announcement of a "real" implementation followed two months later, this time from NVIDIA and based on OpenACC 2.0.[23] This sparked some controversy, as the implementation would only target NVIDIA's own PTX assembly language, for which no open source assembler or runtime was available.[24][25] Experimental support for OpenACC/PTX did end up in GCC as of version 5.1. The GCC 6 and GCC 7 release series include a much improved implementation of the OpenACC 2.0a specification.[26][27] GCC 9.1 offers nearly complete OpenACC 2.5 support.[28]
In a way similar to OpenMP 3.x on homogeneous systems or the earlier OpenHMPP, the primary mode of programming in OpenACC is directives.[29] The specifications also include a runtime library defining several support functions. To exploit them, the user should include "openacc.h" in C or "openacc_lib.h" in Fortran, and then call the acc_init() function.[30]
OpenACC defines an extensive list of pragmas (directives),[31] for example:
- parallel and kernels: both are used to define parallel computation kernels to be executed on the accelerator, using distinct semantics.[32][33]
- data: the main directive to define and copy data to and from the accelerator.
- loop: used to define the type of parallelism in a parallel or kernels region.
There are some runtime API functions defined too: acc_get_num_devices(), acc_set_device_type(), acc_get_device_type(), acc_set_device_num(), acc_get_device_num(), acc_async_test(), acc_async_test_all(), acc_async_wait(), acc_async_wait_all(), acc_init(), acc_shutdown(), acc_on_device(), acc_malloc(), acc_free().
OpenACC generally takes care of work organisation for the target device; however, this can be overridden through the use of gangs and workers. A gang consists of workers and operates over a number of processing elements (as with a workgroup in OpenCL).
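As a concrete illustration, the classic SAXPY loop annotated with an OpenACC directive might look like the following minimal C sketch. The data clauses and loop pragma are standard OpenACC, but the compiler invocation (e.g., nvc -acc or gcc -fopenacc) and the resulting performance vary by implementation:

```c
#include <stdio.h>
#include <stdlib.h>

/* y = a*x + y, offloaded to the accelerator if one is available.
 * The data clauses copy x in, and copy y in and back out. */
void saxpy(int n, float a, const float *restrict x, float *restrict y) {
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    enum { N = 1 << 20 };
    float *x = malloc(N * sizeof *x), *y = malloc(N * sizeof *y);
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(N, 3.0f, x, y);
    printf("y[0] = %f\n", y[0]); /* 5.0 */
    free(x); free(y);
    return 0;
}
```

Compiled without OpenACC support, the pragma is simply ignored and the loop runs on the host, which makes the sketch easy to test.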
|
https://en.wikipedia.org/wiki/OpenACC
|
Input/output completion port (IOCP) is an API for performing multiple simultaneous asynchronous input/output operations in Windows NT versions 3.5 and later,[1] AIX[2] and Solaris 10 and later.[3] An input/output completion port object is created and associated with a number of sockets or file handles. When I/O services are requested on the object, completion is indicated by a message queued to the I/O completion port. A process requesting I/O services is not notified of completion of the I/O services, but instead checks the I/O completion port's message queue to determine the status of its I/O requests. The I/O completion port manages multiple threads and their concurrency.
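A minimal Windows C sketch of the pattern: create a completion port, enqueue a completion packet, and dequeue it as a worker would. A real server would instead associate socket or file handles with the port and receive packets as overlapped I/O completes; error handling is omitted here for brevity.

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Create a completion port not yet associated with any file handle. */
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    /* Manually enqueue a completion packet; overlapped I/O on handles
     * associated with the port would enqueue packets the same way. */
    PostQueuedCompletionStatus(iocp, 42 /* bytes */, 7 /* key */, NULL);

    DWORD bytes;
    ULONG_PTR key;
    LPOVERLAPPED ov;
    /* A worker thread blocks here until a packet is queued. */
    if (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
        printf("completion: %lu bytes, key %llu\n",
               bytes, (unsigned long long)key);

    CloseHandle(iocp);
    return 0;
}
```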
|
https://en.wikipedia.org/wiki/Input/output_completion_port
|
The C10k problem is the problem of optimizing network sockets to handle a large number of clients at the same time.[1] The name C10k is a numeronym for concurrently handling ten thousand connections.[2] Handling many concurrent connections is a different problem from handling many requests per second: the latter requires high throughput (processing them quickly), while the former does not have to be fast, but requires efficient scheduling of connections.
The problem of socket server optimisation has been studied because a number of factors must be considered to allow a web server to support many clients. This can involve a combination of operating system constraints and web server software limitations. Depending on the scope of services to be made available, the capabilities of the operating system, and hardware considerations such as multi-processing capabilities, a multi-threading model or a single-threading model may be preferred. Alongside this aspect, which involves considerations regarding memory management (usually operating-system related), the strategies involved relate to the very diverse aspects of I/O management.[2]
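One classic single-threaded strategy is an event loop over non-blocking sockets, so that a single thread multiplexes thousands of connections rather than dedicating a thread to each. The following is a minimal POSIX C sketch of an echo server using poll(); the port number is arbitrary and error handling is abbreviated.

```c
#include <poll.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define MAX_FDS 10000 /* one slot per concurrent connection */

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, SOMAXCONN);

    struct pollfd fds[MAX_FDS] = {{.fd = lfd, .events = POLLIN}};
    int nfds = 1;

    for (;;) {
        poll(fds, nfds, -1); /* block until any socket is ready */
        for (int i = 0; i < nfds; i++) {
            if (!(fds[i].revents & POLLIN)) continue;
            if (fds[i].fd == lfd && nfds < MAX_FDS) {
                int c = accept(lfd, NULL, NULL);   /* new connection */
                fcntl(c, F_SETFL, O_NONBLOCK);
                fds[nfds++] = (struct pollfd){.fd = c, .events = POLLIN};
            } else if (fds[i].fd != lfd) {
                char buf[4096];
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n <= 0) {                      /* peer closed: drop slot */
                    close(fds[i].fd);
                    fds[i--] = fds[--nfds];
                } else {
                    write(fds[i].fd, buf, n);      /* echo back */
                }
            }
        }
    }
}
```

Because poll() scans every registered descriptor on each call, very large connection counts favor readiness APIs such as epoll or kqueue, but the loop structure stays the same.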
The term C10k was coined in 1999 by software engineer Dan Kegel,[3][4] citing the Simtel FTP host, cdrom.com, serving 10,000 clients at once over 1 gigabit per second Ethernet in that year.[1] The term has since been used for the general issue of large numbers of clients, with similar numeronyms for larger numbers of connections, most recently "C10M" in the 2010s to refer to 10 million concurrent connections.[5]
By the early 2010s millions of connections on a single commodity 1U rackmount server became possible: over 2 million connections (WhatsApp, 24 cores, using Erlang on FreeBSD)[6][7] and 10–12 million connections (MigratoryData, 12 cores, using Java on Linux).[5][8]
Common applications of very high numbers of connections include general public servers that have to serve thousands or even millions of users at a time, such as file servers, FTP servers, proxy servers, web servers, and load balancers.[9][5]
|
https://en.wikipedia.org/wiki/C10k_problem
|
In computer science, a task, job or process is said to be CPU-bound (or compute-bound) when the time it takes for it to complete is determined principally by the speed of the central processor. The term can also refer to the condition a computer running such a workload is in, in which its processor utilization is high, perhaps at 100% usage for many seconds or minutes, and interrupts generated by peripherals may be processed slowly or be indefinitely delayed.
CPU-bound jobs will spend most of their execution time on actual computation ("number crunching"[1]) as opposed to, e.g., communicating with and waiting for peripherals such as network or storage devices (which would make them I/O bound instead). Such jobs can often benefit from parallelization techniques such as multithreading if the underlying algorithm is amenable to it, allowing them to distribute their workload among multiple CPU cores and be limited by multi-core rather than single-core performance.
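For instance, a compute-bound loop can be split across cores with threads. A minimal POSIX C sketch (compile with -pthread; the arithmetic in the loop is an arbitrary stand-in for real number crunching):

```c
#include <pthread.h>
#include <stdio.h>

#define THREADS 4
#define N 100000000UL

/* Each thread sums a disjoint slice; no I/O, so the job is CPU-bound. */
typedef struct { unsigned long lo, hi, sum; } Slice;

static void *work(void *arg) {
    Slice *s = arg;
    s->sum = 0;
    for (unsigned long i = s->lo; i < s->hi; i++)
        s->sum += i % 7; /* arbitrary arithmetic standing in for real work */
    return NULL;
}

int main(void) {
    pthread_t tid[THREADS];
    Slice slice[THREADS];
    for (int t = 0; t < THREADS; t++) {
        slice[t].lo = N / THREADS * t;
        slice[t].hi = (t == THREADS - 1) ? N : N / THREADS * (t + 1);
        pthread_create(&tid[t], NULL, work, &slice[t]);
    }
    unsigned long total = 0;
    for (int t = 0; t < THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += slice[t].sum;
    }
    printf("total = %lu\n", total);
    return 0;
}
```

Because the slices are independent and touch no shared state until the final join, the speedup on a multi-core machine can approach the number of threads.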
The concept of CPU-bounding arose in the era of early computers, when data paths between computer components were simpler and it was possible to visually observe one component working while another was idle. Example components were the CPU, tape drives, hard disks, card readers, and printers. Computers whose workloads were dominated by peripherals were characterized as I/O bound. Establishing that a computer is frequently CPU-bound implies that upgrading the CPU or optimizing code will improve the overall computer performance.
With the advent of multiple buses, parallel processing, multiprogramming, preemptive scheduling, advanced graphics cards, advanced sound cards and, generally, more decentralized loads, it became less likely that one particular component could be identified as always being a bottleneck. It is likely that a computer's bottleneck shifts rapidly between components. Furthermore, in modern computers it is possible to have 100% CPU utilization with minimal impact on other components. Finally, tasks required of modern computers often emphasize quite different components, so that resolving a bottleneck for one task may not affect the performance of another. For these reasons, upgrading a CPU does not always have a dramatic effect. The concept of being CPU-bound is now one of many factors considered in modern computing performance.
|
https://en.wikipedia.org/wiki/CPU-bound
|
Memory bound refers to a situation in which the time to complete a given computational problem is decided primarily by the amount of free memory required to hold the working data. This is in contrast to algorithms that are compute-bound, where the number of elementary computation steps is the deciding factor.
Memory and computation boundaries can sometimes be traded against each other, e.g. by saving and reusing preliminary results or using lookup tables.
Memory-bound functions and memory functions are related in that both involve extensive memory access, but a distinction exists between the two.
Memory functions use a dynamic programming technique called memoization in order to relieve the inefficiency of recursion that might occur. It is based on the simple idea of calculating and storing solutions to subproblems so that the solutions can be reused later without recalculating the subproblems again. The best known example that takes advantage of memoization is an algorithm that computes the Fibonacci numbers. The following pseudocode uses recursion and memoization, and runs in linear CPU time:
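In C, the memoized algorithm might be rendered as the following sketch:

```c
#include <stdio.h>

#define MAX_N 90
static unsigned long long memo[MAX_N + 1]; /* 0 marks "not yet computed" */

/* Each fib(n) is computed once and cached, so the recursion
 * does O(n) work instead of exponential work. */
unsigned long long fib(int n) {
    if (n <= 1) return n;
    if (memo[n] == 0)
        memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void) {
    printf("fib(50) = %llu\n", fib(50)); /* 12586269025, computed instantly */
    return 0;
}
```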
Compare the above to an algorithm that uses only recursion, and runs in exponential CPU time:
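The recursion-only variant, rendered in the same way:

```c
#include <stdio.h>

/* Recomputes the same subproblems repeatedly: the number of calls
 * grows exponentially in n, so the running time does too. */
unsigned long long fib(int n) {
    if (n <= 1) return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void) {
    printf("fib(40) = %llu\n", fib(40)); /* already noticeably slow */
    return 0;
}
```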
While the recursive-only algorithm is simpler and more elegant than the algorithm that uses recursion and memoization, the latter has a significantly lower time complexity than the former.
The term "memory-bound function" has surfaced only recently and is used principally to describe a function that uses XOR and consists of a series of computations in which each computation depends on the previous computation. Whereas memory functions have long been an important actor in improving time complexity, memory-bound functions have seen far fewer applications. Recently[when?], however, scientists have proposed a method using memory-bound functions as a means to discourage spammers from abusing resources, which could be a major breakthrough in that area.
Memory-bound functions might be useful in a proof-of-work system that could deter spam, which has become a problem of epidemic proportions on the Internet.
In 1992, IBM research scientists Cynthia Dwork and Moni Naor published a paper at CRYPTO 1992 titled Pricing via Processing or Combatting Junk Mail,[1] suggesting a possibility of using CPU-bound functions to deter abusers from sending spam. The scheme was based on the idea that computer users are much more likely to abuse a resource if the cost of abusing the resource is negligible: the underlying reason spam has become so rampant is that sending an e-mail has minuscule cost for spammers.
Dwork and Naor proposed that spamming might be reduced by injecting an additional cost in the form of an expensive CPU computation: CPU-bound functions would consume CPU resources at the sender's machine for each message, thus preventing huge amounts of spam from being sent in a short period.
The basic scheme that protects against abuses is as follows: given a Sender, a Recipient, and an email Message, if Recipient has agreed beforehand to receive e-mail from Sender, then Message is transmitted in the usual way. Otherwise, Sender computes some function G(Message) and sends (Message, G(Message)) to Recipient. Recipient checks if what it receives from Sender is of the form (Message, G(Message)). If yes, Recipient accepts Message. Otherwise, Recipient rejects Message.
The function G() is selected such that the verification by Recipient is relatively fast (e.g., taking a millisecond) and such that the computation by Sender is somewhat slow (involving at least several seconds). Therefore, Sender will be discouraged from sending Message to multiple recipients with no prior agreements: the cost in terms of both time and computing resources of computing G() repeatedly will become very prohibitive for a spammer who intends to send many millions of e-mails.
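A toy C sketch of the idea, recognizable in later proof-of-work schemes such as Hashcash: computing G amounts to searching for a nonce whose hash over the message has a required number of leading zero bits, which is slow to find but fast to verify. The FNV-1a hash used here is a non-cryptographic stand-in chosen for brevity; a real deployment would use a cryptographic hash.

```c
#include <stdio.h>
#include <stdint.h>

#define DIFFICULTY 20 /* required leading zero bits; tune the cost here */

/* FNV-1a: a toy, non-cryptographic hash standing in for a real one. */
static uint64_t hash(const char *msg, uint64_t nonce) {
    uint64_t h = 1469598103934665603ULL;
    for (const char *p = msg; *p; p++)
        h = (h ^ (uint8_t)*p) * 1099511628211ULL;
    for (int i = 0; i < 8; i++)
        h = (h ^ ((nonce >> (8 * i)) & 0xff)) * 1099511628211ULL;
    return h;
}

/* Sender: expensive search for a nonce (the "G(Message)" computation). */
static uint64_t compute_G(const char *msg) {
    for (uint64_t nonce = 0;; nonce++)
        if (hash(msg, nonce) >> (64 - DIFFICULTY) == 0) return nonce;
}

/* Recipient: verification is a single hash evaluation. */
static int verify_G(const char *msg, uint64_t nonce) {
    return hash(msg, nonce) >> (64 - DIFFICULTY) == 0;
}

int main(void) {
    const char *msg = "hello recipient";
    uint64_t proof = compute_G(msg); /* roughly 2^DIFFICULTY hash trials */
    printf("nonce %llu, valid: %d\n",
           (unsigned long long)proof, verify_G(msg, proof));
    return 0;
}
```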
The major problem with the above scheme is that fast CPUs compute much faster than slow CPUs. Further, higher-end computer systems also have sophisticated pipelines and other advantageous features that facilitate computations. As a result, a spammer with a state-of-the-art system will hardly be affected by such deterrence, while a typical user with a mediocre system will be adversely affected. If a computation takes a few seconds on a new PC, it may take a minute on an old PC, and several minutes on a PDA, which might be a nuisance for users of old PCs, but probably unacceptable for users of PDAs. The disparity in client CPU speed constitutes one of the prominent roadblocks to widespread adoption of any scheme based on a CPU-bound function. Therefore, researchers are concerned with finding functions that most computer systems will evaluate at about the same speed, so that high-end systems might evaluate these functions somewhat faster than low-end systems (2–10 times faster, but not the 10–100 times faster that CPU disparities might imply). These ratios are "egalitarian" enough for the intended applications: the functions are effective in discouraging abuses and do not add a prohibitive delay on legitimate interactions, across a wide range of systems.
The new egalitarian approach is to rely on memory-bound functions. As stated before, a memory-bound function is a function whose computation time is dominated by the time spent accessing memory. A memory-bound function accesses locations in a large region of memory in an unpredictable way, such that using caches is not effective. In recent years, the speed of CPUs has grown drastically, but there has been comparatively little progress in developing faster main memory. Since the ratios of memory latencies of machines built in the last five years are typically no greater than two, and almost always less than four, memory-bound functions will be egalitarian for most systems for the foreseeable future.
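The essential ingredient is a dependent chain of reads at unpredictable addresses over a region much larger than any cache, so that each step costs roughly a full memory latency. A minimal C sketch (the array size and step count are arbitrary choices):

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define N (1u << 26)   /* 64M slots (256 MB): far larger than any CPU cache */
#define STEPS 10000000u

int main(void) {
    uint32_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    /* Pseudo-random successor for each slot: the next address depends on
     * the current load, so the chain cannot be prefetched effectively. */
    for (uint32_t i = 0; i < N; i++)
        next[i] = (2654435761u * (i + 1)) % N; /* multiplicative hashing */

    uint32_t pos = 0;
    for (uint32_t s = 0; s < STEPS; s++)
        pos = next[pos]; /* each iteration costs roughly one memory latency */

    printf("final position: %u\n", pos); /* keeps the loop from being elided */
    free(next);
    return 0;
}
```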
|
https://en.wikipedia.org/wiki/Memory-bound_function
|
A display device is an output device for presentation of information in visual[1] or tactile form (the latter used for example in tactile electronic displays for blind people).[2] When the input information is supplied as an electrical signal, the display is called an electronic display.
Common applications for electronic visual displays are television sets or computer monitors.
These are the technologies used to create the various displays in use today.
Some displays can show only digits or alphanumeric characters. They are called segment displays, because they are composed of several segments that switch on and off to give the appearance of the desired glyph. The segments are usually single LEDs or liquid crystals. They are mostly used in digital watches and pocket calculators. Common types are seven-segment displays, which are used for numerals only, and alphanumeric fourteen-segment displays and sixteen-segment displays, which can display numerals and Roman alphabet letters.
Cathode-ray tubes were also formerly widely used.
2-dimensional displays that cover a full area (usually a rectangle) are also called video displays, since this is the main modality of presenting video.
Full-area 2-dimensional displays are used in, for example:
Underlying technologies for full-area 2-dimensional displays include:
The multiplexed display technique is used to drive most display devices.
|
https://en.wikipedia.org/wiki/Display_device
|
A peripheral device, or simply peripheral, is an auxiliary hardware device that a computer uses to transfer information externally.[1] A peripheral is a hardware component that is accessible to and controlled by a computer but is not a core component of the computer.
A peripheral can be categorized based on the direction in which information flows relative to the computer: an input device sends data to the computer (e.g., a keyboard or mouse); an output device receives data from the computer (e.g., a monitor or printer); and an input/output device does both (e.g., a storage drive or network interface).
Many modern electronic devices, such as Internet-enabled digital watches, video game consoles, smartphones, and tablet computers, have interfaces for use as a peripheral.
|
https://en.wikipedia.org/wiki/Peripheral
|
Lists of record labels cover record labels, brands or trademarks associated with the marketing of music recordings and music videos. The lists are organized alphabetically, by genre, by company and by location.
|
https://en.wikipedia.org/wiki/List_of_record_labels
|
Network, networking and networked may refer to:
|
https://en.wikipedia.org/wiki/Network_(disambiguation)
|
The following tables compare general and technical information for a number of file archivers. Please see the individual products' articles for further information. The tables are not all-inclusive, nor are all entries necessarily up to date. Unless otherwise specified in the footnotes section, comparisons are based on the stable versions, without add-ons, extensions or external programs.
Basic general information about the archivers.
Legend: Free/no cost, Paid, Cost depends, Open source (licenses), Proprietary.
The operating systems the archivers can run on without emulation or a compatibility layer. Ubuntu's own GUI archive manager, for example, can open and create many archive formats (including RAR archives), even to the extent of splitting into parts and encryption, with the result remaining readable by the native program; this is presumably done through a compatibility layer.
Information about what common archiver features are implemented natively (without third-party add-ons).
Information about what archive formats the archivers can read. External links lead to information about support in future versions of the archiver or extensions that provide such functionality. Note that gzip, bzip2 and xz are compression formats rather than archive formats.
Information about what archive formats the archivers[a] can write and create. External links lead to information about support in future versions of the archiver or extensions that provide such functionality. Note that gzip, bzip2 and xz are compression formats rather than archive formats.
PeaZip has full support for Brotli, Zstandard, various LPAQ and PAQ formats, QUAD / BALZ / BCM (highly efficient ROLZ-based compressors), the FreeArc format, and its native PEA format.
7-Zip includes read support for .msi, cpio and xar, plus Apple's dmg/HFS disk images and the deb/.rpm package distribution formats; beta versions (9.07 onwards) have full support for the LZMA2-compressed .xz format.[50]
|
https://en.wikipedia.org/wiki/Comparison_of_file_archivers
|
This is a list of file formats used by archivers and compressors used to create archive files.
Archive formats are used for backups, mobility, and archiving. Many archive formats compress the data to consume less storage space and result in quicker transfer times, as the same data is represented by fewer bytes. Another benefit is that files are combined into one archive file, which has less overhead for managing or transferring. There are numerous compression algorithms available to losslessly compress archived data; some algorithms are designed to work better (smaller archive or faster compression) with particular data types. Archive formats are used by most operating systems to package software for easier distribution and installation than binary executables.
Made obsolete by the introduction of AppleDouble-encoded 7z archives (Macintosh only).
.pak was also briefly used by the short-lived MS-DOS PKPAK program.
|
https://en.wikipedia.org/wiki/List_of_archive_formats
|
In computing, an aperture is a portion of physical address space (i.e. physical memory) that is associated with a particular peripheral device or a memory unit. Apertures may reach external devices such as ROM or RAM chips, or internal memory on the CPU itself.
Typically, a memory device attached to a computer accepts addresses starting at zero, and so a system with more than one such device would have ambiguous addressing. To resolve this, the memory logic will contain several aperture selectors, each containing a range selector and an interface to one of the memory devices.
The set of selector address ranges of the apertures is disjoint. When the CPU presents a physical address within the range recognized by an aperture, the aperture unit routes the request (with the address remapped to a zero base) to the attached device. Thus, apertures form a layer of address translation below the level of the usual virtual-to-physical mapping.
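A minimal C sketch of such decoding, with a hypothetical memory map: each selector matches a disjoint physical range and forwards the access with the address rebased to zero.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef struct {
    const char *device;  /* attached memory device */
    uint32_t base, size; /* physical address range this selector matches */
} Aperture;

/* Hypothetical memory map: disjoint selector ranges, one per device. */
static const Aperture map[] = {
    {"RAM", 0x00000000, 0x40000000},
    {"ROM", 0xF0000000, 0x00100000},
    {"GPU", 0xD0000000, 0x10000000},
};

/* Route a physical address: find the matching aperture and rebase
 * the address to zero for the attached device. */
void route(uint32_t paddr) {
    for (size_t i = 0; i < sizeof map / sizeof map[0]; i++)
        if (paddr - map[i].base < map[i].size) { /* unsigned wrap handles paddr < base */
            printf("0x%08X -> %s offset 0x%08X\n",
                   paddr, map[i].device, paddr - map[i].base);
            return;
        }
    printf("0x%08X -> bus error (no aperture)\n", paddr);
}

int main(void) {
    route(0x00001000); /* RAM */
    route(0xF0000010); /* ROM */
    route(0xC0000000); /* unmapped */
    return 0;
}
```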
|
https://en.wikipedia.org/wiki/Aperture_(computer_memory)
|