Structured English is the use of the English language with the syntax of structured programming to communicate the design of a computer program to non-technical users by breaking it down into logical steps using straightforward English words. Structured English aims to get the benefits of both programming logic and natural language: program logic helps to attain precision, whilst natural language helps with the familiarity of the spoken word.[1]
It is the basis of some programming languages such as SQL (Structured Query Language) "for use by people who have need for interaction with a large database but who are not trained programmers".[2]
Structured English is a limited-form "pseudocode" and consists of the following elements:
The following guidelines are used when writing Structured English:[3]
APPROVE LOAN
Though useful for planning programs, modules and routines, or describing algorithms, it is less useful when numerous decisions need to be made.[4]
System processes at a lower level involve a lot of computations and require more precision and clarity. This can be achieved with tools such as decision trees or decision tables.
|
https://en.wikipedia.org/wiki/Structured_English
|
The Simple English Wikipedia is a modified English-language edition of Wikipedia written primarily in Basic English and Learning English.[3] It is one of seven Wikipedias written in an Anglic language or English-based pidgin or creole. The site has the stated aim of providing an encyclopedia for "people with different needs, such as students, children, adults with learning difficulties, and people who are trying to learn English."[4]
Simple English Wikipedia's basic presentation style makes it helpful for beginners learning English.[5] Its simpler word structure and syntax, while missing some nuances, can make information easier to understand when compared with the regular English Wikipedia.
The Simple English Wikipedia was launched on September 18, 2001.[1][2]
In 2012, Andrew Lih, a Wikipedian and author, told NBC News' Helen A.S. Popkin that the Simple English Wikipedia does not "have a high standing in the Wikipedia community", and added that it never had a clear purpose: "Is it for people under the age 14, or just a simpler version of complex articles?", wrote Popkin.[6]
Material from the Simple English Wikipedia formed the basis for One Encyclopedia per Child,[7] a project in One Laptop per Child[8] that ended in 2014.[9]
In 2018, the Simple English Wikipedia was proposed for closure on the claim that there was no proof it served its target audience, but the proposal was judged unjustified and was rejected.[10]
As of May 2025, the site contains over 269,000 content pages. It has more than 1,597,000 registered users, of whom 1,737 have made an edit in the past month.[11]
The articles on the Simple English Wikipedia are usually shorter than their English Wikipedia counterparts, typically presenting only basic information. Tim Dowling of The Guardian newspaper explained that "the Simple English version tends to stick to commonly accepted facts".[12] The interface is also more simply labeled; for instance, the "Random article" link on the English Wikipedia is replaced with a "Show any page" link; users are invited to "change" rather than "edit" pages; and clicking on a red link shows a "page not created" message rather than the usual "page does not exist".[13] The project encourages, but does not enforce, the use of a vocabulary of around 1,500 commonly used English words[3] that is based on Basic English, an 850-word controlled natural language created by Charles Kay Ogden in the 1920s.[12]
|
https://en.wikipedia.org/wiki/Simple_English_Wikipedia
|
Combinatory categorial grammar (CCG) is an efficiently parsable, yet linguistically expressive grammar formalism. It has a transparent interface between surface syntax and underlying semantic representation, including predicate–argument structure, quantification and information structure. The formalism generates constituency-based structures (as opposed to dependency-based ones) and is therefore a type of phrase structure grammar (as opposed to a dependency grammar).
CCG relies on combinatory logic, which has the same expressive power as the lambda calculus, but builds its expressions differently. The first linguistic and psycholinguistic arguments for basing the grammar on combinators were put forth by Steedman and Szabolcsi.
More recent prominent proponents of the approach are Pauline Jacobson and Jason Baldridge. In these new approaches, the combinator B (the compositor) is useful in creating long-distance dependencies, as in "Who do you think Mary is talking about?", and the combinator W (the duplicator) is useful as the lexical interpretation of reflexive pronouns, as in "Mary talks about herself". Together with I (the identity mapping) and C (the permutator) these form a set of primitive, non-interdefinable combinators. Jacobson interprets personal pronouns as the combinator I, and their binding is aided by a complex combinator Z, as in "Mary lost her way". Z is definable using W and B.
The CCG formalism defines a number of combinators (application, composition, and type-raising being the most common). These operate on syntactically typed lexical items by means of natural deduction-style proofs. The goal of the proof is to find some way of applying the combinators to a sequence of lexical items until no lexical item is unused in the proof. The resulting type after the proof is complete is the type of the whole expression. Thus, proving that some sequence of words is a sentence of some language amounts to proving that the words reduce to the type S.
The syntactic type of a lexical item can be either a primitive type, such as S, N, or NP, or a complex type, such as S\NP or NP/N.
The complex types, schematizable as X/Y and X\Y, denote functor types that take an argument of type Y and return an object of type X. A forward slash denotes that the argument should appear to the right, while a backslash denotes that the argument should appear on the left. Any type can stand in for X and Y here, making syntactic types in CCG a recursive type system.
The application combinators, often denoted by > for forward application and < for backward application, apply a lexical item with a functor type to an argument with an appropriate type. The definition of application is given as:
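In standard presentations of CCG these rules are:
X/Y  Y  ⇒  X   (>)
Y  X\Y  ⇒  X   (<)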
The composition combinators, often denoted by B> for forward composition and B< for backward composition, are similar to function composition from mathematics, and can be defined as follows:
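In standard presentations these are:
X/Y  Y/Z  ⇒  X/Z   (B>)
Y\Z  X\Y  ⇒  X\Z   (B<)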
The type-raising combinators, often denoted as T> for forward type-raising and T< for backward type-raising, take argument types (usually primitive types) to functor types, which take as their argument the functors that, before type-raising, would have taken them as arguments.
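For an arbitrary type T, the standard type-raising rules are:
X  ⇒  T/(T\X)   (T>)
X  ⇒  T\(T/X)   (T<)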
The sentence "the dog bit John" has a number of different possible proofs. Below are a few of them. The variety of proofs demonstrates the fact that in CCG, sentences don't have a single structure, as in other models of grammar.
Let the types of these lexical items be
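A standard assignment, consistent with the proofs sketched below, is: the := NP/N, dog := N, bit := (S\NP)/NP, John := NP.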
We can perform the simplest proof (changing notation slightly for brevity) as:
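Under that assignment, "the" applies to "dog" by forward application to give NP; "bit" applies to "John" by forward application to give S\NP; and the resulting NP and S\NP combine by backward application to give S, so the whole string reduces to the sentence type S.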
Opting to type-raise and compose some, we could get a fully incremental, left-to-right proof. The ability to construct such a proof is an argument for the psycholinguistic plausibility of CCG, because listeners do in fact construct partial interpretations (syntactic and semantic) of utterances before they have been completed.
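Concretely, such a proof first forward type-raises "the dog" from NP to S/(S\NP), forward-composes that with "bit" of type (S\NP)/NP to obtain S/NP, and finally applies the result to "John" of type NP to yield S.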
CCGs are known to be able to generate the language { aⁿbⁿcⁿdⁿ : n ≥ 0 } (which is a non-context-free indexed language). A grammar for this language can be found in Vijay-Shanker and Weir (1994).[1]
Vijay-Shanker and Weir (1994)[1] demonstrate that Linear Indexed Grammars, Combinatory Categorial Grammars, Tree-adjoining Grammars, and Head Grammars are weakly equivalent formalisms, in that they all define the same string languages. Kuhlmann et al. (2015)[2] show that this equivalence, and the ability of CCG to describe aⁿbⁿcⁿdⁿ, rely crucially on the ability to restrict the use of the combinatory rules to certain categories, in ways not explained above.
|
https://en.wikipedia.org/wiki/Combinatory_categorial_grammar
|
Lexical functional grammar (LFG) is a constraint-based grammar framework in theoretical linguistics. It posits two separate levels of syntactic structure: a phrase structure grammar representation of word order and constituency, and a representation of grammatical functions such as subject and object, similar to dependency grammar. The development of the theory was initiated by Joan Bresnan and Ronald Kaplan in the 1970s, in reaction to the theory of transformational grammar which was current in the late 1970s. It mainly focuses on syntax, including its relation with morphology and semantics. There has been little LFG work on phonology (although ideas from optimality theory have recently been popular in LFG research).
LFG views language as being made up of multiple dimensions of structure. Each of these dimensions is represented as a distinct structure with its own rules, concepts, and form. The primary structures that have figured in LFG research are:
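Chief among these are c-structure (constituent structure), the phrase structure tree over words, and f-structure (functional structure), the attribute–value representation of grammatical functions and features.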
For example, in the sentence The old woman eats the falafel, the c-structure analysis is that this is a sentence which is made up of two pieces, a noun phrase (NP) and a verb phrase (VP). The VP is itself made up of two pieces, a verb (V) and another NP. The NPs are also analyzed into their parts. Finally, the bottom of the structure is composed of the words out of which the sentence is constructed. The f-structure analysis, on the other hand, treats the sentence as being composed of attributes, which include features such as number and tense or functional units such as subject, predicate, or object.
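As a rough illustration (not taken from the article), the f-structure for this sentence can be written as a nested attribute–value mapping; the feature names in the sketch below are typical LFG labels chosen for illustration:

```python
# A rough sketch of an LFG f-structure for "The old woman eats the falafel",
# written as a nested Python dict. Feature names are illustrative assumptions.
f_structure = {
    "PRED": "eat<SUBJ, OBJ>",      # the predicate and the functions it governs
    "TENSE": "present",
    "SUBJ": {                      # grammatical subject
        "PRED": "woman",
        "DEF": True,
        "NUM": "sg",
        "ADJUNCT": [{"PRED": "old"}],
    },
    "OBJ": {                       # grammatical object
        "PRED": "falafel",
        "DEF": True,
        "NUM": "sg",
    },
}

print(f_structure["SUBJ"]["PRED"])   # -> "woman"
```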
There are other structures which are hypothesized in LFG work:
The various structures can be said to bemutually constraining.
The LFG conception of linguistic structure differs from Chomskyan theories, which have always involved separate levels of constituent structure representation mapped onto each other sequentially, via transformations. The LFG approach has had particular success with nonconfigurational languages, languages in which the relation between structure and function is less direct than it is in languages like English; for this reason LFG's adherents consider it a more plausible universal model of language.
Another feature of LFG is that grammatical-function changing operations like passivization are relations between word forms rather than sentences. This means that the active–passive relation, for example, is a relation between two types of verb rather than two trees. Active and passive verbs involve alternative mapping of the participants to grammatical functions.
Through the positing of productive processes in the lexicon and the separation of structure and function, LFG is able to account for syntactic patterns without the use of transformations defined over syntactic structure. For example, in a sentence like What did you see?, where what is understood as the object of see, transformational grammar puts what after see (the usual position for objects) in "deep structure", and then moves it. LFG analyzes what as having two functions: question-focus and object. It occupies the position associated in English with the question-focus function, and the constraints of the language allow it to take on the object function as well.
A central goal in LFG research is to create a model of grammar with a depth which appeals to linguists while at the same time being efficiently parsable and having the rigidity of formalism which computational linguists require. Because of this, computational parsers have been developed and LFG has also been used as the theoretical basis of various machine translation tools, such as AppTek's TranSphere and the Julietta Research Group's Lekta.
|
https://en.wikipedia.org/wiki/Lexical_functional_grammar
|
Tree-adjoining grammar (TAG) is a grammar formalism defined by Aravind Joshi. Tree-adjoining grammars are somewhat similar to context-free grammars, but the elementary unit of rewriting is the tree rather than the symbol. Whereas context-free grammars have rules for rewriting symbols as strings of other symbols, tree-adjoining grammars have rules for rewriting the nodes of trees as other trees (see tree (graph theory) and tree (data structure)).
TAG originated in investigations by Joshi and his students into the family of adjunction grammars (AG),[1] the "string grammar" of Zellig Harris.[2] AGs handle exocentric properties of language in a natural and effective way, but do not have a good characterization of endocentric constructions; the converse is true of rewrite grammars, or phrase-structure grammars (PSG). In 1969, Joshi introduced a family of grammars that exploits this complementarity by mixing the two types of rules. A few very simple rewrite rules suffice to generate the vocabulary of strings for adjunction rules. This family is distinct from the Chomsky–Schützenberger hierarchy but intersects it in interesting and linguistically relevant ways.[3] The center strings and adjunct strings can also be generated by a dependency grammar, avoiding the limitations of rewrite systems entirely.[4][5]
The rules in a TAG are trees with a special leaf node known as the foot node, which is anchored to a word.
There are two types of basic trees in TAG: initial trees (often represented as 'α') and auxiliary trees ('β'). Initial trees represent basic valency relations, while auxiliary trees allow for recursion.[6] Auxiliary trees have the root (top) node and foot node labeled with the same symbol.
A derivation starts with an initial tree, combining via either substitution or adjunction. Substitution replaces a frontier node with another tree whose top node has the same label. The root/foot label of the auxiliary tree must match the label of the node at which it adjoins. Adjunction can thus have the effect of inserting an auxiliary tree into the center of another tree.[4]
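As a rough illustration of the two operations (a toy sketch, not a real TAG implementation; the tuple encoding of trees and the example trees are assumptions):

```python
# Toy TAG substitution and adjunction over trees encoded as (label, [children]).
# A leaf is (label, []); a foot node is marked with a trailing '*', e.g. 'VP*'.

def substitute(tree, target_label, initial_tree):
    """Replace a frontier (leaf) node labelled target_label with initial_tree,
    whose root carries the same label."""
    label, children = tree
    if not children and label == target_label:
        return initial_tree
    return (label, [substitute(c, target_label, initial_tree) for c in children])

def adjoin(tree, target_label, auxiliary_tree):
    """Splice auxiliary_tree in at an interior node labelled target_label;
    the displaced subtree moves down to the auxiliary tree's foot node."""
    label, children = tree
    if children and label == target_label:
        return _plug_foot(auxiliary_tree, (label, children))
    return (label, [adjoin(c, target_label, auxiliary_tree) for c in children])

def _plug_foot(aux, displaced):
    label, children = aux
    if not children and label.endswith("*"):   # found the foot node
        return displaced
    return (label, [_plug_foot(c, displaced) for c in children])

# Initial tree for "John sleeps" and an auxiliary VP tree for "deeply".
alpha = ("S", [("NP", [("John", [])]),
               ("VP", [("V", [("sleeps", [])])])])
beta = ("VP", [("VP*", []), ("Adv", [("deeply", [])])])

# Adjoining beta at the VP node yields a tree for "John sleeps deeply".
print(adjoin(alpha, "VP", beta))
```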
Other variants of TAG allow multi-component trees, trees with multiple foot nodes, and other extensions.
Tree-adjoining grammars are more powerful (in terms of weak generative capacity) than context-free grammars, but less powerful than linear context-free rewriting systems,[7] indexed[note 1] or context-sensitive grammars.
A TAG can describe the language of squares (in which some arbitrary string is repeated), and the language { aⁿbⁿcⁿdⁿ | 1 ≤ n }. This type of processing can be represented by an embedded pushdown automaton.
Languages with cubes (i.e. triplicated strings) or with more than four distinct character strings of equal length cannot be generated by tree-adjoining grammars.
For these reasons, tree-adjoining grammars are often described as mildly context-sensitive.
These grammar classes are conjectured to be powerful enough to model natural languages while remaining efficiently parsable in the general case.[8]
Vijay-Shanker and Weir (1994)[9] demonstrate that linear indexed grammars, combinatory categorial grammar, tree-adjoining grammars, and head grammars are weakly equivalent formalisms, in that they all define the same string languages.
Lexicalized tree-adjoining grammars (LTAG) are a variant of TAG in which each elementary tree (initial or auxiliary) is associated with a lexical item. A lexicalized grammar for English has been developed by the XTAG Research Group of the Institute for Research in Cognitive Science at the University of Pennsylvania.[5]
|
https://en.wikipedia.org/wiki/Tree-adjoining_grammar
|
In linguistics, co-occurrence or cooccurrence is an above-chance frequency of ordered occurrence of two adjacent terms in a text corpus. Co-occurrence in this linguistic sense can be interpreted as an indicator of semantic proximity or an idiomatic expression. Corpus linguistics and its statistical analyses reveal patterns of co-occurrence within a language and make it possible to work out typical collocations for its lexical items. A co-occurrence restriction is identified when linguistic elements never occur together. Analysis of these restrictions can lead to discoveries about the structure and development of a language.[1]
Co-occurrence can be seen as an extension of word counting in higher dimensions. It can be quantitatively described using measures such as correlation or mutual information.
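As a rough sketch of one such measure (illustrative only; the toy corpus and function are assumptions, not from the article), pointwise mutual information over adjacent word pairs can be computed as:

```python
# Quantifying co-occurrence of adjacent words with pointwise mutual information (PMI).
import math
from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

unigrams = Counter(corpus)                    # single-word counts
bigrams = Counter(zip(corpus, corpus[1:]))    # adjacent-pair counts
n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

def pmi(w1, w2):
    """PMI(w1, w2) = log2( P(w1, w2) / (P(w1) * P(w2)) )."""
    p_joint = bigrams[(w1, w2)] / n_bi
    return math.log2(p_joint / ((unigrams[w1] / n_uni) * (unigrams[w2] / n_uni)))

print(round(pmi("the", "cat"), 2))   # positive: "the cat" co-occurs above chance
```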
|
https://en.wikipedia.org/wiki/Co-occurrence
|
In linguistics, statistical semantics applies the methods of statistics to the problem of determining the meaning of words or phrases, ideally through unsupervised learning, to a degree of precision at least sufficient for the purpose of information retrieval.
The term statistical semantics was first used by Warren Weaver in his well-known paper on machine translation.[1] He argued that word-sense disambiguation for machine translation should be based on the co-occurrence frequency of the context words near a given target word. The underlying assumption that "a word is characterized by the company it keeps" was advocated by J. R. Firth.[2] This assumption is known in linguistics as the distributional hypothesis.[3] Emile Delavenay defined statistical semantics as the "statistical study of the meanings of words and their frequency and order of recurrence".[4] Furnas et al. (1983) is frequently cited as a foundational contribution to statistical semantics.[5] An early success in the field was latent semantic analysis.
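As a rough sketch of the idea behind latent semantic analysis (illustrative only; the toy documents and the chosen rank are assumptions):

```python
# Latent semantic analysis in miniature: build a word-by-document count matrix,
# truncate its SVD, and compare words by cosine similarity in the reduced space.
import numpy as np

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "stocks fell on the market"]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

counts = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        counts[index[w], j] += 1

# Rank-2 truncated SVD gives each word a dense 2-dimensional vector.
u, s, vt = np.linalg.svd(counts, full_matrices=False)
word_vecs = u[:, :2] * s[:2]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(word_vecs[index["cat"]], word_vecs[index["dog"]]))     # words from similar documents
print(cosine(word_vecs[index["cat"]], word_vecs[index["stocks"]]))  # words from dissimilar documents
```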
Research in statistical semantics has resulted in a wide variety of algorithms that use the distributional hypothesis to discover many aspects of semantics, by applying statistical techniques to large corpora:
Statistical semantics focuses on the meanings of common words and the relations between common words, unlike text mining, which tends to focus on whole documents, document collections, or named entities (names of people, places, and organizations). Statistical semantics is a subfield of computational semantics, which is in turn a subfield of computational linguistics and natural language processing.
Many of the applications of statistical semantics (listed above) can also be addressed by lexicon-based algorithms, instead of the corpus-based algorithms of statistical semantics. One advantage of corpus-based algorithms is that they are typically not as labour-intensive as lexicon-based algorithms. Another advantage is that they are usually easier to adapt to new languages, or to noisier new text types such as social media, than lexicon-based algorithms are.[21] However, the best performance on an application is often achieved by combining the two approaches.[22]
|
https://en.wikipedia.org/wiki/Statistical_semantics
|
Anki (US: /ˈɑːŋki/, UK: /ˈæŋki/; Japanese: [aŋki]) is a free and open-source flashcard program. It uses techniques from cognitive science such as active recall testing and spaced repetition to aid the user in memorization.[4][5] The name comes from the Japanese word for "memorization" (暗記).[6]
The SM-2 algorithm, created for SuperMemo in the late 1980s, has historically formed the basis of the spaced repetition methods employed in the program. Anki's implementation of the algorithm has been modified to allow priorities on cards and to show flashcards in order of their urgency. Anki 23.10+ also has a native implementation of the Free Spaced Repetition Scheduler (FSRS) algorithm, which allows for more optimal spacing of card repetitions.[7]
Anki is content-agnostic, and the cards are presented using HTML and may include text, images, sounds, videos,[8] and LaTeX equations. The decks of cards, along with the user's statistics, are stored in the open SQLite format.
Cards are generated from information stored as "notes". Notes are analogous to database entries and can have an arbitrary number of fields. For example, with respect to learning a language, a note may have the following fields and example entries:
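For instance (an illustrative entry, not the original table), a Japanese-vocabulary note might have an Expression field ("食べる"), a Pronunciation field ("たべる, taberu"), and a Meaning field ("to eat").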
This example illustrates what some programs call a three-sided flashcard, but Anki's model is more general and allows any number of fields to be combined in various cards.
The user can design cards that test the information contained in each note. One card may have a question (expression) and an answer (pronunciation, meaning).
By keeping the separate cards linked to the same fact, spelling mistakes can be adjusted against all cards at the same time, and Anki can ensure that related cards are not shown in too short a spacing.
A special note type allows the generation of cloze deletion cards. In Anki 1.2.x, those were ordinary cards with cloze markup added using a tool in the fact editor.
Anki supports synchronization with a free and proprietary online service called AnkiWeb.[9] This allows users to keep decks synchronized across multiple computers and to study online or on a cell phone.
There also is a third-party open-source (AGPLv3) AnkiWeb alternative, called anki-sync-server,[10] which users can run on their own local computers or servers.
Anki 2.1.57+ includes a built-in sync server. Advanced users who cannot or do not wish to use AnkiWeb can use this sync server instead of AnkiWeb.[11]
Anki can automatically fill in the reading of Japanese and Chinese text. Since version 0.9.9.8.2, these features are in separate plug-ins.
More than 1600 add-ons for Anki are available,[12] often written by third-party developers.[13] They provide support for speech synthesis, enhanced user statistics, image occlusion, incremental reading, more efficient editing and creation of cards through batch editing, modifying the GUI, simplifying import of flashcards from other digital sources, adding an element of gamification,[14] etc.
While Anki's user manual encourages the creation of one's own decks for most material, there is still a large and active database of shared decks that users can download and use.[15] Available decks range from foreign-language decks (often constructed with frequency tables) to geography, physics, biology, chemistry and more. Various medical science decks, often made by multiple users in collaboration, are also available.[16]
Anki's current scheduling algorithm is derived from SM-2 (an older version of the SuperMemo algorithm), though the algorithm has been significantly changed from SM-2 and is also far more configurable.[7] One of the most apparent differences is that while SuperMemo provides users a 6-point grading system (0 through 5, inclusive), Anki only provides at most 4 grades (again, hard, good, and easy). Anki also has significantly changed how review intervals grow and shrink (making many of these aspects of the scheduler configurable through deck options), though the core algorithm is still based on SM-2's concept of ease factors as the primary mechanism of evolving card review intervals.
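For reference, a minimal sketch of the textbook SM-2 update described above (the classic SuperMemo-2 rule, not Anki's modified implementation):

```python
# Classic SM-2 review update: quality q is 0-5; intervals grow via the ease factor.
def sm2_update(quality, repetitions, ease_factor, interval_days):
    """Return (repetitions, ease_factor, interval_days) after one review."""
    if quality < 3:                       # failed recall: start the card over
        return 0, ease_factor, 1
    # Ease factor drifts with answer quality, floored at 1.3.
    ease_factor += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    ease_factor = max(1.3, ease_factor)
    repetitions += 1
    if repetitions == 1:
        interval_days = 1
    elif repetitions == 2:
        interval_days = 6
    else:
        interval_days = round(interval_days * ease_factor)
    return repetitions, ease_factor, interval_days

# Example: a new card answered "good" (q = 4) three times in a row.
state = (0, 2.5, 0)
for q in (4, 4, 4):
    state = sm2_update(q, *state)
    print(state)   # intervals grow: 1 day, 6 days, then 6 * 2.5 = 15 days
```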
Anki was originally based on the SM-5 algorithm, but the implementation was found to have seemingly incorrect behaviour (harder cards would have their intervals grow more quickly than easier cards in certain circumstances), leading the authors to switch Anki's algorithm to SM-2 (which was further evolved into the modern Anki algorithm).[7] At the time, this led Elmes to claim that SM-5 and later algorithms were flawed,[17] which was strongly rebutted by Piotr Woźniak, the author of SuperMemo.[18] Since then, Elmes has clarified[7] that it is possible that the flaw was due to a bug in their implementation of SM-5 (the SuperMemo website does not describe SM-5 in complete detail), but added that due to licensing requirements, Anki will not use any newer versions of the SuperMemo algorithm. The prospect of community-funded licensing of newer SuperMemo algorithms is often discussed among users. However, there exists a greater focus on the development of the software itself and its features. The latest SuperMemo algorithm in 2019 is SM-18.[19]
Some Anki users who have experimented with the Anki algorithm and its settings have published configuration recommendations,[20] made add-ons to modify Anki's algorithm,[21] or developed their own separate software.
In 2023 (version 23.10) the Free Spaced Repetition Scheduler (FSRS), a new scheduling algorithm, was integrated into Anki as an optional feature.[22]
FSRS is based on a variant of the DSR (Difficulty, Stability, Retrievability) model, which is used to predict memory states.
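A rough sketch of the DSR idea (illustrative only; the power-law forgetting curve below follows the form used by early FSRS versions and is assumed here, not necessarily the current parameterization):

```python
# DSR sketch: retrievability R decays as elapsed time t grows relative to
# stability S, and the next interval is chosen so predicted R hits a target.
def retrievability(t_days, stability_days):
    """Assumed power-law forgetting curve R = (1 + t / (9 * S)) ** -1,
    where stability S is the interval at which R drops to 0.9."""
    return (1 + t_days / (9 * stability_days)) ** -1

def next_interval(stability_days, desired_retention=0.9):
    """Solve retrievability(t, S) = desired_retention for t."""
    return 9 * stability_days * (1 / desired_retention - 1)

print(retrievability(10, 10))   # 0.9 by construction
print(next_interval(10))        # 10 days at the default 90% retention target
```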
The default FSRS parameters are based on almost 700 million reviews from 20 thousand users and, according to benchmarks, FSRS is more accurate than the standard SM-2 algorithm, leading to fewer necessary reviews for the same retention rate.[23][24]
The following smartphone/tablet and Web clients are available as companions to the desktop version:[25][26][27]
The flashcards and learning progress can be synchronized both ways with Anki using AnkiWeb. With AnkiDroid it is possible to have the flashcards read in several languages using text-to-speech (TTS). If a language does not exist in the Android TTS engine (e.g. Russian in the Android version Ice Cream Sandwich), a different TTS engine such as SVOX TTS Classic can be used. AnkiDroid has also been used for other educational purposes. It is used as instructional media in Islamic Religious Education in Indonesia.[31]
Damien Elmes, the Australian programmer behind the app, originally created it for learning Japanese.[32][33] The oldest mention of Anki that the developer Damien Elmes could find in 2011 was dated 5 October 2006, which was thus declared Anki's birthdate.[34]
While Anki may primarily be used for language learning or a classroom setting, many have reported other uses for Anki: scientist Michael Nielsen uses it to remember complex topics in a fast-moving field,[39] while others are using it to remember memorable quotes, the faces of business partners or medical residents, or to remember business interviewing strategies.
In 2010, Roger Craig obtained the then-all-time record for single-day winnings on the quiz show Jeopardy![40] after using Anki to memorize a vast number of facts.[41]
A study in 2015 at Washington University School of Medicine found that 31% of students who responded to a medical education survey reported using Anki as a study resource; the same study found a positive relationship between the number of unique Anki cards studied and USMLE Step 1 scores in a multivariate analysis.[42] In the same year, another study showed that students had a one-point increase on their licensing exams for every 1,700 unique Anki flashcards they used.[43]
Another study in 2024 found that Anki was commonly used among American medical students: 86.2% of surveyed students reported some Anki use and 66.5% used it daily.[44] AnKing, an Anki deck developed by students at the University of Utah School of Medicine, aggregates information from multiple third-party resources and has become the primary method of USMLE Step 1 and Step 2 study for many students, having been downloaded over 300,000 times as of 2024.[45]
Anki offers user-made decks, which are commonly used in medical education and for learning a range of subjects including Chemistry, Biology, Geography, History, Law, Mathematics, Music, and Physics. User-made decks are also available for learning languages such as Arabic, Chinese, English, French, German, Hebrew, Japanese, Korean, Russian, and Spanish.[46]
|
https://en.wikipedia.org/wiki/Anki_(software)
|
Babbel GmbH, operating as Babbel,[4] is a German company that operates a subscription-based language-learning software and e-learning platform.
With 1000 employees, Babbel is headquartered in Berlin (Babbel GmbH) and has an office in New York City, operating as Babbel Inc.[5] Babbel's app is available for web, iOS and Android, offering lessons in 14 languages. The company develops its learning content in-house.[6]
The company was founded in August 2007 by Thomas Holl, Toine Diepstraten, Lorenz Heine and Markus Witte.[7][8] In January 2008, the language learning platform went online with community features as a free beta version.[9]
In March 2013, Babbel acquired San Francisco startup PlaySay Inc. to expand into the United States.[10][11] As part of the acquisition, PlaySay's founder and CEO joined Babbel as a strategic advisor.[12] Later that year, a third funding round led by Scottish Equity Partners raised another $22 million.[13][14] Other participants in this round include previous investors Reed Elsevier Ventures, Nokia Growth Partners,[15] and VC Fonds Technology Berlin.[16][17]
In 2019, co-founder Markus Witte stepped down as CEO and was replaced by Arne Schepker.[18] In March 2020, a works council was elected in the Berlin office.[19]
In March 2022, Babbel provided free access codes to Ukrainian refugees, allowing those with prior knowledge of languages offered by Babbel to learn relevant languages such as German, Polish, and English.[20] In February 2023, Babbel was awarded the "CSR (Corporate Social Responsibility) Silver Anthem Award for Humanitarian Action & Services" for its efforts to help displaced people affected by the Russian invasion of Ukraine.[21] At the time, over 500,000 Ukrainians accessed Babbel courses under the program.[22]
In 2023, Babbel acquired Toucan (a language-learning browser extension). At the time, it had around 1000 full-time employees and freelancers.[5] As of October 2024, Markus Witte has returned as CEO.[23]
|
https://en.wikipedia.org/wiki/Babbel
|
The Alpheios Project is an open source initiative originally focused on developing software to facilitate reading Latin and ancient Greek. Dictionaries, grammars and inflection tables were combined in a set of web-based tools to provide comprehensive reading support for scholars, students and independent readers. The tools were implemented as browser add-ons so that they could be used on any web site or any page that a user might create in Unicoded HTML.
In collaboration with the Perseus Digital Library, the goals of the Alpheios Project were subsequently broadened to combine reading support with language learning. Annotation and editing tools were added to help users contribute to the development of new resources, such as enhanced texts that have been syntactically annotated or aligned with translations.
The Alpheios tools are designed modularly to encourage the addition of other languages that have the necessary digital resources, such as morphological analyzers and dictionaries. In addition to Latin and ancient Greek, Alpheios tools have been extended to Arabic and Chinese.
The Alpheios Project is a non-profit (501c3) initiative. The software is open source and resides on Sourceforge.com. The Alpheios software is released as GPL 3.0, and texts and data as CC-by-SA.
The Alpheios Project was established in 2007 by Mark Nelson, the founder of the commercial software company Ovid Technologies, which he started after writing a search engine for medical literature that became widely popular in medical libraries and research facilities (Strauch, 1996). Nelson, who holds an MA in English literature from Columbia University, sold the company to Wolters Kluwer in 1999 (Quint, 1998). Nelson created Alpheios by recruiting several developers and programmers from his previous company, defining the project's initial goals and funding its first three years of operation. In 2008 he also provided the initial funding for The Perseus Treebank of Ancient Greek, which has subsequently been crowd-sourced.
In 2011, the Perseus Project hired key Alpheios staff and the activities of the projects were extensively integrated, although Alpheios remains an independent organization focused on developing adaptive reading and learning tools that can provide formative assessment customized to the individual user's special abilities and goals, including the study of specific authors or texts.
To date, all Alpheios applications, enhanced texts and code have been provided without any fees or licenses. A separate Alpheios LLC provides commercial consultation on customization and extension of the Alpheios tools.
The reading tools also contain some pedagogical features typically found in e-tutors, such as morphological and lexical quizzes and games, and the automatic comparison of the user's own claims about vocabulary proficiency with their recorded use of the dictionary resources.
A number of the Arabic texts that Perseus has digitized are available directly from Alpheios, including:
Supports manual diagramming of sentences in any language that has spaces or punctuation between its words, annotating the nodes and arcs as desired, and exporting the result as a reusable XML document.[1]
Supports manual word or phrase alignment of a text in any language with its translation into any other language, and export as a reusable XML document.[2]
Alpheios encourages participation by interested individuals whether or not they have current academic affiliations.[13]
|
https://en.wikipedia.org/wiki/The_Alpheios_Project
|
Community language learning (CLL) is a language-teaching approach[1] focused on group-interest learning.
It is based on the counselling approach, in which the teacher acts as a counselor and a paraphraser, while the learner is seen as a client and collaborator.
The CLL approach was developed by Charles Arthur Curran, a Jesuit priest,[2] professor of psychology at Loyola University Chicago, and counseling specialist.[3]
According to Curran, a counselor helps a client understand his or her own problems better by "capturing the essence of the client's concern ... [and] relating [the client's] affect to cognition"; in effect, understanding the client and responding in a detached yet considerate manner.[4]
These types of communities have recently arisen with the explosion of educational resources for language learning on the Web.
|
https://en.wikipedia.org/wiki/Community_language_learning
|
Duolingo, Inc.[b] is an American educational technology company that produces learning apps and provides language certification. Duolingo offers courses on 43 languages,[5] ranging from English, French, and Spanish to less commonly studied languages such as Welsh, Irish, and Navajo, and even constructed languages such as Klingon.[6] It also offers courses on music,[7] math, and chess.[8] The learning method incorporates gamification to motivate users with points, rewards and interactive lessons featuring spaced repetition.[9] The app promotes short, daily lessons for consistent practice.
Duolingo also offers the Duolingo English Test, an online language assessment, and Duolingo ABC, a literacy app designed for children. The company follows a freemium model, with optional premium services like Super Duolingo and Duolingo Max, which are ad-free and provide additional features. Additionally, Duolingo runs Duo's Taqueria, a Mexican taco restaurant in Pittsburgh.
With over 130 million monthly active users, Duolingo is the most popular educational app in the world.[10][11][12] Over 10 million people have a Duolingo streak longer than a year.[13] In total, learners on Duolingo complete more than 13 billion exercises per week.[14] A systematic review of research on Duolingo from 2012 to 2020 found comparatively few studies on the platform's efficiency for language learning but identified several studies that reported relatively high user satisfaction, enjoyment, and positive perceptions of the app's effectiveness.[15] The company has also been recognized for its successful marketing tactics and strong brand engagement.[16][17]
The idea of Duolingo was formulated in 2009 by Carnegie Mellon University professor Luis von Ahn and his Swiss-born post-graduate student Severin Hacker.[18][19] Von Ahn had sold his second company, reCAPTCHA, to Google and, with Hacker, wanted to work on an education-related project.[20] Von Ahn stated that he saw how expensive it was for people in his community in Guatemala to learn English.[21][22] Hacker (co-founder and current CTO of Duolingo) believed that "free education will really change the world"[23] and wanted to provide an accessible means for doing so. He was recognized by the National Inventors Hall of Fame for his contributions to language learning and technological development.[24] The Duo mascot is a green owl because co-founder Severin Hacker hates the color green.[25]
The project was originally financed by von Ahn's MacArthur fellowship and a National Science Foundation grant.[26][27][28] The founders considered creating Duolingo as a nonprofit organization, but von Ahn judged this model unsustainable.[23] Its early revenue stream, a crowdsourced translation service, was replaced by a Duolingo English Test certification program, advertising, and subscriptions.[29][30]
In October 2011, Duolingo announced that it had raised $3.3 million from a Series A round of funding, led by Union Square Ventures, with participation from author Tim Ferriss and actor Ashton Kutcher's investing firm A-Grade Investments.[31] Duolingo launched a private beta on November 30, 2011, and accumulated a waiting list of more than 100,000 people by December 13.[32][33] It launched to the general public on June 19, 2012, at which point the waiting list had grown to around 500,000.[34][35]
In September 2012, Duolingo announced that it had raised a further $15 million from a Series B funding round led by New Enterprise Associates, with participation from Union Square Ventures.[36] In November 2012, Duolingo released an iPhone app,[37] followed by an Android app in May 2013, at which time Duolingo had around 3 million users.[38] By July 2013, it had grown to 5 million users and was rated the No. 1 free education app in the Google Play Store.[39]
In February 2014, Duolingo announced that it had raised $20 million from a Series C funding round led by Kleiner Perkins Caufield & Byers, with prior investors also participating.[40] At this time, it had 34 employees, and reported about 25 million registered users and 12.5 million active users;[40] it later reported a figure closer to 60 million users.[41]
In June 2015, Duolingo announced that it had raised $45 million from a Series D funding round led by Google Capital, bringing its total funding to $83.3 million. The round valued the company at around $470 million, with 100 million registered users globally.[29][41] In April 2016, it was reported that Duolingo had more than 18 million monthly users.[42][43]
In July 2017, Duolingo announced that it had raised $25 million in a Series E funding round led by Drive Capital, bringing its total funding to $108.3 million. The round valued Duolingo at $700 million, and the company reported passing 200 million registered users, with 25 million active users.[44] It was reported that Duolingo had 95 employees.[45] Funds from the Series E round would be directed toward creating initiatives such as a related educational flashcard app, TinyCards, and testbeds for initiatives related to reading and listening comprehension.[46] On August 1, 2018, Duolingo surpassed 300 million registered users.[47]
In December 2019, it was announced that Duolingo raised $30 million in a Series F funding round from Alphabet's investment company, CapitalG.[22] The round valued Duolingo at $1.5 billion. Duolingo reported 30 million active users at this time. The headcount at the company had increased to around two hundred, and new offices had been opened in Seattle, New York, and Beijing.[48] Duolingo planned to use the funds to develop new products and further expand its team in sectors like engineering, business development, design, curriculum and content creation, community outreach, and marketing.[49]
In October 2013, Duolingo launched a crowdsourced language incubator.[50] In March 2021, it announced that it would be ending its volunteer contributor program. The company said that language courses would instead be maintained and developed by professional linguists, aligning with CEFR standards.[51][non-primary source needed] On June 28, 2021, Duolingo filed for an initial public offering on NASDAQ under the ticker DUOL.[52] From August 2021 to June 2022, the Duolingo language learning app was removed from some app stores in China.[53]
In August 2022, Duolingo overhauled its interface, changing its course structure from a tree-like design, where users could choose from a range of lessons after completing previous ones, to a linear progression. This update has been criticized by users across social media outlets, such as Reddit and Twitter.[21] CEO Luis von Ahn stated that there were no plans to reverse the changes.[54] In October 2022, Duolingo acquired Detroit-based animation studio Gunner; it is the studio that produces art assets and animation for Duolingo and Duolingo ABC and its marketing campaigns.[citation needed]
In March 2023, Duolingo officially announced the planned Duolingo Max, a subscription tier above Super Duolingo, in their blog.[55] In October 2023, Duolingo released math and music courses in English and Spanish for iOS users.[56][57]
In January 2024, Duolingo fired some contractors and announced plans to replace them with AI.[58][59] The company acquired Detroit-based design studio Hobbes in March.[60]
CEFR-based language courses for learners of English, Spanish, French, Italian, Chinese (Mandarin), Japanese, Korean, Portuguese, and German are available for all users.[61] Additional courses are also available for speakers of English (Arabic, Czech, Danish, Dutch, Esperanto, Finnish, Greek, Haitian Creole, Hawaiian, Hebrew, High Valyrian, Hindi, Hungarian, Indonesian, Irish, Klingon, Latin, Navajo, Norwegian, Polish, Romanian, Russian, Scottish Gaelic, Swahili, Swedish, Turkish, Ukrainian, Vietnamese, Welsh, Yiddish, Zulu), Chinese (Chinese (Cantonese)), Arabic (Swedish), and Spanish (Catalan, Russian, Swedish).[needs update]
As of 2014, most of Duolingo's language learning features are free with advertising. Users can remove advertising by paying a subscription fee or promoting referral links.[62] The paid user program, Super Duolingo (formerly known as Duolingo Plus), offers unlimited retries and access to some additional types of lessons. It is otherwise identical to Duolingo for Schools.[63][64][non-primary source needed]
Duolingo Max is a subscription tier above Super Duolingo that adds additional functions using generative AI: Roleplay, an AI conversation partner; Explain My Answer, which breaks down the rules with a modified GPT-4 when the user makes a mistake; and Video Call, where users can have a video chat with one of the characters, which currently includes only Lily. The tier is intended to provide immersion through conversation.[65][non-primary source needed]
Duolingo for Schools is designed to help teachers use Duolingo in their classrooms. It allows teachers to create classrooms, assign lessons, track student progress, and personalize learning.[66][non-primary source needed]
The Duolingo English Test (DET) is an online English proficiency test that measures proficiency in reading, writing, speaking, and listening in English. It is a computer-based test scored on a scale of 10–160, with scores above 120 considered to indicate English proficiency. The test's questions algorithmically adjust to the test-taker's ability level. The test's certificate is reportedly accepted by over 5,500 programs internationally,[67] albeit with exceptions.[68]
Duolingo Math is an app course for learning elementary mathematics. It was announced on YouTube on August 27, 2022.[69]
On October 11, 2023, the company released Duolingo Music,[57] a new platform within the existing app that provides basic music learning through piano and sheet music lessons.[70][71]
Duolingo introduced chess lessons in beta in April 2025, with an initial rollout planned for iOS in English by May. The lessons are structured around the Elo rating system, gradually increasing in difficulty to match the user's skill level. Learners can play mini-matches or full games against Duolingo's virtual chess coach, Oscar, whose difficulty also scales with progress. At launch, users are not able to play against each other.[72][73]
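For reference, the standard Elo expected-score and update rule that such a system builds on (a generic sketch, not Duolingo's internal code):

```python
# Standard Elo rating update: the expected score depends on the rating gap,
# and the rating moves toward the actual result scaled by k.
def elo_update(rating_a, rating_b, score_a, k=32):
    """score_a is 1 for a win, 0.5 for a draw, 0 for a loss by player A."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    return rating_a + k * (score_a - expected_a)

print(elo_update(1200, 1400, 1))   # an upset win against a stronger opponent gains about 24 points
```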
Duolingo ABC is a free app designed for young children to learn letters, their sounds, phonics, and other early reading concepts. Released in 2020, it does not contain ads or in-app purchases. As of April 2024, iOS and Android versions are available, but only in English.[8][74]
On Duolingo, learners learn by doing, engaging with the course material.[75] Lessons are designed to be brief, allowing users to learn in manageable chunks.[76][77] Duolingo uses a gamified approach to learning, with lessons that incorporate translating, interactive exercises, quizzes, and stories.[78] It also uses an algorithm that adapts to each learner and can provide personalized feedback and recommendations.
Duolingo provides a competitive space,[79] such as leagues, where people can compete with randomly selected worldwide player groupings of up to 30 users. The leagues, in ascending order, are Bronze, Silver, Gold, Sapphire, Ruby, Emerald, Amethyst, Pearl, Obsidian, and Diamond. Rankings in leagues are determined by the number of experience points earned in a week. Badges in Duolingo represent achievements earned from completing specific objectives.[80] Users can also create their own avatars.[81][82]
Any lesson completed in Duolingo will count towards the user's daily streak.[83] The daily streak's visual symbol in the app is fire. Duolingo's "Friend Streak" lets users maintain streaks with up to five friends.[84] Streaks encourage consistent daily practice and help build a habit of regular learning.
The app has a personalized bandit algorithm system (later the A/B-tested variant, the recovering difference softmax algorithm) that determines the daily notification that will be sent out to the user.[85]
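As a rough sketch of the general softmax-bandit idea (the template names and reward estimates below are hypothetical; this is not Duolingo's implementation):

```python
# A softmax bandit over notification templates: arms with higher estimated
# response rates are sampled more often, but every arm keeps some probability.
import math
import random

templates = ["streak_reminder", "friendly_nudge", "new_content_teaser"]  # hypothetical arms
estimated_reward = {"streak_reminder": 0.12, "friendly_nudge": 0.09, "new_content_teaser": 0.15}

def softmax_choice(rewards, temperature=0.05):
    """Sample an arm with probability proportional to exp(estimated reward / temperature)."""
    weights = [math.exp(rewards[t] / temperature) for t in templates]
    return random.choices(templates, weights=weights, k=1)[0]

print(softmax_choice(estimated_reward))   # higher-reward templates are chosen more often
```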
The Duolingo Score is an estimate of users' proficiency in the language they are learning in CEFR-aligned courses. It provides a granular assessment of what a student has learned and can do with the language.[86] The DET uses a similar scoring system. The most developed CEFR-aligned courses (French, English and Spanish) cover Duolingo Scores from 0 to 130.
Duolingo operates on a freemium business model, offering free access to its learning platforms with ads. Revenue is primarily generated through subscriptions, which remove ads and provide other perks like unlimited hearts and generative AI. The app also generates income from in-app purchases of virtual currency (Gems) and power-ups that enhance the learning experience. Another key revenue stream is the Duolingo English Test (DET), a low-cost English proficiency test.[87]
In April 2020, it passed one million paid subscribers;[88]it reached 2.9 million in March 2022,[89]and 4.8 million at the end of March 2023.[90]As of June 2024 Duolingo has 8 million paying subscribers.
Duolingo had revenue of $531 million in 2023, compared to $250.77 million in 2021,[91]$36 million in 2018,[92]$13 million in 2017,[47]and $1 million in 2016. In May 2022, it was reported that 6.8% of its monthly active users paid for the ad-free version of the app.[93]
A 2017 study found no significant difference between elementary students learning Spanish through the "gamification" of the Duolingo app and those learning in classroom environments, with both groups demonstrating a similar increase in achievements andself-efficacy.[94]
Duolingo's occasional use of 'erratic' phrases, such as "The bride is a woman and the groom is a hedgehog" or "The man eats ice cream with mustard",[95] is reportedly derived from research published in 2018 by psychologists at Ghent University in Belgium,[96] which concluded that such "semantically unpredictable sentences" were more effective for language learning than conventional and predictable phrases, based on the concept of "reward prediction errors", in which unexpected or surprising outcomes are more rewarding and thus encourage further learning.[97][95]
A 2022 study on adults using Duolingo as their only language learning tool, published in the journal Foreign Language Annals, found that participants who completed a course had similar reading and listening proficiency to university students after four semesters of study, concluding that Duolingo could be an effective tool for language learning.[98] Another 2022 study of Malaysian students learning French, published by the National University of Malaysia Press, found that the app facilitated the acquisition of vocabulary and concluded that it was "well suited" for beginners in this regard.[99]
According to Duolingo's own 2021 study, five sections of the app are roughly equivalent to five semesters of university instruction, and Duolingo is an "effective tool [...] at an intermediate level".[100][101] A 2023 study funded by Duolingo concluded that Duolingo English learners did not significantly learn much grammar.[102] Duolingo English learners in Colombia and Spain were found to gain significantly more proficiency than students in a classroom, except for listening.[103]
Some language professionals have criticized the app for its limitations and gamified design.[104] Players have also reported that "gamification" has led to cheating, hacking, and incentivized game strategies that conflict with actual learning.[105]
In March 2022, Duolingo forums were discontinued,[106] and sentence discussions became read-only.[107] The change has been criticized on some social media sites.
In January 2023, Duolingo's data on over 2.6 million users' usernames, names, and phone numbers was sold on a hacker forum. Duolingo later stated that they would investigate the "dark web post".[108] They concluded that the data was obtained by scraping publicly available information through an exposed application programming interface (API).[109][110] Duolingo's spokesperson stated that the API is intentionally publicly visible.
Since the end of October 2023, Duolingo has stopped updating its Welsh course to "focus on languages in higher demand". Some users criticized this decision because it came at the expense of learners of a language with limited resources on the market and the potential halting of the Welsh Government's "Cymraeg 2050" strategy to promote Welsh language learning.[111][112]
Duolingo courses vary greatly in quality. While most popular language courses like Spanish or French are well developed, other courses for less studied languages like Ukrainian cover very little grammar and vocabulary.[113][114]
In 2025, CEO Luis von Ahn announced Duolingo would become an "AI-first" company and would be replacing contracted workers with artificial intelligence through automation.[115][116] This decision was met with public outcry, with many users declaring that they had ended their learning streaks in protest.[117]
In 2013, Apple chose Duolingo as its iPhone App of the Year, a first for an educational application.[118] That year, Duolingo ranked No. 7 on Fast Company's "The World's Most Innovative Companies: Education Honorees" list "for crowdsourcing web translation by turning it into a free language-learning program".[119][120][121] Duolingo won Best Education Startup at the 2014 Crunchies,[122][123] and was the most downloaded 'education app' in Google Play in 2013 and 2014.[124] In July 2020, PCMag named it "The Best Free Language Learning App".[125]
As a company, Duolingo has likewise won several awards and recognitions. In 2015, it was announced as that year's Index Award winner in the Play & Learning category by The Index Project.[126] It won Inc. magazine's Best Workplaces 2018,[127] made Entrepreneur magazine's Top Company Culture List 2018,[128] was among CNBC's "Disruptor 50" lists for 2018 and 2019,[129][130][131] and was ranked as one of TIME magazine's 50 Genius Companies.[132] Duolingo was named one of Forbes's "Next Billion-Dollar Startups 2019".[133] In 2023, Duolingo won a Design Award during the 2023 edition of the Apple Design Awards.[134]
Duolingo has brand characters that are used for engagement and creating storylines.[135][136] The main characters include:[137]
All characters mentioned above are human, with the exceptions of Duo, who is an owl, and Falstaff, who is a bear.
Due to the app's frequent reminder notifications, Duolingo's mascot, a green cartoon owl named Duo, has been the subject of Internet memes antagonising him, with the character often depicted stalking or threatening users if they do not continue using the app.[151][152]
Duolingo has leaned into its online reputation and has adjusted its social media and marketing strategies accordingly.[153] Acknowledging the meme, Duolingo released a video on April Fools' Day 2019, depicting a facetious new feature called "Duolingo Push". In the video, users of "Duolingo Push" are reminded to use the app by Duo himself (depicted by a person wearing a Duolingo mascot costume), who stares at and follows them until they comply.[154][155] It was also acknowledged during Duolingo's 2022 April Fools' Day video, "Lawyer Fights Duolingo Owl for $2,700,000", where a fictitious law firm fights for those that have been harmed by Duolingo's owl mascot.[156] This was further referenced by the company in its 2024 April Fools' Day skit "Duo on Ice", in which the owl, in a mix of Spanish and English, admitted to having an appetite for human flesh and said that, if the user failed to continue their streak, it would "eat their head like a praying mantis."[157] In February 2020, as part of the company's partnership with the developers of the video game Angry Birds 2, a skit depicting Duo and the red Angry Bird attacking a crowd was uploaded.[158]
Duolingo has effectively engaged with Generation Alpha through its YouTube shorts, featuring global meme trends and content like songs, workplace insights, and humor, including elements of dark comedy and "kidnapping children" as a joke.[159]
In November 2019, Saturday Night Live parodied Duolingo in a sketch where adults learned to communicate with children by using a fictitious course called "Duolingo for Talking to Children".[160]
The 2023 film Barbie contains a running gag where the husband of disgruntled Mattel employee Gloria uses Duolingo to learn Spanish, Gloria's native language.
Duo's Taqueria is a taqueria (a Mexican taco restaurant) in Pittsburgh, Pennsylvania, operated by Duolingo. The taqueria offers a variety of authentic Mexican tacos and other traditional dishes.[161] The restaurant encourages patrons to order in Spanish, aligning with Duolingo's mission of making language learning fun and accessible. Duolingo's taco shop brought in $700,000 in 2023.[162]
Duolingo is headquartered in Pittsburgh, Pennsylvania, and has offices in Seattle, New York,[163] Detroit,[164] Beijing, and Berlin.[165]
In 2024, Duolingo opened a new office in New York City, featuring an art gallery in which the company's characters are depicted in the style of famous historical paintings. The gallery showcases moving images of Duo and other characters in a range of artistic styles.[166][167]
Duolingo employs around 830 people.[168][169]
|
https://en.wikipedia.org/wiki/Duolingo
|
Intelligent Computer Assisted Language Learning (ICALL), or Intelligent Computer Assisted Language Instruction (ICALI), involves the application of computing technologies to the teaching and learning of second or foreign languages.[1][2] ICALL combines artificial intelligence with Computer Assisted Language Learning (CALL) systems to provide software that interacts intelligently with students, responding flexibly and dynamically to each student's learning progress.[2][3][4]
Natural language processing (NLP) and intelligent tutoring systems (ITS) are prominent computing technologies in artificial intelligence that inform and influence ICALL.[5][6] Other computing technologies applied to ICALL include knowledge representation (KR), automatic speech recognition (ASR), neural networks, user modelling, and expert systems. In relation to language learning, ICALL utilizes linguistic theory and theories of second-language acquisition in its pedagogy.[5][6]
ICALL developed from the field of Computer Assisted Language Learning (CALL) in the late 1970s[1] and early 1980s.[5] ICALL is a smaller field, and not yet fully formed.
Following the pattern of most language learning technologies, English is a prominent language featured in ICALL technology.[7] ICALL programs have also been developed in languages such as German,[8] Japanese,[8] Portuguese,[8] Mandarin Chinese,[9] and Arabic.[7] ICALL systems are also contributing to the learning of languages that are not as accessible to learn (due to a lesser amount of language resources), or less commonly learned languages, such as Cree.[3]
Intelligent CALL is sometimes called parser-based CALL, due to the heavy reliance that ICALL has on parsing.[5] An example of the function of parsing in an ICALL software is a parser detecting errors in the syntax and morphology of sentences freely generated by student users. After using parsing to find any errors, ICALL can provide corrective feedback to students.[5] Parsing is considered a task of natural language processing.
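As a toy sketch of this idea (illustrative only and far shallower than a real ICALL parser; the word lists and the agreement rule are assumptions):

```python
# A toy corrective-feedback routine: shallowly "parse" a learner's sentence as
# subject + verb and flag a subject-verb agreement error.
import re

THIRD_PERSON_SUBJECTS = {"he", "she", "it"}
PLURAL_SUBJECTS = {"they", "we", "you", "i"}

def agreement_feedback(sentence):
    """Assume the sentence begins with subject + verb (a deliberately crude heuristic)."""
    words = re.findall(r"[a-z]+", sentence.lower())
    if len(words) < 2:
        return "Could not analyse the sentence."
    subject, verb = words[0], words[1]
    if subject in THIRD_PERSON_SUBJECTS and not verb.endswith("s"):
        return f"Agreement error: with '{subject}', try '{verb}s' instead of '{verb}'."
    if subject in PLURAL_SUBJECTS and verb.endswith("s"):
        return f"Agreement error: with '{subject}', try '{verb[:-1]}' instead of '{verb}'."
    return "No agreement error detected."

print(agreement_feedback("She walk to school"))    # flags the missing -s
print(agreement_feedback("They walk to school"))   # accepted
```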
The ability to receive feedback on freely produced, unique sentences places ICALL in a more engaging, teacher-like role. If students are struggling in certain areas, some ICALL systems will generate new sentences or questions in those areas, giving students more practice.[5] In essence, ICALL is meant to adapt intelligently to students' learning needs as they progress, which often means (partially or wholly) fulfilling a tutor or teacher role.[8][10] Programs that attempt to fulfill this role are categorized as tutorial ICALL.[1]
Non-tutorial ICALL systems include various language tools and dialogue systems,[1] such as a digital interlocutor.[2] Programs for automatically evaluating student-written essays have also been developed,[5] such as the E-rater.[11]
ICALL technology still has many issues and limitations, owing to how recently artificial intelligence has been integrated into CALL systems and to the complexity of the task.[1] To resolve these issues, artificially intelligent educational software should encompass as much as possible of the linguistic knowledge and pedagogy of a language teacher.[10] This includes tracking student learning, giving feedback, creating new, challenging material in response to student needs, understanding effective teaching strategies, and detecting linguistic errors (in grammar, spelling, semantics, morphology, and so on).[5][10]
Additionally, ICALL systems take a long time to develop, and developers must consult professionals in many disciplines.[10] Programming ICALL software is necessarily a multi-disciplinary project.[8]
Further research and development in ICALL will benefit fields such as applied linguistics, computational linguistics, artificial intelligence, and educational technology. ICALL will also expand current knowledge about second language acquisition.[5] Despite its limitations, ICALL is a worthwhile field, especially as technology progresses.[8]
|
https://en.wikipedia.org/wiki/Intelligent_computer-assisted_language_instruction
|
Language teaching, like other educational activities, may employ specialized vocabulary and word use. This list is a glossary for English language learning and teaching using the communicative approach.
|
https://en.wikipedia.org/wiki/Glossary_of_language_teaching_terms_and_ideas
|
Language education refers to the processes and practices of teaching a second or foreign language. Its study reflects interdisciplinary approaches, usually including some applied linguistics.[1][2] There are four main learning categories for language education: communicative competencies, proficiencies, cross-cultural experiences, and multiple literacies.[3]
Increasing globalization has created a great need for people in the workforce who can communicate in multiple languages. Common languages are used in areas such as trade, tourism, diplomacy, technology, media, translation, interpretation and science. Many countries, such as Korea (Kim Yeong-seo, 2009), Japan (Kubota, 1998) and China (Kirkpatrick & Zhichang, 2002), frame education policies to teach at least one foreign language at the primary and secondary school levels. Furthermore, some countries have more than one official language; such countries include India, Singapore, Malaysia, Pakistan, and the Philippines. According to GAO (2010), China has recently been putting importance on foreign language learning, especially English.
Ancient learners seem to have started by reading, memorizing and reciting little stories and dialogues that provided basic vocabulary and grammar in naturalistic contexts. These texts emphasized coherent passages rather than the isolated sentences that modern learners often practice with. They covered topics such as getting dressed in the morning (and how to manage the slaves who helped with that task), going to school (and evading punishment for not having been there yesterday), visiting a sick friend (and how to find an individual unit in a Roman apartment block), trading insults (and how to concede a fight graciously), or getting a new job (a piece of cake if you have studied with me, an ancient teacher assured his students mendaciously). The texts were presented bilingually in two narrow columns, the language being learned on the left and the language already known on the right, with the columns matching line for line: each line was effectively a glossary, while each column was a text.[4]
Although the need to learn foreign languages is almost as old as human history itself, the origins of modern language education lie in the study and teaching of Latin in the 17th century. In the Ancient Near East, Akkadian was the language of diplomacy, as in the Amarna letters.[5] For many centuries, Latin had been the dominant language of education, commerce, religion, and government in much of the Western world, but by the end of the 16th century it had largely been displaced by French, Italian, and English. John Amos Comenius was one of many people who tried to reverse this trend. He composed a complete course for learning Latin, covering the entire school curriculum, culminating in his Opera Didactica Omnia of 1657.
In this work, Comenius also outlined his theory of language acquisition. He is one of the first theorists to write systematically about how languages are learned and about pedagogical methodology for language acquisition.[6] He held that language acquisition must be allied with sensation and experience. Teaching must be oral. The schoolroom should have models of things, and failing that, pictures of them. As a result, he also published the world's first illustrated children's book, Orbis sensualium pictus. The study of Latin diminished from the study of a living language to be used in the real world to a subject in the school curriculum. Such decline brought about a new justification for its study. It was then claimed that the study of Latin developed intellectual ability, and the study of Latin grammar became an end in and of itself.
"Grammar schools" from the 16th to 18th centuries focused on teaching the grammatical aspects of Classical Latin. Advanced students continued grammar study with the addition of rhetoric.[7]
The study of modern languages did not become part of the curriculum of European schools until the 18th century. Based on the purely academic study of Latin, students of modern languages did much of the same exercises, studying grammatical rules and translating abstract sentences. Oral work was minimal, and students were instead required to memorize grammatical rules and apply these to decode written texts in the target language. This tradition-inspired method became known as the grammar-translation method.[7]
Innovation in foreign language teaching began in the 19th century and became very rapid in the 20th century. It led to a number of different and sometimes conflicting methods, each claiming to be a major improvement over the previous or contemporary methods. The earliest applied linguists included Jean Manesca, Heinrich Gottfried Ollendorff (1803–1865), Henry Sweet (1845–1912), Otto Jespersen (1860–1943), and Harold Palmer (1877–1949). They worked on setting language teaching principles and approaches based on linguistic and psychological theories, but they left many practical details for others to develop.[7]
The history of foreign-language education in the 20th century and the methods of teaching (such as those related below) might appear to be a history of failure. Very few students in U.S. universities who major in a foreign language attain "minimum professional proficiency." Even the "reading knowledge" required for a PhD degree is comparable only to what second-year language students read, and only very few researchers who are native English speakers can read and assess information written in languages other than English.[8]
However, anecdotal evidence for successful second or foreign language learning is easy to find, leading to a discrepancy between these cases and the failure of many language education programs. This tends to make the research of second-language acquisition emotionally charged. Older methods and approaches such as the grammar-translation method and the direct method may be dismissed and even ridiculed, as newer methods and approaches are invented and promoted as solutions to the problem of the high failure rates of foreign language students.
Some books on language teaching describe various methods that have been used in the past and end with the author's new method. These methods may reflect the author's views, and such presentations may de-emphasize relations between old and new methods. For example, some descriptive linguists seem to claim that there were no scientifically based language teaching methods before their work (which led to the audio-lingual method developed for the U.S. Army in World War II). However, there is significant evidence to the contrary.
Authors may also state that older methods were completely ineffective or have died out, when in reality even the oldest methods are still in use (e.g., the Berlitz version of the direct method). Proponents of new methods have been so sure that their ideas were new and correct that they could not conceive that the older ones had enough validity to cause controversy. This was in turn caused by an emphasis on new scientific advances, which has tended to blind researchers to precedents in older work.[8]: 5
There have been two major branches in the field of language learning, the empirical and the theoretical, and these have largely separate histories, with each gaining prominence at one time or another. The rivalry between the two camps is intense, with little communication or cooperation between them.[8]
Examples of scholars on the empiricist side include Jespersen, Palmer, and Leonard Bloomfield, who promoted mimicry and memorization with pattern drills. These methods follow from the basic empiricist position that language acquisition results from habits formed by conditioning and repetition; in its most extreme form, language learning is seen as being like any other learning in any species, with human language essentially the same as communication behaviors seen in other species. Examples of scholars on the theoretical side include Francois Gouin, M.D. Berlitz, and Emile B. De Sauzé, whose rationalist theories of language acquisition dovetail with linguistic work done by Noam Chomsky and others. These theories led to a wider variety of teaching methods, ranging from the grammar-translation method and Gouin's "series method" to the direct methods of Berlitz and De Sauzé. Using these methods, students generate original and meaningful sentences to gain a functional knowledge of the rules of grammar. These methods follow from the rationalist position that man is born to think, that language use is a uniquely human characteristic, and that it reflects an innately specified universal grammar. One associated idea is that human languages share many traits; another is that language learners can create sentences they have never heard before, and that these 'new' sentences can still be immediately understood by anyone who understands the language being produced.
Over time, language education has developed in schools and has become a part of the education curriculum around the world. In some countries, such as the United States, language education (also referred to as World Languages) has become a core subject along with main subjects such as English, Maths and Science.[9]
In some countries, such as Australia, it is now so common for a foreign language to be taught in schools that the subject of language education is referred to as LOTE, or Language Other Than English. In most English-speaking education centers, French, Spanish, and German are the most popular languages to study and learn. English as a Second Language (ESL) is also available for students whose first language is not English and who are unable to speak it to the required standard.
Language education may take place as a general school subject or in a specialized language school. There are many methods of teaching languages. Some have fallen into relative obscurity and others are widely used; still others have a small following but offer useful insights.
While sometimes used interchangeably, the terms "approach", "method" and "technique" are hierarchical concepts.
An approach is a set of assumptions about the nature of language and language learning. It does not involve procedure or provide any details about how such assumptions should be implemented in the classroom setting. Such assumptions can be related to second-language acquisition theory.
There are three principal approaches:
A method is a plan for presenting the language material to be learned. It should be based on a selected approach. In order for an approach to be translated into a method, an instructional system must be designed considering the objectives of the teaching/learning situation, how the content is to be selected and organized, the types of tasks to be performed, the roles of students, and the roles of teachers.
A technique (or strategy) is a very specific, concrete stratagem or mechanism designed to accomplish an immediate objective. Techniques are derived from the controlling method and, less directly, from the approach.[7]
As well as the three-tiered view above, an additional lens is that of humanistic language teaching, where a cluster of beliefs, attitudes and core concepts from humanistic psychology informs the approach, method and techniques employed. Earl Stevick and Gertrude Moskowitz, often regarded as humanistic language teacher educators, considered participation and student-centeredness as central to doing language teaching and being a language teacher. Humanistic language teaching has been strongly associated with many interactive methods.[10]
Hundreds of languages are available for self-study, from scores of publishers, for a range of costs, using a variety of methods.[11] The course itself acts as a teacher and has to choose a methodology, just as classroom teachers do.
Audio recordings use native speakers, and one strength is helping learners improve their accent.[12] Some recordings have pauses for the learner to speak. Others are continuous so the learner speaks along with the recorded voice, similar to learning a song.[13]
Audio recordings for self-study use many of the methods used in classroom teaching, and have been produced on records, tapes, CDs, DVDs, and websites.
Most audio recordings teach content words in the target language by using explanations in the learner's own language. An alternative is to use sound effects to show the meaning of words in the target language.[14][15] The only language in such recordings is the target language, and they are comprehensible regardless of the learner's native language.
Language books have been published for centuries, teaching vocabulary and grammar and relevant cultural information. The simplest books are phrasebooks to give useful short phrases for travelers, cooks, receptionists,[16] or others who need specific vocabulary. More complete books include more vocabulary, grammar, exercises, translation, and writing practice.
Also, various other "language learning tools" have entered the market in recent years.
Software can interact with learners in ways that books and audio cannot:
Websites provide various services geared toward language education. Some sites are designed specifically for learning languages:
Many other websites are helpful for learning languages, even though they are designed, maintained, and marketed for other purposes:
Some Internet content is free, often from government and nonprofit sites such as BBC Online, Book2, and the Foreign Service Institute, with no or minimal ads. Some is ad-supported, such as newspapers and YouTube. Some requires a payment.
Language learning strategies have attracted increasing focus as a way of understanding the process of language acquisition.
Listening is clearly used to learn, but not all language learners use it consciously. Listening to understand is one level of listening, but focused listening[24] is not something that most learners employ as a strategy. Focused listening is a strategy that helps students listen attentively without distractions. It is very important when learning a foreign language, as the slightest accent on a word can change the meaning completely.
Many people read to understand, but the strategy of reading text to learn grammar and discourse styles can also be used.[25] Parallel texts may be used to improve comprehension.
Alongside listening and reading exercises, practicing conversation skills can also improve language acquisition. Learners can gain experience in speaking foreign languages through in-person language classes, language meet-ups, university language exchange programs, online language learning communities, and travel to a country where the language is spoken.
Translation and rote memorization have traditionally been the two main strategies. Other strategies can also be used, such as guessing based on contextual clues and spaced repetition using various apps, games and tools (e.g. Duolingo and Anki). Knowledge about how the brain works can be utilized in creating strategies for how to remember words.[26]
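As a hedged sketch of the idea behind such spaced-repetition tools, the snippet below implements a simple Leitner-style box system. The review intervals and vocabulary items are invented for illustration; this is not the scheduling algorithm of Duolingo, Anki, or any other specific app.

```python
# Minimal Leitner-style spaced repetition: a correct answer moves a card to a higher
# box with a longer review interval, while a mistake sends it back to box 1.
from dataclasses import dataclass, field
from datetime import date, timedelta

INTERVALS = {1: 1, 2: 3, 3: 7, 4: 21, 5: 60}  # box -> days until the next review

@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 1
    due: date = field(default_factory=date.today)

def review(card: Card, correct: bool) -> None:
    """Update the card's box and next due date after one review."""
    card.box = min(card.box + 1, 5) if correct else 1
    card.due = date.today() + timedelta(days=INTERVALS[card.box])

card = Card(prompt="house", answer="casa")
review(card, correct=True)
print(card.box, card.due)  # box 2, next review in three days
```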
Esperanto is a constructed language created in 1887 by L. L. Zamenhof, a Polish-Jewish ophthalmologist who wanted to eliminate language barriers in international communication. Esperanto is based on Indo-European languages and has a highly regular grammar and writing system. It has been proposed that learning Esperanto can provide a propaedeutic effect for foreign language study. That is, studying Esperanto for one year and then studying another language afterward may result in greater proficiency in the long run than studying the target language only.[27][28][29][30][31][32] However, some of the findings from these studies are compromised by unclear objectives, brief or anecdotal reporting, and a lack of methodological rigor.[33]
Blended learning combines face-to-face teaching with distance education, frequently electronic, either computer-based or web-based.
The four basic language skills are listening, speaking, reading, and writing. However, other, more socially-based skills have been identified more recently. Examples include summarizing, describing, and narrating. In addition, more general learning skills such as study skills and knowing one's own best learning style have been applied to language classrooms.[34]
In the 1970s and 1980s, the four basic skills were generally taught in isolation in a very rigid order, such as listening before speaking. Since then, it has been recognized that people generally use more than one language skill at a time, leading to more integrated exercises.[34] Speaking is a skill that is often underrepresented in the traditional classroom because it is considered harder to teach and test. There are numerous texts on teaching and testing writing but relatively few on speaking.
More recent textbooks stress the importance of students working with other students in pairs and groups, sometimes the entire class. Pair and group work give opportunities for more students to participate more actively. However, supervision of pairs and groups is important to make sure everyone participates as equally as possible. Such activities also provide opportunities for peer teaching, where weaker learners can find support from stronger classmates.[34]
In foreign language teaching, the sandwich technique is the oral insertion of an idiomatic translation in the mother tongue between an unknown phrase in the learned language and its repetition, in order to convey meaning as rapidly and completely as possible. The mother tongue equivalent can be given almost as an aside, with a slight break in the flow of speech to mark it as an intruder.
When modeling a dialogue sentence for students to repeat, the teacher not only gives an oral mother tongue equivalent for unknown words or phrases, but also repeats the foreign language phrase before students imitate it: L2 → L1 → L2. For example, a German teacher of English would say the English phrase, insert its German equivalent almost as an aside, and then repeat the English phrase for the students to imitate.
Mother tongue mirroring is the adaptation of the time-honored technique of literal translation, or word-for-word translation, for pedagogical purposes. The aim is to make foreign constructions salient and transparent to learners while avoiding the technical jargon of grammatical analysis. It differs from literal translation and interlinear text since it takes into account the progress learners have made and focuses on only one specific structure at a time. As a didactic device, it can only be used to the extent that it remains intelligible to the learner, unless it is combined with a normal idiomatic translation.
Back-chaining is a technique used in teaching oral language skills, especially with polysyllabic or difficult words.[35] The teacher pronounces the last syllable, the student repeats, and then the teacher continues, working backwards from the end of the word to the beginning.[36]
For example, to teach the name Mussorgsky, a teacher will pronounce the last syllable, -sky, and have the student repeat it. Then the teacher will repeat it with -sorg- attached before: -sorg-sky, and all that remains is the first syllable: Mus-sorg-sky.
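The back-chaining drill can be sketched in a few lines of code; the syllable boundaries below are supplied by hand purely for illustration, since automatic syllabification is a separate problem.

```python
# Toy back-chaining drill: build the practice sequence from the last syllable backwards.
def back_chain(syllables: list[str]) -> list[str]:
    """Return the fragments a teacher would model, starting from the end of the word."""
    return ["-".join(syllables[i:]) for i in range(len(syllables) - 1, -1, -1)]

for step in back_chain(["Mus", "sorg", "sky"]):
    print(step)
# sky
# sorg-sky
# Mus-sorg-sky
```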
Code switching occurs when a language user alternates between two or more languages according to time, place, content, interlocutor, and other factors; it may occur, for example, in a multilingual or immigrant family.[37] The ability to code-switch, which involves shifts in phonetics, vocabulary, sentence structure, modes of expression and thought, and cultural reference, needs to be guided and developed in everyday communication. Most people, however, learn foreign languages in environments dominated by their native language, so their ability to code-switch is rarely exercised, which can reduce the efficiency of foreign language acquisition. Therefore, as a teaching strategy, code switching is used to help students better acquire conceptual competences and to provide rich semantic context for understanding specific vocabulary.[38]
Practices in language education may vary by region, but the underlying understandings which drive them are fundamentally similar. Rote repetition, drilling, memorization and grammar conjugation are used the world over, though regions sometimes differ in their preferred teaching methods. Language immersion is popular in some European countries but is not used very much in the United States, Asia or Australia.
Early childhood is a critical time for the mastery of language. Hearing infants know some phonological elements of the languages around them at birth.[39] By six months, infants recognize concrete words for things like food and body parts.[40] By two years, children produce sentences that are grammatically similar to those of adults, including in the types of errors that they make.[41] While young children's language is largely acquired naturally by living in a verbal communication environment, they can also benefit from more formal language education.[42][43]
For many people, compulsory education is when they have access to a second or foreign language for the first time. During this period, students receive the most formal foreign language instruction and academic support; they can get help and motivation from teachers and be spurred on by their peers. This is when much of the specialized learning needed to master vocabulary, grammar and verbal communication can take place.
Learning a foreign language during adulthood means pursuing a higher value of oneself by obtaining a new skill. At this stage, individuals have already developed the ability to supervise their own language learning. At the same time, however, the pressures of adult life can be an obstacle.
Compared to other life stages, this period is the hardest in which to learn a new language, owing to gradual brain deterioration and memory loss. Notwithstanding its difficulty, language education for seniors can slow this degeneration and promote active ageing.[44]
An increasing number of people are now combining holidays with language study in a country where the language is spoken natively. This enables the student to experience the target culture by meeting local people. Such a holiday often combines formal lessons, cultural excursions, leisure activities, and a homestay, perhaps with time to travel in the country afterwards. Language study holidays are popular across Europe (Malta and the UK being the most popular destinations) and Asia, owing to the ease of transportation and variety of nearby countries. They have also become increasingly popular in Central and South America, in countries such as Guatemala, Ecuador and Peru. As a consequence of this increasing popularity, several international language education agencies have flourished in recent years.[45] Though education systems around the world invest enormous sums of money in language teaching, the outcomes in terms of getting students to actually speak the language(s) they are learning outside the classroom are often unclear.[46]
With the increasing prevalence of international business transactions, it is now important to have multiple languages at one's disposal. Nine out of ten U.S. employers report a reliance on U.S.-based employees with language skills other than English, with one-third (32%) reporting a high dependency.[47]
The principal policy arguments in favor of promoting minority language education are the need for multilingual workforces, intellectual and cultural benefits, and greater inclusion in the global information society.[48] Access to education in a minority language is also seen as a human right, as granted by the European Convention on Human Rights and Fundamental Freedoms, the European Charter for Regional or Minority Languages and the UN Human Rights Committee.[49][50] Bilingual education has been implemented in many countries, including the United States, in order to promote both the use and appreciation of the minority language as well as the majority language concerned.[51]
Suitable resources for teaching and learning minority languages can be difficult to find and access, which has led to calls for the increased development of materials for minority language teaching. The internet offers opportunities to access a wider range of texts, audios and videos.[52] Language learning 2.0 (the use of web 2.0 tools for language education)[53] offers opportunities for material development for lesser-taught languages and for bringing together geographically dispersed teachers and learners.[54]
|
https://en.wikipedia.org/wiki/Language_education
|
A language exchange is a relationship between two or more people who have interactions around the exchange of language.[1] People typically join a language exchange to gain practice in a target language. Other reasons for joining might include cultural exchange or companionship.[2] Partners of a language exchange are usually native speakers of each other's target language. Meetings between language exchange partners can be held in person or via videoconferencing platforms. Potential challenges of language exchanges can involve differing motivations, cultural miscommunications or scheduling conflicts. Language exchanges are sometimes called tandem language learning.[3]
In modern contexts, a language exchange most often refers to the mutual teaching of partners' first languages. Language exchanges are generally considered helpful for developing language proficiency, especially in speaking fluency and listening comprehension. Language exchanges that take place through writing or text chats also improve reading comprehension and writing ability. The aim of language exchange is to develop and increase language knowledge and intercultural skills.[4] This is usually done through social interaction with the native speaker.[4] Given that language exchanges generally take place between native speakers of different languages, they may also improve participants' cross-cultural communication skills.
This practice has long been used by individuals to exchange knowledge of foreign languages. For example, John Milton gave Roger Williams an opportunity to practise Hebrew, Greek, Latin, and French, while receiving lessons in Dutch in exchange.[5] Language exchange programs first came about in the early 1800s, when school-aged children in England were introduced to the newly established scheme.[6] Countries such as Belgium and Switzerland found language exchange programs easy to run because many languages were spoken within a single country.[6] French and German youth took up language exchange in 1968, and the practice then spread to Turkey and Madrid.
American universities are increasingly experimenting with language exchanges as part of the language learning curriculum.[7] In this respect, language exchanges have a role similar to study abroad programs and language immersion programs in creating an environment where the language student must use the foreign language for genuine communication outside of a classroom setting. In such programs, international and American students can be paired up so that they may freely organize meetings that offer opportunities for communication and intercultural exchange.[8] In other university language exchange programs, students may join for practices like language tutoring, conversation groups, or social gatherings.[9]
Most language exchanges are set up through language learning websites and applications with platforms that accommodate the search and selection of potential language partners. Many of these networks offer the opportunity to select language partners based not only on target language, but also on country of origin, gender, age, and language proficiency level. Examples include HelloTalk, Tandem, and Conversation Exchange.
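A minimal sketch of the kind of matching such platforms perform is shown below; the profile fields and user records are invented for illustration, and real services apply far richer filters (age, interests, proficiency level, and so on).

```python
# Toy language exchange matching: pair users whose native and target languages complement.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    native: str   # native language
    target: str   # language the user wants to practice

def find_partners(me: Profile, others: list[Profile]) -> list[Profile]:
    """Return users who natively speak my target language and want to learn my native one."""
    return [p for p in others if p.native == me.target and p.target == me.native]

users = [
    Profile("Ana", native="Spanish", target="English"),
    Profile("Ben", native="English", target="Spanish"),
    Profile("Chloe", native="French", target="Spanish"),
]
print([p.name for p in find_partners(users[0], users[1:])])  # ['Ben']
```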
Language learning social networks offer language students the opportunity to find language partners from around the world. Many such platforms allow language exchange partners to text, as well as speak to one another through voice or video calls. Partners may also decide to communicate via instant messengers, voice-over-IP technologies, or other telecommunications platforms. Location and means permitting, connected partners may also later elect to meet in person.
Advances in language learning social networks have provided an outlet for foreign language students who previously had difficulty locating opportunities to practice their target language. Language exchange platforms often offer a wealth of eligible partners, with some boasting as many as several million users.[11] The diversity among the countries of origin of potential partners can mean the opportunity to experience a myriad of linguistic and cultural exchanges.[12]
Language exchanges have been viewed as a helpful tool to aid language learning at learning institutions and among individual learners. The benefit of most language exchanges is that they are often performed between native speakers. Practice with native speakers can not only provide more robust opportunities for feedback regarding linguistic elements such as pronunciation, grammar, and vocabulary, but also authentic listening practice.[13]
Another major benefit of language exchange is exposure to the native speaker's culture.[14] Learning about the culture of the places where one's target language is spoken not only enhances overall linguistic ability but can also broaden intercultural communication skills.[15]
Language exchanges can provide a friendly and informal environment for new language learners. Both speakers are trying to learn and understand, and such an atmosphere can reduce pressure on either partner.[16] This also gives the learning environment a fun and productive atmosphere.
An additional benefit is that people learn faster when they have a one-on-one connection with the "teacher".
Many people prefer to learn one-on-one but struggle to find a teacher; such learners are highly motivated to learn a new language. The native speakers who help them may also feel a new sense of motivation, since they are now responsible for teaching someone.[6][14]
Because both partners of a language exchange are generally seeking help with their language skills, usually neither partner compensates the other for the assistance they receive. A setup whereby only one partner provides help or is compensated for their services would typically not be referred to as a language exchange.
Online relationships can give rise to many of the same complications that may exist in real-life relationships. Sometimes remote language partners can have different motivations for joining into a language exchange. It can be disappointing when a partner’s goals for the relationship conflict with one's own; such disagreement of purpose can lead to an end of a language partnership.[17]
Personality mismatches can be as prevalent in online relationships as they can be in offline ones. Unresolved incompatibility issues can be even more difficult to overcome in remote relationships, however, and related factors can cause one or more participants of a language exchange to either gradually or abruptly withdraw from the relationship.[18]
Miscommunications can occur in any type of relationship, but they can be even more common between people from different cultures. Those who either anticipate or are otherwise prepared to deal with such misunderstandings may be better equipped for navigating online relationships with people of other cultures.[19]
Scheduling difficulties can exist between language partners from different regions throughout the world. Meetings between people located in different time zones can be an inconvenient fact of some language exchanges. In such cases, partners may need to compromise to select a meeting time which is not too disruptive to either person’s schedule.[20]
|
https://en.wikipedia.org/wiki/Language_exchange
|
Language immersion, or simply immersion, is a technique used in bilingual language education in which two languages are used for instruction in a variety of topics, including maths, science, or social studies. The languages used for instruction are referred to as the L1 and the L2 for each student, with L1 being the student's native language and L2 being the second language to be acquired through immersion programs and techniques. There are different types of language immersion, which depend on the age of the students, the class time spent in L2, the subjects taught, and the level of participation by speakers of the L1.
Although programs differ by country and context, most language immersion programs have the overall goal of promoting bilingualism between the two different sets of language speakers. In many cases, biculturalism is also a goal for speakers of the majority language (the language spoken by the majority of the surrounding population) and the minority language (the language that is not the majority language). Research has shown that such forms of bilingual education provide students with overall greater language comprehension and more native-like production of the L2, as well as greater exposure to other cultures and the preservation of languages, particularly heritage languages.
Bilingual education has taken on a variety of different approaches outside the traditional sink-or-swim model of full submersion in an L2 without assistance in the L1. According to the Center for Applied Linguistics (CAL), in 1971 there were only three immersion programs within the United States. As of 2011, there were 448 language immersion schools in the US, with the three main immersion languages of instruction being Spanish (45%), French (22%), and Mandarin (13%).[1]
The first French-language immersion program in Canada, with the target language taught as an instructional language, started in Quebec in 1965.[2] Since the majority language in Quebec is French, English-speaking parents wanted to ensure that their children could achieve a high level of French as well as English. Since then, French immersion has spread across the country and has become the most common form of language immersion in Canada. According to the 2011 CAL survey, there are over 528 immersion schools in the US, and language immersion programs have also spread to Australia, mainland China, Saudi Arabia, Japan and Hong Kong, altogether offering more than 20 languages. The survey also showed that Spanish is the most common immersion language in US programs, with over 239 Spanish-language immersion programs, largely because of immigration from Spanish-speaking countries. The other two most common immersion languages in the US are French and Mandarin, with 114 and 71 programs, respectively.[3]
Types of language immersion can be characterized by the total time students spend in the program and also by the students' age.
Types that are characterized by learning time:
Types that are characterized by age:
The stages of immersion can also be divided into:
People may also relocate temporarily to receive language immersion, which occurs when they move to a place (within their native country or abroad) where their native language is not the majority language of that community. For example, Canadian anglophones go to Quebec (see Explore and Katimavik), and Irish anglophones go to the Gaeltacht. Often, that involves a homestay with a family that speaks only the target language. Children whose parents emigrate to a new country also find themselves in an immersion environment with respect to their new language. Another method is to create a temporary environment in which the target language predominates, as in linguistic summer camps like the "English villages" in South Korea and parts of Europe.
Study abroad can also provide a strong immersion environment to increase language skills. However, many factors may affect immersion during study abroad, including the amount of foreign-language contact during the program.[12] To impact competence in the target language positively, Celeste Kinginger notes, research about language learning during study abroad suggests "a need for language learners' broader engagement in local communicative practices, for mindfulness of their situation as peripheral participants, and for more nuanced awareness of language itself."[13]
The task of organizing and creating such a program can be daunting and problematic, with everything from planning to the district budget posing issues. One method of implementation proposed by the Center for Advanced Research on Language Acquisition is a phase-in method, which starts with the lowest year participating in the program as the only year and adds a new grade of students into the program each year, working up towards high school.[14] This slow incorporation of an immersion program is useful for schools with limited funding and for those who are skeptical about its benefits, because it allows for yearly evaluation and, if it were to fail from the beginning, the impact of the loss is less significant.
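The year-by-year growth of such a phase-in can be sketched as below; the grade labels and horizon are assumptions made for illustration rather than figures from the Center for Advanced Research on Language Acquisition.

```python
# Sketch of a phase-in rollout: the program starts with the lowest grade only and
# adds one new entering grade each year as the original cohort moves up.
def phase_in(lowest: int = 0, highest: int = 5):
    """Yield (year, grades covered) until the program spans every grade."""
    for year in range(1, highest - lowest + 2):
        yield year, list(range(lowest, lowest + year))

for year, grades in phase_in():
    print(f"Year {year}: grades {grades[0]}-{grades[-1]} in immersion")
# Year 1: grades 0-0 in immersion ... Year 6: grades 0-5 in immersion
```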
The method of implementation is crucial to the success of the program, as the RAND Institute has concluded that the final result of these programs is positive, but only so long as implemented correctly, meaning consistency and strict adherence to the curriculum in the classroom.[15]
Studies have shown that students who study a foreign language in school, especially those who start in elementary school, tend to receive higher standardized test scores than students who have not studied a foreign language in school.[18] According to additional research, learning another language can also help students do better in math, focusing, and remembering.[19] Students who study foreign languages also tend to have increased mental capabilities, such as creativity and higher-order thinking skills (see cognitive advantages of bilingualism), and have advantages in the workplace, such as higher salaries and a wider range of opportunities, since employers are increasingly seeking workers with knowledge of different languages and cultures.[20] Bilingual immersion programs are intended to foster proficiency or fluency in multiple languages and therefore maximize these benefits. Even if fluency in the desired language is not fully attained, bilingual immersion programs provide a strong foundation for fluency later in life and help students gain appreciation of languages and cultures other than their own.[21]
There are no long-term adverse effects of bilingual education on the learning of the majority language, regardless of whether the students' first language (L1) is a majority or a minority language or of the organization of the educational program. Several observed outcomes of bilingual education are the transfer of academic and conceptual knowledge across both languages, greater success in programs that emphasize biliteracy as well as bilingualism, and better developed second-language (L2) literary skills for minority students than if they received a monolingual education in the majority language.[22]
Language immersion programs with the goal of fostering bilingualism, Canada's French-English bilingual immersion program being one of the first, initially reported that students receive standardized test scores that are slightly below average. That was true in Canada's program, but by Grade 5, there was no difference between their scores and the scores of students who were instructed only in English. The English spelling abilities soon matched those of the English-only students. Ultimately, students did not lose any proficiency in English and were able to develop native-like proficiency in French reading and comprehension but they did not quite reach native-like proficiency in spoken and written French. However, the immersion program is seen as providing a strong foundation for oral French fluency later in life,[10]and other similar programs that might not fully reach their projected goals may also be seen in the same light.
Programs with the goal of preserving heritage languages, such as Hawaii's language immersion program, have also reported initial outcomes of below-average test scores on standardized tests. However, the low test scores may not have been caused by purely language-related factors. For example, there was initially a lack of curriculum material written in Hawaiian, and many of the teachers were inexperienced or unaccustomed to teaching in Hawaiian. Despite the initial drawbacks, the Hawaiian program was overall successful in preserving Hawaiian as a heritage language, with students in the program being able to speak Hawaiian fluently while they learned reading, writing, and math, which were taught in Hawaiian.[23]
Partial immersion programs do not have the initial lag in achievement of the programs of Canada and Hawaii but are less effective than full immersion programs, and students generally do not achieve native-like L2 proficiency.[24]
The first issue is the allocation of time given to each language. Educators have thought that more exposure to the students' L2 will lead to greater L2 proficiency,[25] but it is difficult for students to learn abstract and complex concepts through the L2 alone. Different types of language immersion schools allocate different amounts of time to each language, but there is still no evidence that any particular allocation is best.[26]
In the United States, state and local governments provide a curriculum for teaching students in only one language; there is no standard curriculum for language immersion schools.[27]
In addition, the states do not provide assistance in how to promote biliteracy, and bilingual teaching has been under-researched. A 2013 report by the Council of the Great City Schools showed that half of city schools lack professional bilingual teaching instructors.[28]
There are challenges in developing high proficiency in two languages or balanced bilingual skills, especially for early immersion students. Children complete the development of their first language by the age of 7, and the L1 and L2 affect each other during language development.[29] High levels of bilingual proficiency are hard to achieve, and students with more exposure generally do better. For second-language immersion schools, however, immersion in a second language that begins too early can leave students short of proficiency in their first language.
As of 2009, about 300,000 Canadian students (roughly 6% of the school population) were enrolled in immersion programs. In early immersion, L1 English-speakers are immersed in French in their education for 2 to 3 years prior to formal English education. This early exposure prepares Canadian L1 English speakers for the 4th grade, when they begin to be instructed in English 50% of the time and French the other 50%.[10]
In the United States, and since the 1980s, dual immersion programs have grown for a number of reasons: competition in a global economy, a growing population of second-language learners, and the successes of previous programs.[30] Language immersion classes can now be found throughout the US, in urban and suburban areas, in dual-immersion and single-language immersion, and in an array of languages. As of May 2005, there were 317 dual immersion programs in US elementary schools, providing instruction in 10 languages, and 96% of those programs were in Spanish.[31]
The 1970s marked the beginning of bilingual education programs in Hawaii. The Hawaiian Language Program was geared to promote cultural integrity by emphasizing native-language proficiency through heritage language bilingual immersion instruction. By 1995, there were 756 students enrolled in the Hawaiian Language Immersion Program from K to 8. The program was taught strictly in Hawaiian until Grades 5 and 6, when English was introduced as the language of instruction for one hour per day. The Hawaiian Language immersion Program is still in effect today for K-12. With an emphasis on language revival, Hawaiian is the main medium of instruction until Grade 5, when English is introduced but does not usurp Hawaiian as the main medium of instruction.[23]
A study by Hamel (1995) highlights a school in Michoacan, Mexico, which focuses on two bilingual elementary schools in which teachers built a curriculum that taught all subjects, including literature and math, in the children’s L1: P’urhepecha. Years after the curriculum was implemented in 1995, researchers conducted a study comparing L1 P’urhepecha students with L1 Spanish students. Results found that students who had acquired L1 P’urhepecha literacy performed better in both languages (P’urhepecha and Spanish) than students who were L1 Spanish literate.[10]
New Zealand shows another instance of heritage bilingual immersion programs. Established in 1982, full Māori-language immersion education strictly forbids the use of English in classroom instruction even though English is typically the students' L1. That has created challenges for educators because of the lack of tools and underdeveloped bilingual teaching strategy for Māori.[10]
A study by Williams (1996) looked at the effects bilingual education had on two different communities in Malawi and Zambia. In Malawi, Chichewa is the main language of instruction, and English is taught as a separate course. In Zambia, English is the main language of instruction, and the local language, Nyanja, is taught as a separate course. Williams's study took children from six schools in each country in Grade 5. He administered two tests: an English-language reading test, and a mother-tongue reading test. One result showed that there was no significant difference in the English reading ability between the Zambian and Malawian school children. However, there were significant differences in the proficiency of mother tongue reading ability. The results of the study showed that the Malawian students did better in their mother tongue, Chichewa, than Zambian children did in their mother tongue, Nyanja.[10]
|
https://en.wikipedia.org/wiki/Language_immersion
|
Language MOOCs (Language Massive Open Online Courses, or LMOOCs) are web-based online courses, freely accessible for a limited period of time, created for those interested in developing their skills in a foreign language. As Sokolik (2014)[1] states, enrolment is large, free and not restricted by age or geographic location. They have to follow the format of a course, i.e., include a syllabus and schedule and offer the guidance of one or several instructors. MOOCs themselves are not new, since courses with these characteristics had been available online for some time before Dave Cormier coined the term 'MOOC' in 2008.[2] Furthermore, MOOCs are generally regarded as the natural evolution of OERs (open educational resources), which are freely accessible materials used in education for teaching, learning and assessment.
Although there seem to be very few examples of LMOOCs offered by MOOC providers, authors such as Martín-Monje & Barcena (2014)[3] argue that these open online courses can be effectively designed to facilitate the development of communicative language competences in potentially massive and highly heterogeneous groups, whose main shared interest is to learn a foreign language. Scholarly research is equally incipient in the field, with only two monographs published to date on the topic.[3][4] These volumes, considered milestones of the emerging field, are based upon work taken from the well-established discipline of CALL (computer-assisted language learning), which has long proven the suitability of TELL (technology-enhanced language learning).[5][6][7][8][9][10]
The first LMOOCs started to appear in October 2012. Examples include three LMOOCs launched by the Spanish National Distance University (UNED): for English, "Learn the first thousand words" (which had 45,102 students) and "Professional English" (33,588 students), and for German, "German for Spanish speakers" (22,438 students).[11] The British Open University also started its Open Translation MOOC around the same time (it did not use the term "LMOOC", since the term did not yet exist). Another early course, "SpanishMOOC", integrated social media tools such as Skype and Google Hangouts in order to enhance synchronous oral interaction.[12][13]
Another early example was Todd Bryant's[14] joint launch of "English MOOC: Open Course for Spanish Speakers learning English" and "MOOC de Español: Curso abierto para hablantes de inglés que deseen mejorar su español", using his exchange website The Mixxer to connect language learners with native speakers for mutual exchanges. There have been some attempts to compile lists of LMOOC providers and available courses,[3][15][16] but it seems an impossible task to keep abreast of the constant changes in the MOOC panorama. Furthermore, LMOOCs have recently received attention from governmental institutions, and there is one European project that specifically focuses on LMOOCs, namely the LangMOOC project,[17] as well as others, such as the ECO project, which include LMOOCs in their catalogue.
In order to be effective, Read (2015)[18] argues, LMOOCs require a set of tools and technologies that are appropriate for students to train the relevant receptive and productive language skills as they would in real-world communicative situations. The possibilities for such technological mediation depend on the type of LMOOC proposed. Several types of course are identified in the literature, but the two most common are cMOOCs and xMOOCs.[19][20][21][22][23] The former, inspired by the notions of open education (techniques and resources), do not run on a single platform (they are distributed across many) and promote immersion and interaction. The latter usually represent a continuation of other types of e-learning courses that institutions have undertaken and therefore have a similar course structure, following standard face-to-face educational models.
For LMOOCs based upon an xMOOC platform, the resources and tools available to students typically include: textual materials in the form of Web pages, structured PDF files or URLs to content outside the platform; audiovisual recordings (often developed and stored on social video sites such as YouTube or Vimeo); tasks and exercises, such as closed multiple-choice tests that are the basic evaluation mechanism, and open answers based upon free writing, which can be compared to model answers or evaluated using peer-to-peer correction; and, lastly, forums, which are a key component for learners to interact and practice mediated communication in the target language, providing a valuable mechanism for students to help each other and answer their peers' questions. For LMOOCs based upon the cMOOC model, a range of online Web 2.0 tools can be used to enable students to remix, repurpose and co-create content and interaction, promoting the community nature of collaborative and social learning. xMOOC activities are typically highly structured and may not, as such, provide students with the communicative opportunity to use what has just been seen or heard in an open and flexible way, including fine-grained feedback of different and complementary types. Conversely, the often unstructured and constantly changing nature of cMOOCs, together with what Brennan (2014) notes as the cognitive load related to the sheer volume of information available (tweets, posts, etc.), the varying difficulty of activities (with little if any guidance), and the need to use different tools and platforms, can present learners with additional difficulties. Current research (Sokolik, 2014) attempts to combine the benefits of both types of model.[1]
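One of the mechanisms mentioned above, comparing a learner's free-written answer with a model answer, can be illustrated with a deliberately crude similarity measure; the scoring rule and sentences are invented for this sketch, and a real LMOOC would use richer NLP or peer review.

```python
# Crude comparison of a free-written answer against a model answer (word-overlap score).
def overlap_score(answer: str, model: str) -> float:
    """Jaccard similarity between the word sets of the two texts (0.0 to 1.0)."""
    a, b = set(answer.lower().split()), set(model.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

print(round(overlap_score("I am going to the cinema tonight",
                          "I am going to the cinema this evening"), 2))  # 0.67
```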
Castrillo de Larreta-Azelain (2014), in one of the first published papers focusing on the roles, competences and methodological strategies of teachers in LMOOCs on the basis of empirical research, identifies their main roles from a theoretical and practical standpoint.[24] The proposed framework links to Salmon's theoretical tutoring model[25] (Salmon, 2003) and is based on Hampel & Stickler's skills pyramid[26] (Hampel & Stickler, 2005), although focusing on Crompton's framework[27] (Crompton, 2009), which includes the three major sets of online language teaching: technology, pedagogy and evaluation.
The author's proposed model for redefining the teacher's role in this area is designed according to the different stages present in a LMOOC. The main task of the teaching teams in LMOOCs is shifted almost completely to the design and elaboration of the course before it actually takes place. It is argued that the instructional design necessary for the course requires a systematic, sequential plan based on Mastery Learning that consists of four steps. Moreover, the application of heuristic strategies to help present and transmit the contents of the course is suggested.
In LMOOCs, teachers need to become curators, facilitators, leaders and administrators, solving problems, suggesting complementary material, moderating forums, motivating students, and overseeing the whole learning experience during the course. Finally, before, during, and after the LMOOC, instructors are also researchers, collectors, and analyzers of learners' data.
As for students, Anderson et al. (2014)[28] identify five different possible roles that MOOC participants can adopt:
"1. Viewers, who primarily watch lectures, handing in few – if any – assignments. 2. Solvers, who mainly hand in assignments for a grade, viewing few – if any – of the audiovisual materials. 3. All-rounders, those who balance the watching of the videos with the handing in of assignments. 4. Collectors, who mostly download lectures, handing in a few assignments. Unlike the viewers, they may or may not be actually watching the lectures. 5. Bystanders, those who registered for the course but may not even log in at all."
MOOCs are an example of the evolution of e-learning environments towards a computer- and mobile-based scenario in which social technologies lead to the emergence of new kinds of learning applications[29][30] that enhance communication and collaboration processes.[31] These applications should take advantage of mobility and the ubiquity of Internet access, exploring successful practices for education. However, access to MOOC platforms still presents barriers: learning resources and communication tools often lack accessibility, and user interfaces are rarely personalized. These issues add extra difficulties, such as the need for students with functional diversity to develop specific digital or even social skills.
Students using assistive technologies may have problems navigating the MOOC environment, accessing the platform (registration process), and even using the learning content contained in the platform. A driving force has been precisely the beneficial application of multimedia and audiovisual content in education to favor language learning: the majority of web applications and pages are based on collections of shared visual and audio-visual resources (such as Flickr or YouTube), and MOOCs are likewise full of video presentations, animations and automatic self-assessment (some of it multimedia-based). This introduction of audiovisual content into e-Learning platforms adds a new difficulty to the accessibility requirements, since it introduces new elements that widen the digital divide, and not only for people with disabilities. How MOOCs are designed, how their interfaces work, how communication is handled, how assessments take place (for instance, the way a student has to record his/her audio for a speaking task) and what form the learning content takes all impact on the accessibility of these systems. The challenge for any language learning environment is one of accessibility in terms of the community with whom it wishes to engage, ensuring that processes such as enrolling in a course, navigating the system, accessing learning and assessment materials, and interacting with peers are achievable through the use of assistive technologies. Moreover, in accessible language learning there are still some challenges to be faced, namely:
A MOOC's interface design is often determined by the platform, since some of the features – the learning and testing tools – cannot be edited or customized by the academic assistants. Its materials and its mode of delivery might adhere to a set of accessibility standards, but the majority of learning activities continues to take place using hardware and software that was not designed specifically for educational applications, and hence usability issues often arise. Moreover, there are technical problems or incompatibilities when the required technology is not available, or when it is not possible to obtain materials in alternative formats. MOOC environments also typically contain a variety of components that do not always share a consistent interface logic or interactive elements, ranging from posting in a forum and filling in tests or timed quizzes to playing embedded videos or downloading a variety of document formats. Video lectures are key elements in the MOOC model, and the hurdles of interacting with the platform or content should be minimized. However, alternative accessible formats, subtitles and/or sign language interpreters for audiovisual materials, and audio-description recordings are not easily available, even though good guidelines exist, such as Sánchez (2013).[32]
The pedagogical and visual design of MOOCs, their information architecture, usability, and visual and interaction design could have a negative impact on student engagement, retention and completion rates, as has previously been observed in adult learning.[33] When designing a service based on MOOCs to be used by people with functional diversity, it is important to consider the accessibility level of each part of the system and also the role of meta-information related to functional diversity, for instance to define specific user profiles.
Although the usual accessibility barriers may remain, the model of large-scale participation and social accessibility[34] could be used to support special-needs users by providing peer assistance in terms of study skills, content adaptation[35] and remote assistance. If enough interaction between users exists, students within the system can learn from their fellow students and make a contribution by helping them. In the end, resources can be media-enriched, achieving a greater level of quality: transcriptions for mind mapping, audio recordings for podcasting, etc. All of these resources can be grouped into learning resource collections that benefit all of the stakeholders and the variety of ubiquitous learning processes.
The flexibility of the language learning service offered by MOOCs to learn at any time, place and pace, enhancing continuous communication and interaction between all participants in knowledge and community building, especially benefits this disadvantaged group which can, therefore, improve their level of employability and social inclusion, where language learning plays such an important role.
The MOOC model, while opening up education to a larger audience, also faces difficulties that will have to be overcome before it can replace other approaches to online teaching and learning. Some of these challenges are general to all MOOCs[36][37] and others, as claimed by Barcena et al. (2014),[38] are specific to language courses.[38][39][40] Regarding the former, given that most courses are essentially xMOOCs, they are intended to provide the same learning experience to all students who undertake them, thereby limiting possibilities for individual instruction and personalized learning. A further problem is that of student assessment: how to do it and how to prevent cheating.[41][42] Closed tests are typically used, but they lack the flexibility of open written or oral answers. High dropout rates and the associated lack of participation within the forums also limit the possibilities for collaborative learning, so necessary for the development of many different competences. Finally, there is the economic issue of how to cover the expenses of preparing, running and managing a course. The latter, the challenges specific to a Language MOOC, reflect the nature of learning a language as a skill-acquisition process as well as one of knowledge assimilation, in which students need to actually use and apply the linguistic structures they are learning in a realistic setting with quality (near-)native feedback.
Research on language MOOCs, and on related technology and methodology, offers ways to address some of these challenges, motivating students and involving them more fully in learning activities related to the development of their second-language competences.[3] Furthermore, as the nature of society changes, so too will the way in which online language learning is undertaken. As in other areas of online learning, the role of mobile devices is becoming ever more important, leading to the notion of mobile-driven or mobile-assisted LMOOCs, or MALMOOCs,[39] where such devices go beyond being portable course clients to act as sensor-enabled, extensible app-based devices that can extend language learning into everyday real-world activities. Other emerging educational technologies that will arguably be important for LMOOCs include learning analytics, gamification, personal learning networks, and adaptive and automated assessment.
|
https://en.wikipedia.org/wiki/Language_MOOC
|
Self-studylanguage acquisitionprograms allow learning without having a teacher present,[1][2]and the courses can supplement or replace classroom instruction.[3]Universities use self-study programs for less-commonly taught languages, where having professors is not feasible.[4][5]Self-study programs are available on paper, audio files, video files, smartphone apps, computers, or any combination.[6]
This list is limited to programs that teach four or more languages. There are many others that teach one language.
Alphabetical lists of languages show the courses available to learn each language, at All Language Resources, Lang1234, Martindale's Language Center, Omniglot, and Rüdiger Köppe. (The UCLA Language Materials Project has ended.) For the thousands of languages not listed on those sites, for which no course exists, Global Recordings Network has recorded a standard set of Bible stories in 6,000 languages. With effort, learners can study any language by comparing these recordings to the same story in a language they know.[7]
The list of self-study programs, below, shows the number of languages taught by each program, the name of the program, and the number of different languages used for instruction. Multiple languages of instruction may be available for some but not all courses. For example, Reise Know-How uses six languages to teach German, but only German to teach the other languages. On the other hand, Eurotalk, Pronunciator and 50Languages use all languages to teach all the other languages.
|
https://en.wikipedia.org/wiki/List_of_language_self-study_programs
|
This article contains a list of notable flashcard software. Flashcards are widely used as a learning drill to aid memorization by way of spaced repetition.
|
https://en.wikipedia.org/wiki/List_of_flashcard_software
|
An online learning community is a public or private destination on the Internet that addresses its members' learning needs by facilitating peer-to-peer learning through social networking, computer-mediated communication, or the use of datagogies, as people work as a community to achieve a shared learning objective. Learning objectives may be proposed by the community owner or may arise out of discussions between participants that reflect personal interests. In an online learning community, people share knowledge via textual discussion (synchronous or asynchronous), audio, video, or other Internet-supported media. Blogs blend personal journaling with social networking to create environments with opportunities for reflection.[citation needed]
According to Etienne Wenger, online learning communities are environments conducive to communities of practice.[1]
Types of online learning communities include e-learning communities (groups interact and connect solely via technology) and blended learning communities (groups utilize face-to-face meetings as well as online meetings). Based on Riel and Polin (2004), intentional online learning communities may be categorized as knowledge-based, practice-based, and task-based. Online learning communities may focus on personal aspects, process, or technology. They may use technology and tools in many categories:
|
https://en.wikipedia.org/wiki/Online_learning_community
|
Promova (in English: /prɔˈmɔvʌ/) is a language learning platform that includes a mobile app, website, personal and group lessons with tutors, and a conversation club.[1][2][3] Starting in 2024, language courses include AI learning tools for conversational practice and pronunciation recognition.[4]
Promova was launched in 2019. Before that, the company was known as Ten Words. The app evolved into a comprehensive language-learning platform by 2022 and was rebranded as "Promova."[5]
In 2021, Andrew Skrypnyk, the company's CEO, was included in Forbes Ukraine's 30 Under 30 list.[6][7][8][9]
In May 2023, Promova launched its Korean language course. The version was created by Elly Kim, a linguist with Korean roots living in Ukraine.[10]
On August 24, 2023, the Independence Day of Ukraine, Promova launched a Ukrainian language course, including 48 bite-sized lessons and flashcards with information about Ukrainian culture.[11][12][13]
In October 2023, Promova became the first language-learning platform to release aDyslexiamode, designed to make it easier for people with dyslexia to learn a new language.[14][15][16][17]The mode uses Dysfont, a specialized typeface created by dyslexic designer Martin Pysny.[18][19][20][21][22]
In November 2023, Promova provided all Ukrainians with three years of free access to its language courses as part of Ukraine's Future Perfect national program.[23]This initiative, launched by the Ukrainian government and the Ministry of Digital Transformation, supports President Zelensky's law recognizing English as the official language of international communication and aims to improve English proficiency among Ukrainians.[24][25][26][27][28][29]
In December 2023, Promova was recognized as one of the 25 most prominent Ukrainian startups by Forbes magazine.[30][31]
In April 2024, on National ASL Day in the US, Promova launched a free American Sign Language course. Part of the course covers communication in emergencies, such as asking for help, warning about a fire, and expressing the need to call the police or a doctor.[32]
In June 2024, Promova won the 2024 EdTechX Awards in the Language learning category.[33][34]
As of 2025, Promova has 12 language learning courses:[35][36][37][38][39]
|
https://en.wikipedia.org/wiki/Promova
|
Second-language acquisition (SLA), sometimes called second-language learning or L2 (language 2) acquisition, is the process of learning a language other than one's native language (L1). SLA research examines how learners develop their knowledge of a second language, focusing on concepts like interlanguage, a transitional linguistic system with its own rules that evolves as learners acquire the target language.
SLA research spans cognitive, social, and linguistic perspectives. Cognitive approaches investigate memory and attention processes; sociocultural theories emphasize the role of social interaction and immersion; and linguistic studies examine the innate and learned aspects of language. Individual factors like age, motivation, and personality also influence SLA, as seen in discussions on the critical period hypothesis and learning strategies. In addition to acquisition, SLA explores language loss, or second-language attrition, and the impact of formal instruction on learning outcomes.
Second language refers to any language learned in addition to a person's first language; although the concept is called second-language acquisition, it can also incorporate the learning of third, fourth, or subsequent languages.[1] Second-language acquisition refers to what learners do; it does not refer to practices in language teaching, although teaching can affect acquisition. The term acquisition was originally used to emphasize the non-conscious nature of the learning process,[note 1] but in recent years learning and acquisition have become largely synonymous.
SLA can incorporate heritage language learning,[2] but it does not usually incorporate bilingualism. Most SLA researchers see bilingualism as being the result of learning a language, not the process itself, and see the term as referring to native-like fluency. Writers in fields such as education and psychology, however, often use bilingualism loosely to refer to all forms of multilingualism.[3] SLA is also not to be contrasted with the acquisition of a foreign language; rather, the learning of second languages and the learning of foreign languages involve the same fundamental processes in different situations.[4]
The academic discipline of second-language acquisition is a sub-discipline of applied linguistics. It is broad-based and relatively new. As well as the various branches of linguistics, second-language acquisition is also closely related to psychology and education. To separate the academic discipline from the learning process itself, the terms second-language acquisition research, second-language studies, and second-language acquisition studies are also used.
SLA research began as an interdisciplinary field; because of this, it is difficult to identify a precise starting date.[5] However, two papers in particular are seen as instrumental to the development of the modern study of SLA: Pit Corder's 1967 essay The Significance of Learners' Errors and Larry Selinker's 1972 article Interlanguage.[6] The field saw a great deal of development in the following decades.[5] Since the 1980s, SLA has been studied from a variety of disciplinary and theoretical perspectives. In the early 2000s, some research suggested an equivalence between the acquisition of human languages and that of computer languages (e.g. Java) by children in the 5 to 11-year age window, though this has not been widely accepted among educators.[7] Significant approaches in the field today are systemic functional linguistics, sociocultural theory, cognitive linguistics, Noam Chomsky's universal grammar, skill acquisition theory and connectionism.[6]
There has been much debate about exactly how language is learned and many issues are still unresolved. There are many theories of second-language acquisition, but none is accepted as a complete explanation by all SLA researchers. Due to the interdisciplinary nature of the field of SLA, this is not expected to happen in the foreseeable future, although attempts have been made to provide a more unified account that bridges first-language acquisition and second-language learning research.[8]
The time taken to reach a high level of proficiency can vary depending on the language learned. In the case of native English speakers, some estimates were provided by the Foreign Service Institute (FSI) of the U.S. Department of State, which compiled approximate learning expectations for several languages for their professional staff (native English speakers who generally already know other languages).[9] Category I Languages include, for example, Italian and Swedish (24 weeks or 600 class hours) and French (30 weeks or 750 class hours). Category II Languages include German, Haitian Creole, Indonesian, Malay, and Swahili (approx. 36 weeks or 900 class hours). Category III Languages include many languages, such as Finnish, Polish, Russian, Tagalog, and Vietnamese (approx. 44 weeks, 1100 class hours).
Determining a language's difficulty can depend on a few factors like grammar and pronunciation. For instance, Norwegian is one of the easiest languages to learn for English speakers because its vocabulary shares many cognates and has a sentence structure similar to English.[10]
Of the 63 languages analyzed, the five most difficult languages to reach proficiency in speaking and reading, requiring 88 weeks (2200 class hours,Category IV Languages), are Arabic, Cantonese, Mandarin, Japanese, and Korean. The Foreign Service Institute and theNational Virtual Translation Centerboth note that Japanese is typically more difficult to learn than other languages in this group.[11]
There are other rankings of language difficulty, such as the one by the British Foreign Office Diplomatic Service Language Centre, which lists the most difficult languages in Class I (Cantonese, Japanese, Korean, Mandarin); the easier languages are in Class V (e.g. Afrikaans, Bislama, Catalan, French, Spanish, Swedish).[12]
Adults who learn a second language differ from childrenlearning their first languagein at least three ways: children are still developing their brains whereas adults have mature minds, and adults have at least a first language that orients their thinking and speaking. Although some adult second-language learners reach very high levels of proficiency, pronunciation tends to be non-native. This lack of native pronunciation in adult learners is explained by thecritical period hypothesis. When a learner's speech plateaus, it is known asfossilization.
Also, when people learn a second language, the way they speak their first language changes in subtle ways. These changes can be with any aspect of language, from pronunciation and syntax to the gestures the learner makes and the language features they tend to notice.[13]For example, French speakers who spoke English as a second language pronounced the /t/ sound in French differently from monolingual French speakers.[14]This kind of change in pronunciation has been found even at the onset of second-language acquisition; for example, English speakers pronounced the English /p t k/ sounds, as well as English vowels, differently after they began to learn Korean.[15]These effects of the second language on the first ledVivian Cookto propose the idea ofmulti-competence, which sees the different languages a person speaks not as separate systems, but as related systems in their mind.[16]A 2025 study found that adult learners can attune to the prosody of a new language after brief exposure, but that concurrent exposure to orthography—especially deep or unfamiliar scripts—hampers this ability. This suggests that difficulties with second-language prosody may be influenced by learning conditions, not just age-related factors.[17]
Originally, attempts to describe learner language were based on comparing different languages or on analyzing learners' errors.[18] However, these approaches could not fully predict all the errors learners make during the process of acquiring a second language. To address this limitation and explain learners' systematic errors, the concept of interlanguage was introduced.[19] Interlanguage refers to the linguistic system that emerges in the minds of second language learners. It is not considered a defective version of the target language riddled with random errors, nor is it purely a result of errors transferred from the learner's first language. Instead, it is viewed as a language in its own right, with its own systematic rules.[20] Most aspects of language—syntax, phonology, lexicon, and pragmatics—can be analyzed from the perspective of interlanguage. For more detailed information, please refer to the main article on Interlanguage.
In the 1970s, several studies investigated the order in which learners acquired different grammatical structures.[note 2]These studies showed that there was little change in this order among learners with different first languages. Furthermore, it showed that the order was the same for adults and children and that it did not even change if the learner had language lessons. This supported the idea that there were factors other than language transfer involved in learning second languages and was a strong confirmation of the concept of interlanguage.
However, the studies did not find that the orders were exactly the same. Although there were remarkable similarities in the order in which all learners learned second-language grammar, there were still some differences between individuals and between learners with different first languages. It is also difficult to tell when exactly a grammatical structure has been learned, as learners may use structures correctly in some situations but not in others. Thus it is more accurate to speak of sequences of acquisition, in which specific grammatical features in a language are acquired before or after certain others, but the overall order of acquisition is less rigid.
Recent studies have shown that universality and individuality coexist in the order of grammatical item acquisition.[22]For example, items such as articles, tense, and the progressive aspect are particularly challenging for learners whose native languages, like Japanese and Korean, do not explicitly express these features. On the other hand, items like the third-person singular -s tend to be less influenced by the learner's native language. In contrast, articles and the progressive -ing have been confirmed to be strongly affected by the learners' native language. For more detailed information, please refer to the main articles onOrder of acquisition.
Learnability has emerged as a theory explaining developmental sequences that crucially depend on learning principles, which are viewed as fundamental mechanisms of interlanguage language acquisition within learnability theory.[23]Some examples of learning principles include the uniqueness principle and the subset principle. The uniqueness principle refers to learners' preference for a one-to-one mapping between form and meaning, while the subset principle posits that learners are conservative in that they begin with the narrowest hypothesis space that is compatible with available data. Both of these principles have been used to explain children's ability to evaluate grammaticality despite the lack of explicit negative evidence. They have also been used to explain errors in SLA, as the creation of supersets could signal over-generalization, causing acceptance or production of ungrammatical sentences.[24]
Pienemann'steachability hypothesisis based on the idea that there is a hierarchy of stages of acquisition and instruction in SLA should be compatible with learners' current acquisitional status.[25]Recognizing learners' developmental stages is important as it enables teachers to predict and classify learning errors. This hypothesis predicts that L2 acquisition can only be promoted when learners are ready to acquire given items in a natural context. One goal of learnability theory is to figure out which linguistic phenomena are susceptible to fossilization, wherein some L2 learners continue to make errors despite the presence of relevant input.
Although second-language acquisition proceeds in discrete sequences, it does not progress from one step of a sequence to the next in an orderly fashion. There can be considerable variability in features of learners' interlanguage while progressing from one stage to the next.[26] For example, in one study by Rod Ellis, a learner used both "No look my card" and "Don't look my card" while playing a game of bingo.[27] A small fraction of variation in interlanguage is free variation, when the learner uses two forms interchangeably. However, most variation is systemic variation, a variation that depends on the context of utterances the learner makes.[26] Forms can vary depending on the linguistic context, such as whether the subject of a sentence is a pronoun or a noun; they can vary depending on social contexts, such as using formal expressions with superiors and informal expressions with friends; and also, they can vary depending on the psycholinguistic context, or in other words, on whether learners have the chance to plan what they are going to say.[26] The causes of variability are a matter of great debate among SLA researchers.[27]
One important difference between first-language acquisition and second-language acquisition is that the process of second-language acquisition is influenced by languages that the learner already knows. This influence is known aslanguage transfer.[note 3]Language transfer is a complex phenomenon resulting from the interaction between learners’ prior linguistic knowledge, the target language input they encounter, and their cognitive processes.[28]Language transfer is not always from the learner’s native language; it can also be from a second language or a third.[28]Neither is it limited to any particular domain of language; language transfer can occur in grammar, pronunciation, vocabulary, discourse, and reading.[29]
For more detailed information, please refer to the main articles on Language transfer and Crosslinguistic influence.
Much modern research in second-language acquisition has taken a cognitive approach.[30]Cognitive research is concerned with the mental processes involved in language acquisition, and how they can explain the nature of learners' language knowledge. This area of research is based in the more general area ofcognitive scienceand uses many concepts and models used in more general cognitive theories of learning. As such, cognitive theories view second-language acquisition as a special case of more general learning mechanisms in the brain. This puts them in direct contrast with linguistic theories, which posit that language acquisition uses a unique process different from other types of learning.[31][32]
The dominant model in cognitive approaches to second-language acquisition, and indeed in all second-language acquisition research, is the computational model.[32]The computational model involves three stages. In the first stage, learners retain certain features of the language input in short-term memory. (This retained input is known asintake.) Then, learners convert some of this intake into second-language knowledge, which is stored in long-term memory. Finally, learners use this second-language knowledge to produce spoken output.[33]Cognitive theories attempt to codify both the nature of the mental representations of intake and language knowledge and the mental processes that underlie these stages.
In the early days of second-language acquisition research, interlanguage was seen as the basic representation of second-language knowledge; however, more recent research has taken several different approaches to characterizing the mental representation of language knowledge.[34] Some theories hypothesize that learner language is inherently variable,[35] and there is the functionalist perspective that sees the acquisition of language as intimately tied to the function it provides.[36] Some researchers make the distinction between implicit and explicit knowledge, and some between declarative and procedural language knowledge.[37] There have also been approaches that argue for a dual-mode system in which some language knowledge is stored as rules and other language knowledge as items.[38]
From the early days of the discipline, researchers have also acknowledged that social aspects play an important role.[39]There have been many different approaches to the sociolinguistic study of second-language acquisition.[40]Common to each of these approaches, however, is a rejection of language as a purely psychological phenomenon; instead, sociolinguistic research views the social context in which language is learned as essential for a proper understanding of the acquisition process.[41]
Ellis identifies three types of social structures that affect the acquisition of second languages: sociolinguistic settings, specific social factors, and situational factors.[42]Sociolinguistic setting refers to the role of the second language in society, such as whether it is spoken by a majority or a minority of the population, whether its use is widespread or restricted to a few functional roles, or whether the society is predominantly bilingual or monolingual.[43]Ellis also includes the distinction of whether the second language is learned in a natural or an educational setting.[44]Specific social factors that can affect second-language acquisition include age, gender, social class, and ethnic identity, with ethnic identity being the one that has received most research attention.[45]Situational factors are those that vary between each social interaction. For example, a learner may use more polite language when talking to someone of higher social status, but more informal language when talking with friends.[46]
A learner's sense of connection to their in-group, as well as to the community of the target language emphasizes the influence of the sociolinguistic setting, as well as social factors within the second-language acquisition process.Social Identity Theoryargues that an important factor for second language acquisition is the learner's perceived identity to the community of the language being learned, as well as how the community of the target language perceives the learner.[47]Whether or not a learner feels a sense of connection to the community or culture of the target language helps determine their social distance from the target culture. A smaller social distance is likely to encourage learners to acquire the second language, as their investment in the learning process is greater. Conversely, a greater social distance discourages attempts to acquire the target language. However, negative views not only come from the learner, but the community of the target language might feel greater social distance from the learner, limiting the learner's ability to learn the language.[47]Whether or not bilingualism is valued by the culture or community of the learner is an important indicator of the motivation to learn a language.[48]
There have been several models developed to explain social effects on language acquisition. Schumann's acculturation model proposes that learners' rate of development and ultimate level of language achievement is a function of the "social distance" and the "psychological distance" between learners and the second-language community. In Schumann's model, the social factors are most important, but the degree to which learners are comfortable with learning the second language also plays a role.[49] Another sociolinguistic model is Gardner's socio-educational model, which was designed to explain classroom language acquisition. Gardner's model focuses on the emotional aspects of SLA, arguing that positive motivation contributes to an individual's willingness to learn L2; furthermore, an individual's goal of learning an L2 is based on the idea that the individual has a desire to be part of a culture, in other words, part of a (the targeted language) mono-linguistic community. Factors such as integrativeness and attitudes towards the learning situation drive motivation. The outcome of positive motivation is not only linguistic but non-linguistic, such that the learner has met the desired goal. Although there are many critics of Gardner's model, many of these critics have nonetheless been influenced by the merits that his model holds.[50][51] The inter-group model proposes "ethnolinguistic vitality" as a key construct for second-language acquisition.[52] Language socialization is an approach with the premise that "linguistic and cultural knowledge are constructed through each other",[53] and saw increased attention after the year 2000.[54] Finally, Norton's theory of social identity is an attempt to codify the relationship between power, identity, and language acquisition.[55]
A unique approach to SLA is sociocultural theory. It was originally developed byLev Vygotskyand his followers.[56]Central to Vygotsky's theory is the concept of azone of proximal development(ZPD). The ZPD notion states that social interaction with more advanced target language users allows one to learn a language at a higher level than if they were to learn a language independently.[57]Sociocultural theory has a fundamentally different set of assumptions to approaches to second-language acquisition based on the computational model.[58]Furthermore, although it is closely affiliated with other social approaches, it is a theory of mind and not of general social explanation of language acquisition. According to Ellis, "It is important to recognize... that this paradigm, despite the label 'sociocultural' does not seek to explain how learners acquire the cultural values of the L2 but rather how knowledge of an L2 is internalized through experiences of a sociocultural nature."[58]
Linguistic approaches to explaining second-language acquisition spring from the wider study of linguistics. They differ from cognitive approaches and sociocultural approaches in that they consider linguistic knowledge to be unique and distinct from any other type of knowledge.[31][32]The linguistic research tradition in second-language acquisition has developed in relative isolation from the cognitive and sociocultural research traditions, and as of 2010 the influence from the wider field of linguistics was still strong.[30]Two main strands of research can be identified in the linguistic tradition:generative approachesinformed byuniversal grammar, and typological approaches.[59]
Typological universalsare principles that hold for all the world's languages. They are found empirically, by surveying different languages and deducing which aspects of them could be universal; these aspects are then checked against other languages to verify the findings. Theinterlanguagesof second-language learners have been shown to obey typological universals, and some researchers have suggested that typological universals may constrain interlanguage development.[60]
The theory of universal grammar was proposed byNoam Chomskyin the 1950s and has enjoyed considerable popularity in the field of linguistics. It focuses on describing thelinguistic competenceof an individual. He believed that children not only acquire language by learning descriptive rules of grammar; he claimed that childrencreativelyplay and form words as they learn language, creating meaning for the words, as opposed to the mechanism of memorizing language.[61]The "universals" in universal grammar differ from typological universals in that they are a mental construct derived by researchers, whereas typological universals are readily verifiable by data from world languages.[60]
Universal grammar theory can account for some of the observations of SLA research. For example, L2 users often display knowledge about their L2 that they have not been exposed to.[62]L2 users are often aware of ambiguous or ungrammatical L2 units that they have not learned from any external source, nor their pre-existing L1 knowledge. This unsourced knowledge suggests the existence of a universal grammar. Another piece of evidence that generative linguists tend to use is thepoverty of the stimulus, which states that children acquiring language lack sufficient data to fully acquire all facets of grammar in their language, causing a mismatch between input and output.[63]The fact that children are only exposed to positive evidence yet have intuition about which word strings are ungrammatical may also be indicative of universal grammar. However, L2 learners have access to negative evidence as they are explicitly taught about ungrammaticality through corrections or grammar teaching.[63]
Individual factors, such as language aptitude, age, strategy use, motivation, and personality, play a significant role in second-language acquisition. For example, the critical period hypothesis explores how age affects language learning ability, while motivation is often categorized into intrinsic and extrinsic types. Personality traits, such as introversion and extroversion, and the use of effective learning strategies can also influence language acquisition outcomes. For more detailed information, see theIndividual variation in second-language acquisitionarticle.
Second-language attrition refers to the loss of proficiency in a language that was previously acquired, often due to a lack of use or exposure.[47]Factors influencing attrition include the level of initial proficiency, age, social circumstances, and motivation.[64]A learner's L2 is not suddenly lost with disuse, but its communicative functions are slowly replaced by those of the L1.[64]
Similar to second-language acquisition, second-language attrition occurs in stages. However, according to the regression hypothesis, the stages of attrition occur in reverse order of acquisition. With acquisition, receptive skills develop first, and then productive skills, and with attrition, productive skills are lost first, and then receptive skills.[64]
As stated at the beginning of this article, second language acquisition (SLA) research is the scientific discipline devoted to studying that process. Consequently, research that evaluates the effectiveness of teaching methods is often not considered part of SLA research. Nevertheless, there have been attempts to apply SLA research findings to teaching methods, and this area is referred to as classroom second language acquisition or instructed second language acquisition(ISLA). In particular, this kind of research has a significant overlap withlanguage education, and it is mainly concerned with the effect that instruction has on the learner. Moreover, it also explores what teachers do, the classroom context, and the dynamics of classroom communication. Notably, it is both qualitative and quantitative research. However, there are several challenges faced by second-language learners during practical training, especially regarding training environments, aligning tasks with learning objectives, and cultural and economic influences.[65]
It is generally agreed that pedagogy restricted to teaching grammar rules and vocabulary lists does not give students the ability to use the L2 with accuracy and fluency. Rather, to become proficient in the second language, the learner must be given opportunities to use it for communicative purposes (cited in Ellis 1994).[66][67]
Numerous theories have been proposed not only to describe the phenomena of SLA but also to explain them by uncovering the underlying mechanisms. Despite differing perspectives, these research approaches share a common goal: contributing to the identification of conditions that facilitate effective language acquisition. Recognizing the contributions of each perspective and fostering interdisciplinary connections, researchers have increasingly sought to understand the complex process of second language acquisition from a broader perspective in recent years. These efforts go beyond the limitations of explaining SLA through a single theory, paving the way for a more comprehensive and multilayered understanding.
Major journals of the field include Second Language Research, Language Learning, Studies in Second Language Acquisition, Applied Linguistics, Applied Psycholinguistics, International Review of Applied Linguistics in Language Teaching, International Journal of Applied Linguistics, System, Journal of Second Language Studies, and Journal of the European Second Language Association.
|
https://en.wikipedia.org/wiki/Second-language_acquisition
|
Smigin is a conversation-based language learning platform available online and as a mobile application for iOS and Android. As of March 2016, Smigin has two products: Smigin,[1] a language-learning website, and Smigin Travel,[2] a mobile translation app. Smigin's language learning site offers 4 different destination languages across 3 source languages, with several languages in development. Smigin Travel, exclusively designed for the mobile interface, is available on iOS and Android in 11 languages with users in over 175 countries.
Smigin was founded by Irish entrepreneur Susan O'Brien as an alternative to traditional language learning products and methods that focus on rigid grammar structures rather than conversational language. After moving to Portugal and having to learn Portuguese as a complete beginner,[3] O'Brien – a University College Cork graduate who speaks six languages – decided to develop a language learning solution that puts the user in control of their learning and focuses on what users want to learn, with primary emphasis on conversational language.[4]
Smigin Travel, the company's first product, launched on theApp Storeon February 17, 2014.[5]Smigin Travel is a multilingual, multi-directional “phrasebuilder”[6]tool that enables travelers to speak a foreign language. Itsuser interfaceenables users to build and translate phrases across location-based sections such as Café, Restaurant, Hotel etc. The app also features phonetic spelling and audio recorded by native language speakers to help users pronounce their phrase.
With the support of Google in New York City, Smigin began to optimize Smigin Travel for Android in mid-2015 and in August 2015 released the Android version on the Google Play Store.[7] Smigin Travel is free to download, and users have the ability to unlock additional content through in-app purchases.[8] Initial support for the Smigin Travel app grew quickly and the company amassed users in 175 countries. The app is available in English, Spanish, French, Italian, Portuguese, Polish, Swedish and Haitian Creole.[9][10]
On September 29, 2015, Smigin launched a month-longKickstarter[11]campaign with a funding goal of $20,000 for its second product named after the company. On October 27, 2015, Smigin successfully passed its funding goal and on October 29, 2015, Smigin acquired upwards of $20,000 to invest in further product development for its second product.
In March 2016, the company released its eponymous second product, Smigin – a browser-based language learning solution for beginners with an emphasis on conversational skills. Smigin launched in Spanish, Italian, Portuguese and English as a Second Language. Additional languages will be added throughout 2016, including Asian languages.
Smigin is listed byDigital NYCas aMade in NYstartup.[12]
Smigin's patent-pending methodology[13]employs a three-step approach to speaking a foreign language:[14]
Curated content: All Smigin content has been curated for location-based or situational scenarios to reflect useful language for real-life application. Users pick sections to access relevant words and phrases they need to say at any given time. Built-in grammar: Smigin's "built-in grammar" uses the infinitive form of verbs so users can create a wide variety of sentences without conjugations. Real-life language: By gaining instant access to relevant words and phrases, Smigin empowers users to "break down language barriers" and become confident travelers by engaging in conversation. Users are encouraged to travel and apply their new skills to interactions with foreign locals.
Smigin's language courses[15]focus on conversation as the center of the user's learning. Each course is segmented into context-based sections such as Meet & Greet, Café, and Hotel. Most sections are based on situations in which language beginners may find themselves abroad, and are divided into three levels: Words, Phrases, and Chat.[16]Instead of forcing users into a fixed learning structure that involvesrote memorizationand grammar drills, Smigin allows users to freely navigate the platform. Users can also access reference sessions that provide additional vocabulary words, as well as detailed explanations of the language'sgrammar rules.
Chat allows users to simulate short conversations with a virtual "local" to prepare for real-life conversations with locals abroad. During Chat, users enhance their conversational skills by responding to questions and prompts. Chat's text-input interface demonstrates the variety of conversations a user can have in another language without an in-depth knowledge of verb conjugations and varied grammatical structures.
Users are encouraged to start a section by learning relevant words and phrases related to that section. Phrases use the Smigin Method of enabling the user to create simple sentences without worrying about grammar; to add visual context, all videos were shot on location in countries where the language is spoken (e.g. Spain, Italy, Portugal). Users can listen to native speaker audio, view the phonetic pronunciation of each word or phrase, and translate to and from their native language throughout these levels.
Smigin Travel is a mobile application for iOS and Android that enables users to build and translate phrases in multiple languages while abroad.[17]Smigin Travel allows users to create phrases and ask questions based on various situations in which they will find themselves on their travels.[18]
As of March 2016, the sections consisted of: Café, Bar, Restaurant, Hotel, Shopping, Emergency, Getting Around, Transport, Business, and Destinations.[19]Each section features sentence starters, pre-conjugated verbs, and sentence objects that together form phrases and questions based on the respective section.[20]
Users have the ability to hear their phrases spoken bynative speakers, read simplifiedphoneticspelling to learn how best to pronounce them using their native syllables, and save phrases for later use. Additionally, users have the ability to create their own personal phrasebook that suits the needs of their travel experiences.
As of March 2016, Smigin Travel amassed users in over 175 countries.[21]
|
https://en.wikipedia.org/wiki/Smigin
|
SuperMemo (from "Super Memory") is a learning method and software package developed by SuperMemo World and SuperMemo R&D with Piotr Woźniak in Poland from 1985 to the present.[2] It is based on research into long-term memory, and is a practical application of the spaced repetition learning method that has been proposed for efficient instruction by a number of psychologists as early as the 1930s.[3]
The method is available as a computer program for Windows, Windows CE, Windows Mobile (Pocket PC), Palm OS (PalmPilot), etc. Course software by the same company (SuperMemo World) can also be used in a web browser or even without a computer.[4]
The desktop version of SuperMemo started as flashcard software (SuperMemo 1.0, 1987).[5] Since SuperMemo 10 (2000), it has supported incremental reading.[6]
The SuperMemo program stores a database of questions and answers constructed by the user. When reviewing information saved in the database, the program uses the SuperMemo algorithm to decide which questions to show the user. The user then answers the question and rates the relative ease of recall with a grade from 0 to 5 (0 being the hardest, 5 the easiest), and this rating is used to calculate how soon the question should be shown again. While the exact algorithm varies with the version of SuperMemo, in general, items that are harder to remember show up more frequently.[2]
Besides simple text questions and answers, the latest version of SuperMemo supports images, video, and HTML questions and answers.[7]
Since 2000,[6] SuperMemo has had a unique set of features that distinguishes it from other spaced repetition programs, called incremental reading (IR or "increading"[8]). Whereas earlier versions were built around users entering information they wanted to learn, with IR users can import text that they want to learn from. The user reads the text inside SuperMemo, and tools are provided to bookmark one's location in the text and automatically schedule it to be revisited later, to extract valuable information, and to turn extracts into questions for the user to learn. By automating the entire process of reading and extracting knowledge to be remembered in the same program, time is saved from having to manually prepare information, and insights into the nature of learning can be used to make the entire process more natural for the user. Furthermore, since the process of extracting knowledge can often lead to the extraction of more information than can actually be feasibly remembered, a priority system is implemented that allows the user to ensure that the most important information is remembered when they cannot review all information in the system.[9]
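The prioritization idea can be illustrated with a short, hypothetical sketch (the field names and the "lower value = more important" convention are illustrative assumptions, not SuperMemo's actual data model): when more extracts are due than the user has time for, the day's review budget is spent on the highest-priority items first.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Extract:
    priority: float                   # hypothetical convention: lower value = more important
    text: str = field(compare=False)  # the extracted passage; excluded from ordering

def pick_reviews(due_extracts: list[Extract], budget: int) -> list[Extract]:
    """Spend a limited daily review budget on the most important due extracts."""
    return heapq.nsmallest(budget, due_extracts)

# Example: only two of three due extracts fit into today's review budget.
queue = [Extract(0.8, "minor detail"), Extract(0.1, "core definition"), Extract(0.4, "useful example")]
print([e.text for e in pick_reviews(queue, budget=2)])  # ['core definition', 'useful example']
```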
The specific algorithms SuperMemo uses have been published, and re-implemented in other programs.
Different algorithms have been used; SM-0 refers to the original (non-computer-based) algorithm, while SM-2 refers to the original computer-based algorithm released in 1987 (used in SuperMemo versions 1.0 through 3.0, referred to as SM-2 because SuperMemo version 2 was the most popular of these).[10][11]Subsequent versions of the software have claimed to further optimize the algorithm.
Piotr Woźniak, the developer of the SuperMemo algorithms, released the description of SM-5 in a paper titled Optimization of repetition spacing in the practice of learning. Little detail has been published about the algorithms released after that.
In 1995, SM-8, which capitalized on data collected by users of SuperMemo 6 and SuperMemo 7 and added a number of improvements that strengthened the theoretical validity of the function of optimum intervals and made it possible to accelerate its adaptation, was introduced in SuperMemo 8.[12]
In 2002, SM-11, the first SuperMemo algorithm that was resistant to interference from the delay or advancement of repetitions was introduced in SuperMemo 11 (aka SuperMemo 2002). In 2005, SM-11 was tweaked to introduce boundaries on A and B parameters computed from the Grade vs. Forgetting Index data.[12]
In 2011, SM-15, which notably eliminated two weaknesses of SM-11 that would show up in heavily overloaded collections with very large item delays, was introduced in Supermemo 15.[12]
In 2016, SM-17, the first version of the algorithm to incorporate the two component model of memory, was introduced in SuperMemo 17.[13]
The latest version of the SuperMemo algorithm is SM-18, released in 2019.[14]
The first computer-based SuperMemo algorithm (SM-2)[11] tracks three properties for each card being studied: the repetition number (how many times in a row the card has been successfully recalled), the easiness factor (EF, a multiplier reflecting how easy the card is, which starts at 2.5 and never drops below 1.3), and the inter-repetition interval I (the number of days until the next review).
Every time the user starts a review session, SuperMemo provides the user with the cards whose last review occurred at least I days ago. For each review, the user tries to recall the information and (after being shown the correct answer) specifies a grade q (from 0 to 5) indicating a self-evaluation of the quality of their response, with each grade having the following meaning: 5 – perfect response; 4 – correct response after a hesitation; 3 – correct response recalled with serious difficulty; 2 – incorrect response, but the correct one seemed easy to recall; 1 – incorrect response, but the correct one was remembered; 0 – complete blackout.
The algorithm[15] is then applied to update the three variables associated with the card; a minimal sketch of the commonly published formulation is given below.
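A minimal Python sketch of the SM-2 update as it is commonly published (the Card fields and function name are illustrative; SuperMemo's own implementation details differ across versions):

```python
import datetime
from dataclasses import dataclass, field

@dataclass
class Card:
    question: str
    answer: str
    easiness: float = 2.5       # E-Factor; clamped so it never falls below 1.3
    repetitions: int = 0        # number of consecutive successful reviews
    interval: int = 1           # current inter-repetition interval, in days
    due: datetime.date = field(default_factory=datetime.date.today)

def review(card: Card, q: int, today: datetime.date) -> None:
    """Apply one SM-2 style update after the user grades recall quality q (0-5)."""
    if q >= 3:
        # Successful recall: grow the interval (1 day, then 6 days, then interval * EF).
        if card.repetitions == 0:
            card.interval = 1
        elif card.repetitions == 1:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.easiness)
        card.repetitions += 1
    else:
        # Failed recall: restart the repetition sequence with a short interval.
        card.repetitions = 0
        card.interval = 1
    # Adjust the easiness factor from the grade and clamp it at the 1.3 floor.
    card.easiness = max(1.3, card.easiness + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02))
    card.due = today + datetime.timedelta(days=card.interval)
    # Cards graded below 4 are also queued for same-day re-review until they
    # score at least 4 (handled by the session loop, not shown here).

# Example: a new card answered correctly but with hesitation (q = 4).
card = Card("la mesa", "the table")
review(card, q=4, today=datetime.date(2024, 1, 1))
print(card.interval, card.repetitions, card.due)  # 1 1 2024-01-02
```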
After all scheduled reviews are complete, SuperMemo asks the user to re-review any cards they marked with a grade less than 4 repeatedly until they give a grade ≥ 4.
Some of the algorithms have been re-implemented in other, often free programs such as Anki, Mnemosyne, and Emacs Org-mode's Org-drill. See the full list of flashcard software.
The SM-2 algorithm has proven most popular in other applications, and is used (in modified form) in Anki and Mnemosyne, among others. Org-drill implements SM-5 by default, and optionally other algorithms such as SM-2 and a simplified SM-8.
|
https://en.wikipedia.org/wiki/SuperMemo
|
Tandem language learning is an approach to language acquisition that involves reciprocal language exchange between tandem partners. In this method, each learner ideally serves as a native speaker of the language the other person intends to learn. Tandem language learning deviates from traditional pedagogical practices by eliminating the teacher-student model. Numerous language schools worldwide, including those affiliated with TANDEM International,[1] as well as several universities, incorporate this approach into their language programs.
Tandem language learning encompasses various methods of instruction. The most prevalent form involves face-to-face meetings between participants (referred to as face-to-face tandem). With the advent of communication technology in the 1990s, etandem (also known as distance tandem) emerged, facilitating language practice through email correspondence and written communication. Tele-collaboration emphasizes cultural integration and intercultural understanding as integral components of language learning. Tandem exchanges are characterized by reciprocal autonomy,[2]with participants engaging in mutual language learning. Time is equally divided to ensure a fair distribution of language exchange.[2]For instance, a Portuguese speaker and a German speaker may converse in German for half an hour and then switch to Portuguese for the remaining half an hour. Through partnerships with native speakers and exposure to social and cultural experiences, participants become fully immersed in the target language and culture. Learning is supported through various means, such as worksheets, textbooks, or informal conversations. The tandem method serves different purposes, including self-directed tandem partnerships (involving two individuals supported by counselors) and binational tandem courses (designed for groups and facilitated by moderators). The prerequisite for participating in self-directed Tandem is a lower intermediate level of language proficiency (lower B1 threshold). The can-do statements outlined in the Common European Framework of Reference for Languages (CEFR)[3]provide a clear description of language ability at the B1 threshold[4]in several European languages.
The concept of "language learning by exchange" or the "tandem approach" encompasses various teaching systems for exchange students abroad, including partner learning, peer teaching, tutoring models, and "Zweierschaften" (Steinig) or 'one-on-one discipleship'.[5]
Here are some key points:[6][7]
In the early 19th century,Joseph LancasterandAndrew Bellintroduced the "mutual system" in England, which involved students assisting each other in school, complementing the teacher's role. Peter Petersen, a German educationalist, developed a similar approach in the "Jenaplan schools," and tutoring models inspired by this concept emerged in the USA from the 1960s onwards.
The "tandem" concept, where two individuals learn the same language together, first appeared in 1971 in connection with Wambach's "audio-visual method." It was later applied to binational German-French youth meetings.[8]
Klaus Lieb-Harkort and Nükhet Cimilli introduced this model in their work with immigrants in the German-Turkish area of Munich. Similar courses were subsequently offered in Bremen, Frankfurt, and Zürich.
In 1979, Jürgen Wolff developed the tandem learning partner mediation for Spanish and German. This course program, along with one developed by Wolff and colleagues in Madrid, formed the foundation of the TANDEM network, which later became the TANDEM schools network.[9]
Since 1983, the TANDEM model has been adopted as an alternative language learning method, with elements of language courses abroad, youth exchange programs, cultural tours, class correspondence, and other cross-border activities replicated in selective schools across Europe.
The TANDEM network collaborates with various educational institutions, including the E-Tandem Network,[10]which was founded in 1992 and later renamed the International E-Mail Tandem Network in 1993.
TANDEM Fundazioa,[11]headquartered in Donostia/San Sebastian, Spain, was established in 1994 to promote scientific cooperation, education, and advanced training.
In 2016, Tripod Technology GmbH obtained a license from TANDEM Fundazioa to create theTandem app.[12]
The majority of schools affiliated with the TANDEM Network formed the association "TANDEM International",[1] with its headquarters in Bremen, Germany. Since March 2014, TANDEM International has owned the "TANDEM" brand.
Initially, there was a significant focus on the effectiveness of tandem language learning compared to traditional teaching methods. To investigate this, a study was conducted in 1983 at the Goethe-Institut in Madrid. Tandem pairs, a tandem course, and teacher-guided phases were interconnected, and the linguistic progress of the participants was compared to that of a control group who were also preparing for the "Zertifikat DaF". The results indicated that the tandem participants demonstrated better listening comprehension and speaking skills, although they were less successful in reading and writing. Overall, their performance in the certificate examination was on par with the control group. Another benefit observed was the mutual correction of mistakes, which was facilitated by increased exposure to the language.[citation needed]
Tandem language learning encompasses not only language comprehension and learning but also cultural understanding and knowledge. Consequently, when analyzing the competence component, it is essential to consider this aspect as well. Tandem learning facilitates a change in perspective by allowing participants to compare their own viewpoints with those of others. Through its natural exposure to the native speaker's culture, Tandem provides a relaxed and inviting environment for engagement. The autonomous nature of language exchange enables participants to experience different worldviews, fostering attitudes of respect, openness, curiosity, and discovery. This aspect is particularly beneficial in translator training. Additionally, native speakers also report an increased awareness of their own language throughout the tandem process, making it a valuable confidence booster in learning contexts.[citation needed]
The Cormier method, developed by Helene Cormier, a language teacher at theClub d'échange linguistique de Montréal(CELM), is an instructional approach that promotes in-tandem learning among small groups of learners with different native languages.[13]The method focuses on engaging participants in conversations aimed at strengthening listening, comprehension, vocabulary, and pronunciation skills.
During the language exchange, participants have the opportunity to interact with native speakers through text, voice, and video chat. Each session typically lasts around one hour, with participants speaking in one language for thirty minutes and then switching to the other language for the remaining thirty minutes. This experience allows learners to gain insights into their peers' cultures while using the target language appropriately.
To conduct effective sessions using the Cormier method, the following recommendations should be considered:[14]
Advantages of the Cormier method include the opportunity for focused practice in small groups, pre-designed lesson plans and engaging activities to enhance motivation, real-time communication with native speakers, and the ability to access sessions from anywhere with an internet connection. A virtual timer helps manage and allocate practice time for each participant.
However, there are some disadvantages to consider. The method is more suitable for intermediate and advanced learners, as native speakers without teaching backgrounds may struggle to assist beginners. Additionally, participants from different educational backgrounds and levels of knowledge may encounter challenges in communication. Accessibility can also be an issue in certain countries.
While the Cormier method is beneficial for practice, it should not be relied upon as the sole source of language learning. Instead, it should be seen as a supplementary tool to help learners improve their language skills.
The Cormier method has demonstrated success, particularly when utilizing tools like Skype. Implementing this method is relatively straightforward, although the discussed drawbacks should be taken into account. As new technologies continue to emerge, different and improved approaches to tandem learning may further enhance its effectiveness. Alternative digital tools such as Google Hangouts, Viber, ooVoo, WeChat, and others can broaden access and provide additional opportunities for e-tandem learning and telecollaboration, leading to continued growth and advancement in language learning.
Tandem language learning is a concept that offers potential linguistic and cultural advantages. It allows students of different nationalities to learn from each other without any financial cost. However, there are several factors that can hinder its effectiveness.[15]
One reason is the limited availability of foreign students interested in studying minority languages, such as Polish or Maltese. Even if speakers of minority languages are interested in learning more widely spoken languages like English or German, they may struggle to find tandem partners who share their interest. Minority languages often have limited demand in the global market forforeign languages.[15]
Another challenge is the expertise of participants, which can be influenced by two factors. First, native speakers may lack sufficient knowledge to effectively teach their own language. Second, students themselves may face difficulties in designing meaningful learning experiences due to a lack of methodological and pedagogical skills.[15]Error correction during tandem programs[16]can also disrupt the flow of conversation and create anxieties for novice learners, impacting their fluency and confidence in the foreign language.[17]
The design of tasks and integration of online language interaction within the learning process and curriculum can significantly impact the effectiveness of tandem language learning. Poorly designed tasks and a lack of pedagogical leadership can diminish the value of the approach for both students and teachers.[18]
Technology also poses challenges. Certain conferencing technologies, likeSkype, may result in miscommunication due to non-alignment of visual input and output. Students may appear socially absent or interrupt the usual process of indicating social presence, affecting communication. Misusing technology can lead to exclusion from the conversation.
Cultural issues can arise during tandem programs when comparing cultures. Students may express subjective opinions and reinforce intercultural stereotypes, creating a hostile discourse and disrupting the flow of conversations. Without teacher interventions, tele-tandem interactions may become shallow performances that rely on preconceived representations of oneself and others.[19]Preconceptions about the other learner's culture can also impact proactive attitudes and participation levels in the exchange.[18]
Addressing these challenges requires careful consideration and pedagogical support to ensure that tandem language learning maximizes its potential benefits while minimizing potential drawbacks.
|
https://en.wikipedia.org/wiki/Tandem_language_learning
|
Virtual exchange (also referred to as online intercultural exchange, among other names) is an instructional approach or practice for language learning. It broadly refers to the "notion of 'connecting' language learners in pedagogically structured interaction and collaboration"[1] through computer-mediated communication for the purpose of improving their language skills, intercultural communicative competence,[2] and digital literacies.[3] Although it proliferated with the advance of the internet and Web 2.0 technologies in the 1990s, its roots can be traced to learning networks pioneered by Célestin Freinet in the 1920s[4] and, according to Dooly,[5] even earlier in Jardine's[who?] work with collaborative writing at the University of Glasgow from the end of the 17th century to the early 18th century.
Virtual exchange is recognized as a field ofcomputer-assisted language learningas it relates to the use of technology in language learning. Outside the field of language education, this type of pedagogic practice is being used to internationalize the curriculum and offer students the possibility to engage with peers in other parts of the world in collaborative online projects.[6]
Virtual exchange is based on sociocultural views of learning inspired byVygotskiantheories of learning as a social activity.[7]
Different names have been used to describe the practice, ranging from terms that usually describe a particular practice within the area, such as dual language virtual exchange, which is sometimes called teletandem, eTandem, or tandem language learning, to more generic terms such as global virtual connections, online interaction and exchange, online intercultural exchange, online exchange, virtual exchange, virtual connections, global virtual teams, globally-networked learning environments, collaborative online international learning (COIL), Internet-mediated intercultural foreign language education, globally networked learning, telecollaboration, and telecollaboration 2.0.[8] Currently, virtual exchange appears to be the most prominent umbrella term, one that can be used for a variety of models and practices.[9][8]
Likewise, depending on the aims and settings a variety of definitions have been applied to the practice. One of the most widely referenced definitions comes from Julie Belz,[who?]who defines it as a partnership in which "internationally-dispersed learners in parallel language classes use Internet communication tools such as e-mail, synchronous chat, threaded discussion, and MOOs (as well as other forms of electronically mediated communication), in order to support social interaction, dialogue, debate, and intercultural exchange."[10]As the practice is most common in language learning contexts, narrower definitions appeared as well, such as an "Internet-based intercultural exchange between people of different cultural/national backgrounds, set up in an institutional context with the aim of developing both language skills and intercultural communicative competence... through structured tasks."[11]
Conversely, broader definitions that go beyond educational contexts emerged as well, such as "the process of communicating and working together with other people or groups from different locations through online or digital communication tools (e.g., computers, tablets, cellphones) to co-produce a desired work output. Telecollaboration can be carried out in a variety of settings (classroom, home, workplace, laboratory) and can be synchronous or asynchronous."[5]
The origins of virtual exchange have been linked to the work of iEARN and the New York/Moscow Schools Telecommunications Project[12] (NYS-MSTP), which was launched in 1988 by Peter Copen and the Copen Family Fund. This project stemmed from a perceived need to connect youth from the two countries during a time marked by tensions between the United States and the U.S.S.R. that had developed during the Cold War. With the institutional support of the Academy of Sciences in Moscow and the New York State Board of Education, a pilot programme between 12 schools in each nation was established. Students worked in both English and Russian on projects based on their curricula, which had been designed by participating teachers. The program expanded in the early 1990s to include China, Israel, Australia, Spain, Canada, Argentina, and the Netherlands. The organization iEARN emerged in the early 1990s and was officially established in 1994. One of the earliest projects, which is still running, is Margaret Riel's Learning Circles.[13] The organization has since expanded and is currently active in over 100 countries, promoting many different projects, often in collaboration with other organizations such as The My Hero Project. This form of education, which aims to integrate awareness of international communities into the curriculum, is sometimes referred to as global education.
In foreign language education the practice of virtually connecting learners is also commonly called Virtual Exchange but has sometimes been known astelecollaborationand is a sub-field ofcomputer-assisted language learning(CALL). It was first promoted as a form of network-based language learning in the 1990s through the work of educators such asMark Warschauer[14][15]and Rick Kern. One of the first uses of the word telecollaboration was in Warschauer's 1996 volume[16]that compiled works oncomputer-mediated communication(CMC) following the Symposium on Local and Global Electronic Networking in Foreign Language Learning and Research held at theUniversity of Hawaiʻiin 1995. The symposium brought together educators concerned with these issues from university and secondary education throughout the world. Telecollaborative practices at the time involved the use of e-mail and other Web 1.0 capabilities.[17]
Several different models of virtual exchange / telecollaboration have since been developed,[18]such as the Cultura model, developed in 1997 atMITin the United States,[19]and the dual language virtual exchange / eTandem model.[20][21]The Cultura project[22]was originally developed as a bilingual project for French and English, but has since been developed in several different languages.
In 2003 the organization Soliya was founded by Lucas Welch and Liza Chambers in the aftermath of the September 11 attacks. Soliya's Connect Program has become an important model of online facilitated dialogue and is based on principles of intergroup dialogue and peacebuilding. In this model of virtual exchange, students from universities across the globe are placed in diverse groups of 10–12 people, and they meet regularly for 2-hour sessions of online dialogue over a period of 8 weeks. Each group is supported by one or two trained facilitators.
In 2004 the International Virtual Exchange Project (IVEProject) was begun by Eric Hagley. In 2015 the project was sponsored by a Japanese government kaken[23] grant, which allowed it to expand greatly. In the project, students studying foreign languages under the guidance of their teachers interact asynchronously and synchronously using the language they are studying. The most common exchanges are conducted in English as a lingua franca. As of August 2024, over 60,000 students from 29 countries have participated.
In 2005 the European Commission established theeTwinningprogramme for schools. The programme promotes projects between schools in Europe which entail both synchronous and asynchronous collaborations between classes, offering a safe platform for staff (teachers, head teachers, librarians, etc.) working in a school in one of the European countries involved. Teachers who register in eTwinning are checked by the National Support Organisation (NSO) and are validated in order to use all eTwinning features such as TwinSpace and Project Diary, providing a safe andGDPRcompliant environment for students and teachers' interaction.
As of 2021, there are 122,134 active projects with 937,761 teachers in 217,830 schools in eTwinning countries as well as eTwinning Plus countries (Armenia, Azerbaijan, Georgia, Jordan, Lebanon, Republic of Moldova and Ukraine). A key element of eTwinning is collaboration among teachers, students, schools, parents, and local authorities. In eTwinning, teachers organize activities that enable young learners to engage in communicative interaction with peers from other linguistic and cultural backgrounds in order to practise and further develop their intercultural communicative competence in their respective foreign (target) language. Students have an active role in co-creating the learning experience by interacting, investigating and making decisions whilst respecting each other, thus learning 21st-century skills. The use of TwinSpace facilitates a multimodal approach to collaboration which integrates tools to ensure communicative and pedagogic diversity and richness. eTwinning has established a strong community of teachers and organizes training for them.[24]
In 2006 the SUNY Center for Collaborative Online International Learning (COIL) was established at SUNY's Purchase College.[25] COIL developed from the work of faculty members who used technology to bring international students into their classrooms. COIL's Founding Director was Jon Rubin, a film and new media professor at Purchase College. The COIL model is increasingly being recognized as a way for universities to internationalize their curricula.[26][27] In 2010 COIL joined the new SUNY Global Center in New York City and continued to expand its global network.
In 2011 the Virtual Exchange Coalition[28] was established in the United States to further the field of virtual exchange, bringing together important virtual exchange providers.[29]
The 1st International Conference on Telecollaboration in University Foreign Language Education was held at the University of León in February 2014.[30] It provided a broad overview of linguistic and intercultural telecollaboration and generated interest in how telecollaboration can contribute to general educational goals and digital literacies in higher education.[30]
In 2016 members of the INTENT consortium working in different disciplines in higher education around the globe launched the UNICollaboration platform at the Second Conference on Telecollaboration in Higher Education at Trinity College, Dublin. The aim was to support university educators and mobility coordinators to find partner classes, and to organise and run online intercultural exchanges for their students. This platform was one of the outputs of an EU-funded project and has over 1000 registered educators.
In 2016 theEuropean Commissionerfor Education, Culture, Youth and SportTibor Navracsicsannounced a futureErasmus+Virtual Exchange initiative. In March 2018 the Erasmus+ Virtual Exchange pilot project was officially launched by CommissionerNavracsicsand it targeted young people (aged 18–30) in EU andSouthern Mediterraneancountries. In the initial year of EVE, 7,450 participants were involved in virtual exchange[31]through different activities, each with several subprogrammes. An impact report[31]was published in 2018 evaluating the Erasmus+ Virtual Exchange project activities which ran from 1 January 2018 to 31 December 2018, and the effectiveness of the different models of Virtual Exchange in meeting the objectives set by theEuropean Commission(EC). The initiative is hosted on theEuropean Youth Portal. Different models of virtual exchange are promoted on the platform as well as training for educators to develop their own virtual exchange projects and training for young people to become Erasmus+ Virtual Exchange facilitators.
Several VE projects under Erasmus+ Key Action 3 (Support for policy reform, Priority 5, EACEA 41/2016) have since focused on telecollaboration and virtual exchange practice and research. An example is the EVOLVE project (Evidence-Validated Online Learning through Virtual Exchange), which promotes virtual exchange as an innovative form of collaborative international learning across disciplines in higher education (HE) institutions in Europe and beyond. The project investigated the impact of virtual exchange on teachers' pedagogical competences and pedagogical approach in HE from 1 January 2018 to 31 December 2020, and it was coordinated by the University of Groningen, the Netherlands.[32][33][34]
In 2018, several higher education institutions active in the field of virtual exchange formed an international virtual exchange coalition, which started organizing international virtual exchange conferences (IVEC). The first such conference was scheduled for October 2019 in Tacoma, WA, US. This inaugural IVEC 2019 conference, entitled "Advancing the field of online international learning", was co-organized by the SUNY COIL Center, DePaul University, Drexel University, East Carolina University, University of Washington Bothell, University of Washington Tacoma, and UNIcollaboration.
Guth and Helm (2010)[35] built on the pedagogy of telecollaboration by expanding its traditional practices through the incorporation of Web 2.0 tools in online collaborative projects. This enriched practice became widely known as telecollaboration 2.0.[35][36] Although telecollaboration 2.0 represents a new phase,[37] it serves nearly the same goals as telecollaboration. A distinctive feature of telecollaboration 2.0, however, lies in prioritizing the development and mastery of new online literacies.[35] Although telecollaboration and telecollaboration 2.0 are used interchangeably, the latter differs slightly in affording "a complex context for language education as it involves the simultaneous use and development" of intercultural competencies,[38] internationalizing classrooms and promoting authentic intercultural communication[38] among partnering schools and students.[35]
There are several different 'models' of Virtual Exchange / telecollaboration which have been extensively described in the literature.[21]The first models to be developed were based on the partnering of foreign language students with "native speakers" of the target language, usually by organizing exchanges between two classes of foreign language students studying one another's languages. The most well established models are the lingua franca virtual exchange, dual language virtual exchange / eTandem, the Cultura, andeTwinningmodels.
Lingua franca models have developed due to the increased use of particular languages in international business, science and politics. The most common are the English as Lingua Franca Virtual Exchange (ELFVE) and the Spanish as Lingua Franca Virtual Exchange (SLFVE).[39][40] This type of virtual exchange is now widely adopted by teachers who want their students to participate in international communities but know that their students, who often do not have the socio-economic opportunities to do so, have little chance of physically traveling outside their regions. Teachers create tasks and opportunities for their students to use the language they are studying to interact with and learn from peers in other countries. Though not included in the table below, the strengths of the LFVE model are that students are in language partnerships without power imbalances, are able to interact with peers from multiple different cultures, see the language they are studying from the perspective of others who are studying it, and have more opportunities to use the language they are studying in real-world situations. Some believe the weakness of this model is that students see peers who are also studying the language as not being "ideal" and that there is the possibility of learning mistaken language use.
Dual language virtual exchange (DLVE) / eTandem, which developed from the face to face tandem learning approach, has been widely adopted by individual learners who seek partners on the many available educational websites which offer to help find partners and suggest activities for tandem partners to engage in. However, the DLVE / eTandem model has also been used for class-to-class telecollaboration projects where teachers establish specific objectives, tasks, and topics for discussion.[21]Theteletandemmodel[41]is based on DLVE and was developed in Brazil, but focuses on oral communication throughVOIPtools such as Skype and Google Hangouts. Until recent years, however, virtual exchange has had to use asynchronous communication tools.
The Cultura project was developed by teachers of French as a foreign language atMITin the late 1990s with the aim of making culture the focus of their foreign language class.[19]This model takes its inspiration from the words of the Russian philosopherMikhail Bakhtin: "It is only in the eyes of another culture that foreign culture reveals itself fully and profoundly ... A meaning only reveals its depths once it has encountered and come into contact with another foreign meaning" (as cited in Furstenberg, Levet, English, & Maillet, 2001, p. 58). Cultura is based on the notion and process of cultural comparison and entails students analysing cultural products in class with their teachers and interacting with students of the target languages and cultures through which they develop a deeper understanding of each other's culture, attitudes, representations, values, and frames of reference.[42]
The eTwinning project, which is essentially a network of schools and educators within the European Union and part of Erasmus+, contrasts with its earlier counterparts in not setting specific guidelines regarding language use, themes or structure.[43] This model serves as a broad platform for schools within the EU to exchange information and share materials online, and it provides a virtual space for countless pedagogical opportunities where teachers and students collectively learn, communicate and collaborate using a foreign language.[43] Quintessentially, eTwinning has the following four objectives:
eTwinning has thus proven to be a strong model for telecollaboration in recent years, since it enables the authentic use of foreign language among virtual partners, i.e. teachers and students. Not surprisingly, eTwinning projects have become increasingly recognized at various educational institutions across the continent. Each of the telecollaborative models discussed above has its strengths and weaknesses:
Virtual exchange is a type of education program that uses technology to allow geographically separated people to interact and communicate. This type of activity is most often situated in educational programs (but is also found in some youth organizations) in order to increase mutual understanding, global citizenship, digital literacies, and language learning. Models of virtual exchange are also known as telecollaboration, online intercultural exchange, globally networked teaching and learning, and collaborative online international learning (COIL). Non-profit organizations such as Soliya (founded by Lucas Welch) and the Sharing Perspectives Foundation have designed and implemented virtual exchange programs in partnership with universities and youth organizations.
In 2017 the European Commission celebrated 30 years of Erasmus mobility[47] and declared Erasmus+ its most successful programme in terms of European integration and international outreach. In 2018, the Erasmus+ Virtual Exchange (EVE)[48] project was launched: a pilot project within the Erasmus+ programme, with the aim of providing technology-led intercultural learning experiences for young people aged 18–30 in youth organisations and universities in Europe and Southern Mediterranean countries.
Educational institutions such asState University of New York's COIL Center andDePaul Universityuse virtual exchange in higher education curricula to connect young people globally with a primary mission to help them grow in their understanding of each other's contexts (society, government, education, religion, environment, gender issues, etc.).
The complexities of the objectives of virtual exchange / telecollaboration ("telecollaborative tasks can and should integrate the development of language,intercultural competence, andonline literacies"[49]) can generate a series of challenges for educators and learners. O'Dowd and Ritter[50]categorized potential reasons for failed communication in telecollaborative projects, sub-dividing them into four levels which, as the researchers indicate, can also overlap and interrelate:
O'Dowd and Ritter[50]focus initially on the individual level of possible obstacles to full functionality in virtual exchange projects, specifically thepsychobiographicaland educational backgrounds of the virtual exchange partners as potential sources for dysfunctional communications, and in particular, on the following two primary aspects:
The concept ofintercultural communicative competence(ICC) was established by Byram[51]who stated that there are five dimensions (or '5 savoirs') that make an individual interculturally competent: a combination of skills of interpreting, relating, discovery and interaction, of attitudes, knowledge and critical awareness. Learners who embark on a virtual exchange project with immature intercultural communicative competences may struggle to carry out the tasks usefully.[citation needed]
Dissonance in terms of motivation, commitment levels and expectations is also a potential source of tension for learning partners. For example, long response times can be interpreted as a lack of interest, or short responses as unfriendliness.[52]
Solid teacher partnerships are essential to the success of virtual exchange and ideally should be constructed before the students embark on the project. According to O'Dowd and Ritter,[50]virtual exchange can be viewed as "a form of virtual team teaching which demands high levels of communication and cooperation with a partner whom they may not have met face to face". Furthermore, since virtual exchange has been devised as a vehicle both for linguistic and intercultural communication, educators as much as students must learn to be 'intercultural speakers' (Byram)[51]and avoid culturally inappropriate behaviors, typecasting, culture clashes and misunderstandings.
Teachers will be aware of the curricular needs of their own institution, however these are unlikely to match exactly the requirements of their partner institute. The themes and sequencing of the tasks must, therefore, be the result of a compromise which satisfies the curricular needs of both sides. Reaching compromises necessarily implies that the partners be willing to invest time and energy in the demands of planning, and that they are sensitive to the needs of others.[50]
Successful pair and group formation is crucial to dual language virtual exchange and, to a lesser extent, to lingua franca virtual exchange. Factors such as age, gender or foreign language proficiency can impact projects substantially, leading to the difficult choice between leaving pairings and groupings to chance, or assigning partners according to a rationale, however challenging it may be to foresee compatibilities and incompatibilities.[50]
In virtual exchange projects, most of the attention tends to be focused on the online relationships, with the consequent risk of neglecting the local group. The local group is the context within which communication, interaction, negotiation and, thus, a large part of thelearning processtake place. Consequently, these relationships also require teacher guidance and monitoring.[50]
A comprehensive preparatory phase is an essential element in effective virtual exchange / telecollaborative projects. If teachers can forewarn learners of issues which may arise, they will be better equipped to deal with them and to protect the quality of the exchange. Potentially problematic areas include technical problems, a lack of information about one's partners and their environments, as well as partners' expectations not matching.[50]
Both the types of available technological tools and access to them can impact the relationship between partners. More sophisticated technological tools on one side can make less well-equipped virtual exchange partners feel at a disadvantage. Moreover, restrictions in accessibility can limit opportunities for partners to interact, with repercussions which can include the risk of giving the false impression of disinterest when a learner with limited technological access is less responsive than a partner who has unlimited access.[50]
O'Dowd and Ritter[50]include in their list of socio-institutional challenges the organization of the learners' general course of studies, and refer to Belz and Müller-Hartmann's[53]identification of four key areas which can influence the outcome of virtual exchanges / telecollaborations:
These differences can greatly affect the outcome of a project, as they can generate differing expectations regarding the volume of work, the meeting of deadlines, and so forth. O'Dowd and Ritter[50] also identify the pairing of students whose main focuses of academic interest differ as a possible source of dysfunction, in addition to the impact of clashes between institutional policies and philosophies regulating all aspects of the learning and teaching processes.
Insociolinguistics, the concept ofprestigerefers to the regard accorded certain languages or forms of the same language, such asdialects. Since virtual exchange involvesintercultural communicative competencesas much as purely linguistic skills, O'Dowd and Ritter[50]remind us that virtual exchange / telecollaborative interactions can be negatively affected by prestige-based attitudes both to language and culture, which in turn can lead to therankingof one language and culture over the other, with repercussions on the virtual exchange partnership.
For dual language virtual exchange, a related problem is how native speakers of English tend to be presented as "language experts", while non-native speakers are "learners" who need exposure to native English. This approach assumes that native English will facilitate mutual understanding in most types of communicative situations. However, the use of idiomaticBritish EnglishorAmerican Englishmight cause several comprehension problems to non-native speakers who typically useEnglish as a lingua franca(ELF), a different function of English.
While non-native speakers will try to accommodate native speakers, native speakers should also try to understand how ELF works and accommodate non-native speakers.[54]Native speakers might be experts in idiomatic English (American, British,Australian, etc.), but non-native speakers are also experts when it comes to using ELF. The two groups of speakers can certainly learn from one another.
When two groups of students participate in dual language virtual exchange projects, it is wise to avoid positioning native speakers as authoritative language experts whose main role is to coach or tutor non-native speakers.
At this level, cultural differences relative to communicative behaviors, such as attitudes tosmall talk, can cause misunderstanding and impact virtual exchanges. According to O'Dowd and Ritter[50]these interactional divergences can occur within the following communicative domains:
Virtual exchange has evolved and diversified over time, reflecting not only emerging pedagogies and technologies but also the changing globalized world. It is becoming recognized as a sustainable approach[56] to global citizenship education and a form of 'internationalization at home'.[57]
A considerable amount of research points to the benefits of virtual exchange or telecollaboration partnering. Not only do these partnerships improve linguistic competence,[58][59][60] they also develop higher-order thinking skills[61] and contribute to the development of cross-cultural attitudes, knowledge, skills, and awareness.[62] Moreover, virtual exchange activities develop digital literacies[63] as well as various multiliteracies.[49]
Recent years have also witnessed the emergence of partners using a foreign language such as English not only with native speakers, but also with other non-native speakers as alingua francain various virtual exchanges. Studies reveal that these virtual exchanges have equally produced positive results in terms of skills development.[64][65]
While integration of and research into various virtual exchange partnerships have mainly occurred at universities, what is also emerging is an exploration of virtual exchange integration into secondarylanguage education.[66]
O'Dowd and Lewis[67] report that initially, the majority of online exchanges occurred between Western classrooms based in North America and Europe, while the number of partnerships involving other continents and other languages remained small. This has changed since the development of projects such as iEARN and the International Virtual Exchange Project, which include teachers and students from non-Western countries.
A trend that can be observed is that two models have generally guided the approaches adopted in virtual exchange or telecollaborative practice in foreign language learning. The first model, known as dual language virtual exchange or e-tandem,[20][21] focuses primarily on linguistic development and generally involves two native speakers of different languages communicating with each other to practise their respective target languages.[68] These partners perform the role of peer tutors, providing feedback to each other and correcting errors in a digital environment. This model also emphasizes learner autonomy, with partners encouraged to take responsibility for creating the structure of the language exchange with minimal intervention from the teacher.[68]
The second model, generally referred to as intercultural virtual exchange / telecollaboration, emerged with the pedagogical trends of the 1990s and 2000s, which placed more emphasis on intercultural and sociocultural elements of foreign language learning. This model differs from dual language virtual exchange / e-tandem in three ways:[68]
By the end of the 2010s, virtual exchange witnessed a move towards the integration of more informal immersive online environments andWeb 2.0 technologies. These tools and environments enabled partners to conduct collaborative tasks reflecting hobbies and interests such as jointly developed music or film projects.[68]Other joint tasks involve website design and development[69][70]as well as online games and discussion forums.[71]Four major types of technologies dominating virtual exchange practice have been identified by O'Dowd and Lewis:[67]
The multitude and array of environments have thus provided greater freedom of choice for intercultural virtual exchange partners.[68]Thorne[72]argues that although these may be considered motivating environments, they involve 'intercultural communication in the wild' and are 'less controllable' as a result (p. 144).
A trend towards more structured approaches and frameworks has therefore been observed since the 2010s.[68] The outcome of the INTENT project, funded by the European Commission between 2011 and 2014, led to the creation of the UNICollaboration platform,[73] which provides the resources educators need to set up structured virtual exchange partnerships in universities. The European Telecollaboration for Intercultural Language Acquisition (TILA) project[74] is an example of a platform of resources for teachers dedicated to integrating structured virtual exchange programs into secondary education. The aim of TILA was to improve the quality of foreign language teaching and learning processes by means of meaningful telecollaboration among peers. The project was funded by the European Commission within the Lifelong Learning Programme (2013–2015) and has continued since then. Six countries were represented in the TILA consortium: France, United Kingdom, Germany, Spain, the Netherlands and the Czech Republic, and each country collaborated with a secondary school and a (teacher training) university.
TeCoLa was another project funded by the European Commission within the Erasmus+ programme; it harnessed telecollaboration technologies and gamification for intercultural and content-integrated foreign language teaching (CLIL). It addressed the emerging need in secondary foreign language education to develop intercultural communicative competence through the pedagogical integration of virtual exchanges and telecollaboration. TeCoLa deployed virtual worlds, videoconferencing tools and gamification to support virtual pedagogical exchanges between secondary school students throughout Europe. The TeCoLa tools include the TeCoLa Virtual World, BigBlueButton video rooms, online tools for communication and collaboration, and Moodle courses for pedagogical exchange management. The project paid special attention to authentic communication practice in the foreign language, intercultural experience, collaborative knowledge discovery in CLIL contexts, as well as learning diversity and pedagogical differentiation.
The project ran from 2016 to 2019 and it was coordinated byUtrecht University, the Netherlands, alongside five other project partners: LINK – Linguistik und Interkulturelle Kommunikation (Germany),University of Roehampton(United Kingdom),University of Antwerp(Belgium),University of Valencia(Spain), Transit-Lingua (the Netherlands).
It is widely recognized that teacher facilitation plays a key role in ensuring the success of virtual exchange partnerships.[75][76]Teacher-training to integrate successful virtual exchange practice into the classroom has therefore also emerged as a growing trend.[77]Some scholars have advocated for anexperiential model approachto training which involves trainee teachers in online exchanges themselves before integrating virtual exchange practice in the classroom.[78]Reports have shown that this approach has impacted positively on successful integration of virtual exchange practice.[79]
The types of tasks in virtual exchange partnerships have also become more structured over time. Research shows that the type of task chosen for the virtual exchange plays an important role in the success of learning outcomes.[49][80]In earlier telecollaborative projects, the expectation was for partners to develop linguistic and cultural competence by simply connecting with partners of their target language.[81]Exchanges were carried out with little reflection on a participant's own or the target culture.[82]An approach that has therefore been suggested to engage and structure the partnership is atask-based language learning approach[83]which focuses on meaning-oriented activities that reflect the real world.[84]
Among other developments in virtual exchange practice, cross-disciplinary telecollaborative initiatives have seen a steady growth.[68]These partnerships not only enable language skills development and enhanceintercultural competencebut they also enable different cultural perspectives on certain subject areas such as music, history, anthropology, geography education, business studies, community health nursing, and other subjects.[85][86]
The Collaborative Online International Learning (COIL) Network, created by the State University of New York (SUNY) system, is an example of a structured initiative that connects geographically distant partner classes for subject-specific collaboration through online and blended courses.[87]
Some of the benefits of virtual exchange include global competency, project-based learning, digital literacy, and intercultural collaboration. Other educators have found that COIL can be an important equity-oriented internationalization initiative, granting access to global and digital learning to students who cannot take part in physical mobility because of immigration issues or significant obligations.[88] Research has also shown that such globalized curriculums positively affect the employment status and wages of minority immigrant graduates.[89]
Online intercultural exchange is an academic field of study connected to virtual exchange. It "involves instructionally mediated processes...for social interaction between internationally distributed partner classes".[90]This activity has its roots incomputer-assisted language learning(CALL) andcomputer-mediated communication. OIE is not restricted to language learning but happens across many educational disciplines where there is a desire to increase the internationalization of teaching and learning.
Developments in communication technologies, and the relative ease with which human communication can take place internationally since the advent of the internet, have resulted in experimentation in language teaching.[91] Connecting individuals, classrooms or groups of students to work together on tasks online involves attempting to arrive at shared understanding through "negotiation of meaning".[92] There is a body of research on the failures and successes of the endeavour which has informed a guide to language teacher practice.[93] The INTENT consortium of researchers, supported by funding from the European Union, promoted awareness of telecollaborative activities in higher education and the contribution made to internationalising the student experience, publishing a report[94] and a position paper. The history of the evolution of this field was described by researcher Robert O'Dowd in his keynote to the European Computer-Assisted Language Learning Conference EUROCALL in 2015. Publications reveal learner perceptions of such activity.[95]
Virtual exchange is just one way of usingtechnology in education. However, there is some confusion around the terminology used in this field. Virtual exchange is notdistance learning, nor should it be confused withvirtual mobilitywhich is more concerned with university students accessing and obtaining credit for taking online courses at universities other than their own. Virtual exchanges are notmassive open online courses(MOOCs), because they are not massive. In virtual exchange participants interact in small groups, often using synchronous video conferencing tools.
|
https://en.wikipedia.org/wiki/Telecollaboration
|
Virtual exchange(also referred to asonline intercultural exchangeamong other names) is an instructional approach or practice for language learning. It broadly refers to the "notion of 'connecting' language learners in pedagogically structured interaction and collaboration"[1]throughcomputer-mediated communicationfor the purpose of improving their language skills, intercultural communicative competence,[2]anddigital literacies.[3]Although it proliferated with the advance of the internet andWeb 2.0technologies in the 1990s, its roots can be traced to learning networks pioneered byCélestin Freinetin 1920s[4]and, according to Dooly,[5]even earlier in Jardine's[who?]work with collaborative writing at theUniversity of Glasgowat the end of the 17th to the early 18th century.
Virtual exchange is recognized as a field ofcomputer-assisted language learningas it relates to the use of technology in language learning. Outside the field of language education, this type of pedagogic practice is being used to internationalize the curriculum and offer students the possibility to engage with peers in other parts of the world in collaborative online projects.[6]
Virtual exchange is based on sociocultural views of learning inspired byVygotskiantheories of learning as a social activity.[7]
Different names have been used to describe the practice, ranging from terms that usually describe a particular practice within the area, such as dual language virtual exchange which is sometimes calledteletandem, eTandem, and tandem language learning, to more generic terms such as globally virtual connections, online interaction and exchange, online intercultural exchange, online exchange, virtual exchange, virtual connections, globalvirtual teams, globally-networked learning environments, collaborative online international learning (COIL), Internet-mediated intercultural foreign language education. globally networked learning, telecollaboration, and telecollaboration 2.0.[8]Currently, it appears thatvirtual exchangeis the most prominent umbrella term, a term that can be used for a variety of models and practices.[9][8]
Likewise, depending on the aims and settings a variety of definitions have been applied to the practice. One of the most widely referenced definitions comes from Julie Belz,[who?]who defines it as a partnership in which "internationally-dispersed learners in parallel language classes use Internet communication tools such as e-mail, synchronous chat, threaded discussion, and MOOs (as well as other forms of electronically mediated communication), in order to support social interaction, dialogue, debate, and intercultural exchange."[10]As the practice is most common in language learning contexts, narrower definitions appeared as well, such as an "Internet-based intercultural exchange between people of different cultural/national backgrounds, set up in an institutional context with the aim of developing both language skills and intercultural communicative competence... through structured tasks."[11]
Conversely, broader definitions that go beyond educational contexts emerged as well, such as "the process of communicating and working together with other people or groups from different locations through online or digital communication tools (e.g., computers, tablets, cellphones) to co-produce a desired work output. Telecollaboration can be carried out in a variety of settings (classroom, home, workplace, laboratory) and can be synchronous or asynchronous."[5]
The origins of virtual exchange have been linked to the work of iEARN and the New York/Moscow Schools Telecommunications Project[12](NYS-MSTP) which was launched in 1988 by Peter Copen and the Copen Family Fund. This project stemmed from a perceived need to connect youth from the two countries during a time which was marked by tensions between theUnited Statesand theU.S.S.R.that had developed during theCold War. With the institutional support of the Academy of Sciences in Moscow, and the New York State Board of Education, a pilot programme between 12 schools in each nation was established. Students worked in both English and Russian on projects based on their curricula, which had been designed by participating teachers. The program expanded in the early 1990s to include China, Israel, Australia, Spain, Canada, Argentina, and the Netherlands. The early 1990s saw the establishment of the organization iEARN which became officially established in 1994. One of the earliest projects, which is still running, was Margaret Riel's Learning Circles.[13]The organization has since expanded and is currently active in over 100 countries and promotes many different projects, also in collaboration with other organizations such asThe My Hero Project. This form of education which aims to integrate awareness of international communities as part of the curriculum is sometimes referred to asglobal education.
In foreign language education the practice of virtually connecting learners is also commonly called Virtual Exchange but has sometimes been known astelecollaborationand is a sub-field ofcomputer-assisted language learning(CALL). It was first promoted as a form of network-based language learning in the 1990s through the work of educators such asMark Warschauer[14][15]and Rick Kern. One of the first uses of the word telecollaboration was in Warschauer's 1996 volume[16]that compiled works oncomputer-mediated communication(CMC) following the Symposium on Local and Global Electronic Networking in Foreign Language Learning and Research held at theUniversity of Hawaiʻiin 1995. The symposium brought together educators concerned with these issues from university and secondary education throughout the world. Telecollaborative practices at the time involved the use of e-mail and other Web 1.0 capabilities.[17]
Several different models of virtual exchange / telecollaboration have since been developed,[18]such as the Cultura model, developed in 1997 atMITin the United States,[19]and the dual language virtual exchange / eTandem model.[20][21]The Cultura project[22]was originally developed as a bilingual project for French and English, but has since been developed in several different languages.
In 2003 the organization Soliya was founded byLucas Welchand Liza Chambers in the aftermath of theSeptember 11 attacks. Soliya's Connect Program has become an important model of online facilitated dialogue and is based on principles ofintergroup dialogueandpeacebuilding. In this model of virtual exchange, students from universities across the globe are placed in diverse groups of 10–12 people, and they meet regularly for 2-hour sessions of dialogue through an over a period of 8 weeks. Each group is supported by one or two trained facilitators.
In 2004 the International Virtual Exchange Project (IVEProject) was begun by Eric Hagley. In 2015 this project was sponsored by a Japanese government kaken[23]grant which allowed it to expand greatly. The project has students studying foreign languages under the tutelage of teachers interact asynchronously and synchronously using the language they are studying. The most common exchange is done in English as the Lingua Franca. As of August 2024, over 60,000 students from 29 countries have participated.
In 2005 the European Commission established theeTwinningprogramme for schools. The programme promotes projects between schools in Europe which entail both synchronous and asynchronous collaborations between classes, offering a safe platform for staff (teachers, head teachers, librarians, etc.) working in a school in one of the European countries involved. Teachers who register in eTwinning are checked by the National Support Organisation (NSO) and are validated in order to use all eTwinning features such as TwinSpace and Project Diary, providing a safe andGDPRcompliant environment for students and teachers' interaction.
As per 2021, there are 122,134 active projects with 937,761 teachers in 217,830 schools in eTwinning countries as well as e-Twinning plus countries (Armenia,Azerbaijan,Georgia,Jordan,Lebanon,Republic of MoldovaandUkraine). A key element of eTwinning is collaboration among teachers, students, schools, parents, and local authorities. In eTwinning teachers organize activities that enable young learners to engage in communicative interaction with peers from other linguistic and cultural backgrounds in order to practise and further develop their intercultural communicative competence in their respective foreign (target) language. Students have an active role in co-creating the learning experience by interacting, investigating and making decisions whilst respecting each other thus learning 21st century skills. The use of TwinSpace facilitates a multimodal approach to collaboration which integrates tools to ensure communicative and pedagogic diversity and richness. eTwinning has established a strong community of teachers and organizes training for them.[24]
In 2006 theSUNYCenter for Collaborative Online International Learning (COIL) was established at SUNY'sPurchase College.[25]COIL developed from the work of faculty members who used technology to bring international students into their classrooms using technology. COIL's Founding Director was Jon Rubin, a film and new media professor at Purchase College. The COIL model is increasingly being recognized as a way for universities to internationalize their curricula.[26][27]In 2010 COIL joined the new SUNY Global Center in New York City and continued to expand its global network.
In 2011 the Virtual Exchange Coalition[28]was established in the united states to further the field of virtual exchange, bringing together important virtual exchange providers.[29]
The 1st International Conference on Telecollaboration in University Foreign Language Education was hold at the University of León in February, 2014.[30]It provided a broad overview of linguistic and intercultural telecollaboration and generated interest in how telecollaboration can contribute to general educational goals and digital literacies in higher education.[30]
In 2016 members of the INTENT consortium working in different disciplines in higher education around the globe launched the UNICollaboration platform at the Second Conference on Telecollaboration in Higher Education at Trinity College, Dublin. The aim was to support university educators and mobility coordinators to find partner classes, and to organise and run online intercultural exchanges for their students. This platform was one of the outputs of an EU-funded project and has over 1000 registered educators.
In 2016 theEuropean Commissionerfor Education, Culture, Youth and SportTibor Navracsicsannounced a futureErasmus+Virtual Exchange initiative. In March 2018 the Erasmus+ Virtual Exchange pilot project was officially launched by CommissionerNavracsicsand it targeted young people (aged 18–30) in EU andSouthern Mediterraneancountries. In the initial year of EVE, 7,450 participants were involved in virtual exchange[31]through different activities, each with several subprogrammes. An impact report[31]was published in 2018 evaluating the Erasmus+ Virtual Exchange project activities which ran from 1 January 2018 to 31 December 2018, and the effectiveness of the different models of Virtual Exchange in meeting the objectives set by theEuropean Commission(EC). The initiative is hosted on theEuropean Youth Portal. Different models of virtual exchange are promoted on the platform as well as training for educators to develop their own virtual exchange projects and training for young people to become Erasmus+ Virtual Exchange facilitators.
Several VE projects under Erasmus+ Key Action 3 (Support for policy reform, Priority 5, EACEA 41/2016) have since focused on telecollaboration and virtual exchange practice and research. An example is the EVOLVE project (Evidence-Validated Online Learning through Virtual Exchange), which promotes virtual exchange as an innovative form of collaborative international learning across disciplines in higher education (HE) institutions in Europe and beyond. The project investigated the impact of virtual exchange on teachers' pedagogical competences and pedagogical approach in HE from 1 January 2018 to 31 December 2020 and was coordinated by the University of Groningen, the Netherlands.[32][33][34]
In 2018, several higher education institutions active in the field of virtual exchange formed an international virtual exchange coalition that began organizing International Virtual Exchange Conferences (IVEC). The first such conference was scheduled for October 2019 in Tacoma, WA, US. This inaugural IVEC 2019 conference, entitled "Advancing the field of online international learning", was co-organized by the SUNY COIL Center, DePaul University, Drexel University, East Carolina University, University of Washington Bothell, University of Washington Tacoma, and UNICollaboration.
Guth and Helm (2010)[35] built on the pedagogy of telecollaboration by expanding its traditional practices through the incorporation of Web 2.0 tools in online collaborative projects. This enriched practice became widely known as telecollaboration 2.0.[35][36] Telecollaboration 2.0, while described as a completely new phase,[37] serves to achieve nearly the same goals as telecollaboration. A distinctive feature of telecollaboration 2.0, however, lies in its priority on promoting the development and mastery of new online literacies.[35] Although telecollaboration and telecollaboration 2.0 are used interchangeably, the latter differs slightly in affording "a complex context for language education as it involves the simultaneous use and development" of intercultural competencies,[38] internationalizing classrooms and promoting authentic intercultural communication[38] among partnering schools and students.[35]
There are several different 'models' of Virtual Exchange / telecollaboration which have been extensively described in the literature.[21] The first models to be developed were based on partnering foreign language students with "native speakers" of the target language, usually by organizing exchanges between two classes of foreign language students studying one another's languages. The most well-established models are the lingua franca virtual exchange, dual language virtual exchange / eTandem, Cultura, and eTwinning models.
Lingua franca models have developed due to the increased use of particular languages in international business, science and politics. The most common are the English as Lingua Franca Virtual Exchange (ELFVE) and the Spanish as Lingua Franca Virtual Exchange (SLFVE).[39][40] This type of virtual exchange is now widely adopted by teachers who want their students to participate in international communities but know that their students, who often lack the socio-economic opportunities to do so, have little chance of physically traveling outside their regions. Teachers create tasks and opportunities for their students to use the language they are studying to interact with and learn from peers in other countries. Though not included in the table below, the strengths of the LFVE model are that students are in language partnerships without power imbalances, are able to interact with peers from multiple different cultures, see the language they are studying from the perspective of others who are studying it, and have more opportunities to use the language they are studying in real-world situations. Some believe the weakness of this model is that students see peers who are also studying the language as not being "ideal" and that there is the possibility of learning mistaken language use.
Dual language virtual exchange (DLVE) / eTandem, which developed from the face-to-face tandem learning approach, has been widely adopted by individual learners who seek partners on the many available educational websites which offer to help find partners and suggest activities for tandem partners to engage in. However, the DLVE / eTandem model has also been used for class-to-class telecollaboration projects where teachers establish specific objectives, tasks, and topics for discussion.[21] The teletandem model[41] is based on DLVE and was developed in Brazil, but focuses on oral communication through VoIP tools such as Skype and Google Hangouts. Until recent years, however, virtual exchange had to rely on asynchronous communication tools.
The Cultura project was developed by teachers of French as a foreign language at MIT in the late 1990s with the aim of making culture the focus of their foreign language class.[19] This model takes its inspiration from the words of the Russian philosopher Mikhail Bakhtin: "It is only in the eyes of another culture that foreign culture reveals itself fully and profoundly ... A meaning only reveals its depths once it has encountered and come into contact with another foreign meaning" (as cited in Furstenberg, Levet, English, & Maillet, 2001, p. 58). Cultura is based on the notion and process of cultural comparison and entails students analysing cultural products in class with their teachers and interacting with students of the target languages and cultures, through which they develop a deeper understanding of each other's culture, attitudes, representations, values, and frames of reference.[42]
The eTwinning project, which is essentially a network of schools and educators within the European Union and part of Erasmus+, contrasts with its earlier counterparts in not setting specific guidelines apropos of language use, themes or structure.[43] This model serves as a broad platform for schools within the EU to exchange information and share materials online, and provides a virtual space for countless pedagogical opportunities where teachers and students collectively learn, communicate and collaborate using a foreign language.[43] Quintessentially, eTwinning has the following four objectives:
eTwinning has thus proven to be a strong model for telecollaboration in recent years, since it enables the authentic use of foreign language among virtual partners, i.e. teachers and students. Not surprisingly, eTwinning projects have become increasingly recognized at various educational institutions across the continent. Each of the telecollaborative models discussed above has its strengths and weaknesses:
Virtual exchange is a type of education program that uses technology to allow geographically separated people to interact and communicate. This type of activity is most often situated in educational programs (but is also found in some youth organizations) in order to increase mutual understanding, global citizenship, digital literacies, and language learning. Models of virtual exchange are also known as telecollaboration, online intercultural exchange, globally networked teaching and learning, and collaborative online international learning (COIL). Non-profit organizations such as Soliya (founded by Lucas Welch) and the Sharing Perspectives Foundation have designed and implemented virtual exchange programs in partnership with universities and youth organizations.
In 2017 the European Commission celebrated 30 years of Erasmus mobility[47] and declared Erasmus+ its most successful programme in terms of European integration and international outreach. In 2018, the Erasmus+ Virtual Exchange (EVE)[48] project was launched as a pilot within the Erasmus+ programme, with the aim of providing technology-led intercultural learning experiences for young people aged 18–30 in youth organisations and universities in Europe and Southern Mediterranean countries.
Educational institutions such as State University of New York's COIL Center and DePaul University use virtual exchange in higher education curricula to connect young people globally with a primary mission to help them grow in their understanding of each other's contexts (society, government, education, religion, environment, gender issues, etc.).
The complexities of the objectives of virtual exchange / telecollaboration ("telecollaborative tasks can and should integrate the development of language, intercultural competence, and online literacies"[49]) can generate a series of challenges for educators and learners. O'Dowd and Ritter[50] categorized potential reasons for failed communication in telecollaborative projects, sub-dividing them into four levels which, as the researchers indicate, can also overlap and interrelate:
O'Dowd and Ritter[50] focus initially on the individual level of possible obstacles to full functionality in virtual exchange projects, specifically the psychobiographical and educational backgrounds of the virtual exchange partners as potential sources for dysfunctional communications, and in particular, on the following two primary aspects:
The concept of intercultural communicative competence (ICC) was established by Byram,[51] who stated that there are five dimensions (or '5 savoirs') that make an individual interculturally competent: a combination of skills of interpreting, relating, discovery and interaction, together with attitudes, knowledge and critical awareness. Learners who embark on a virtual exchange project with underdeveloped intercultural communicative competence may struggle to carry out the tasks usefully.[citation needed]
Dissonance in terms of motivation, commitment levels and expectations is also a potential source of tension for learning partners. For example, long response times can be interpreted as a lack of interest, or short responses as unfriendliness.[52]
Solid teacher partnerships are essential to the success of virtual exchange and ideally should be constructed before the students embark on the project. According to O'Dowd and Ritter,[50] virtual exchange can be viewed as "a form of virtual team teaching which demands high levels of communication and cooperation with a partner whom they may not have met face to face". Furthermore, since virtual exchange has been devised as a vehicle both for linguistic and intercultural communication, educators as much as students must learn to be 'intercultural speakers' (Byram)[51] and avoid culturally inappropriate behaviors, typecasting, culture clashes and misunderstandings.
Teachers will be aware of the curricular needs of their own institution; however, these are unlikely to match exactly the requirements of their partner institution. The themes and sequencing of the tasks must, therefore, be the result of a compromise which satisfies the curricular needs of both sides. Reaching compromises necessarily implies that the partners are willing to invest time and energy in the demands of planning, and that they are sensitive to the needs of others.[50]
Successful pair and group formation is crucial to successful dual language virtual exchange and, to a lesser extent, lingua franca virtual exchange. Factors such as age, gender or foreign language proficiency can impact projects substantially, leading to the difficult choice between leaving pairings and groupings to chance or assigning partners according to a rationale, however challenging foreseeing compatibilities and incompatibilities might be.[50]
In virtual exchange projects, most of the attention tends to be focused on the online relationships, with the consequent risk of neglecting the local group. The local group is the context within which communication, interaction, negotiation and, thus, a large part of the learning process take place. Consequently, these relationships also require teacher guidance and monitoring.[50]
A comprehensive preparatory phase is an essential element in effective virtual exchange / telecollaborative projects. If teachers can forewarn learners of issues which may arise, they will be better equipped to deal with them and to protect the quality of the exchange. Potentially problematic areas include technical problems, a lack of information about one's partners and their environments, as well as partners' expectations not matching.[50]
Both the types of available technological tools and access to them can impact the relationship between partners. More sophisticated technological tools on one side can make less well-equipped virtual exchange partners feel at a disadvantage. Moreover, restrictions in accessibility can limit opportunities for partners to interact, with repercussions which can include the risk of giving the false impression of disinterest when a learner with limited technological access is less responsive than a partner who has unlimited access.[50]
O'Dowd and Ritter[50]include in their list of socio-institutional challenges the organization of the learners' general course of studies, and refer to Belz and Müller-Hartmann's[53]identification of four key areas which can influence the outcome of virtual exchanges / telecollaborations:
These differences can greatly affect the outcome of a project, as they can generate differing expectations regarding the volume of work, the meeting of deadlines, and so forth. O'Dowd and Ritter[50] also indicate the pairing of students whose main focus of academic interest may not be the same as a possible source of dysfunction, in addition to the impact of clashes of institutional policies and philosophies regulating all aspects of the learning and teaching processes.
In sociolinguistics, the concept of prestige refers to the regard accorded certain languages or forms of the same language, such as dialects. Since virtual exchange involves intercultural communicative competences as much as purely linguistic skills, O'Dowd and Ritter[50] remind us that virtual exchange / telecollaborative interactions can be negatively affected by prestige-based attitudes both to language and culture, which in turn can lead to the ranking of one language and culture over the other, with repercussions on the virtual exchange partnership.
For dual language virtual exchange, a related problem is that native speakers of English tend to be presented as "language experts", while non-native speakers are "learners" who need exposure to native English. This approach assumes that native English will facilitate mutual understanding in most types of communicative situations. However, the use of idiomatic British English or American English might cause comprehension problems for non-native speakers, who typically use English as a lingua franca (ELF), a different function of English.
While non-native speakers will try to accommodate native speakers, native speakers should also try to understand how ELF works and accommodate non-native speakers.[54] Native speakers might be experts in idiomatic English (American, British, Australian, etc.), but non-native speakers are also experts when it comes to using ELF. The two groups of speakers can certainly learn from one another.
When two groups of students participate in dual language virtual exchange projects, it is wise to avoid positioning native speakers as authoritative language experts whose main role is to coach or tutor non-native speakers.
At this level, cultural differences relative to communicative behaviors, such as attitudes to small talk, can cause misunderstanding and impact virtual exchanges. According to O'Dowd and Ritter,[50] these interactional divergences can occur within the following communicative domains:
Virtual exchange has evolved and become more diversified, reflecting not only emerging pedagogies and technologies over time but also the changing globalized world. It is becoming recognized as a sustainable approach[56] to global citizenship education and a form of 'internationalization at home'.[57]
A considerable amount of research points to the benefits of virtual exchange or telecollaboration partnering. Not only do these partnerships improve linguistic competence,[58][59][60] they also develop higher-order thinking skills[61] and contribute to the development of cross-cultural attitudes, knowledge, skills, and awareness.[62] Moreover, virtual exchange activities develop digital literacies[63] as well as various multiliteracies.[49]
Recent years have also witnessed the emergence of partners using a foreign language such as English not only with native speakers, but also with other non-native speakers as a lingua franca in various virtual exchanges. Studies reveal that these virtual exchanges have equally produced positive results in terms of skills development.[64][65]
While integration of and research into various virtual exchange partnerships have mainly occurred at universities, what is also emerging is an exploration of virtual exchange integration into secondary language education.[66]
O'Dowd and Lewis[67] report that initially the majority of online exchanges occurred between Western classrooms based in North America and Europe, while the number of partnerships involving other continents and other languages remained small. This has changed with the development of projects such as iEARN and the International Virtual Exchange Project, which include teachers and students from non-Western countries.
A trend that can be observed is that two models have generally guided the approaches adopted in virtual exchange or telecollaborative practice in foreign language learning. The first model, known as dual language virtual exchange or e-tandem,[20][21] focuses primarily on linguistic development and generally involves two native speakers of different languages communicating with each other to practise their respective target languages.[68] These partners perform the role of peer-tutors, providing feedback to each other and correcting errors in a digital environment. This model also emphasizes learner autonomy, where partners are encouraged to take responsibility for creating the structure of the language exchange with minimal intervention from the teacher.[68]
The second model, generally referred to as intercultural virtual exchange / telecollaboration, emerged with the pedagogical trends of the 1990s and 2000s, which placed more emphasis on intercultural and sociocultural elements of foreign language learning. This model differs from dual language virtual exchange / e-tandem in three ways:[68]
By the end of the 2010s, virtual exchange witnessed a move towards the integration of more informal immersive online environments and Web 2.0 technologies. These tools and environments enabled partners to conduct collaborative tasks reflecting hobbies and interests, such as jointly developed music or film projects.[68] Other joint tasks involve website design and development[69][70] as well as online games and discussion forums.[71] Four major types of technologies dominating virtual exchange practice have been identified by O'Dowd and Lewis:[67]
The multitude and array of environments have thus provided greater freedom of choice for intercultural virtual exchange partners.[68] Thorne[72] argues that although these may be considered motivating environments, they involve 'intercultural communication in the wild' and are 'less controllable' as a result (p. 144).
The introduction of more structured approaches and frameworks has therefore been a trend since the 2010s.[68] The outcome of the INTENT project by the European Commission between 2011 and 2014 led to the creation of the UNICollaboration platform,[73] which provides the resources educators need to set up structured virtual exchange partnerships in universities. The European Telecollaboration for Intercultural Language Acquisition (TILA)[74] is an example of a platform of resources for teachers dedicated to integrating structured virtual exchange programs into secondary education. The aim of TILA was to improve the quality of foreign language teaching and learning processes by means of meaningful telecollaboration among peers. The TILA project was funded by the European Commission within the Lifelong Learning Programme (2013–2015) and has continued since then. Six countries were represented in the TILA consortium: France, the United Kingdom, Germany, Spain, the Netherlands and the Czech Republic, and in each country a secondary school collaborated with a (teacher training) university.
TeCoLa was also a project funded by the European Commission within the Erasmus+ programme that harnessed telecollaboration technologies and gamification for intercultural and content-integrated foreign language teaching (CLIL). It addressed the emerging need in secondary foreign language education for developing intercultural communicative competence through the pedagogical integration of virtual exchanges and telecollaboration. TeCoLa deployed virtual worlds, videoconferencing tools and gamification to support virtual pedagogical exchanges between secondary school students throughout Europe. The TeCoLa tools include the TeCoLa Virtual World, BigBlueButton video rooms, online tools for communication and collaboration, and Moodle courses for pedagogical exchange management. The project paid special attention to authentic communication practice in the foreign language, intercultural experience, collaborative knowledge discovery in CLIL contexts, as well as learning diversity and pedagogical differentiation.
The project ran from 2016 to 2019 and was coordinated by Utrecht University, the Netherlands, alongside five other project partners: LINK – Linguistik und Interkulturelle Kommunikation (Germany), University of Roehampton (United Kingdom), University of Antwerp (Belgium), University of Valencia (Spain), and Transit-Lingua (the Netherlands).
It is widely recognized that teacher facilitation plays a key role in ensuring the success of virtual exchange partnerships.[75][76] Teacher training to integrate successful virtual exchange practice into the classroom has therefore also emerged as a growing trend.[77] Some scholars have advocated for an experiential model approach to training, which involves trainee teachers in online exchanges themselves before they integrate virtual exchange practice into the classroom.[78] Reports have shown that this approach has had a positive impact on the successful integration of virtual exchange practice.[79]
The types of tasks in virtual exchange partnerships have also become more structured over time. Research shows that the type of task chosen for the virtual exchange plays an important role in the success of learning outcomes.[49][80] In earlier telecollaborative projects, the expectation was for partners to develop linguistic and cultural competence by simply connecting with partners of their target language.[81] Exchanges were carried out with little reflection on a participant's own or the target culture.[82] An approach that has therefore been suggested to engage and structure the partnership is a task-based language learning approach,[83] which focuses on meaning-oriented activities that reflect the real world.[84]
Among other developments in virtual exchange practice, cross-disciplinary telecollaborative initiatives have seen a steady growth.[68] These partnerships not only enable language skills development and enhance intercultural competence but they also enable different cultural perspectives on certain subject areas such as music, history, anthropology, geography education, business studies, community health nursing, and other subjects.[85][86]
The Collaborative Online International Learning (COIL) Network, created by the State University of New York (SUNY) system, is an example of a structured initiative that connects geographically distant partner classes for subject-specific collaboration through online and blended courses.[87]
Some of the benefits of virtual exchange include global competency, project-based learning, digital literacy, and intercultural collaboration. Other educators have found that COIL can be an important equity-oriented internationalization initiative, granting access to global and digital learning to students who may not be able to undertake physical mobility because of obstacles such as immigration issues or significant obligations.[88] Research has also shown that such globalized curriculums positively affect the employment status and wages of minority immigrant graduates.[89]
Online intercultural exchange is an academic field of study connected to virtual exchange. It "involves instructionally mediated processes...for social interaction between internationally distributed partner classes".[90] This activity has its roots in computer-assisted language learning (CALL) and computer-mediated communication. OIE is not restricted to language learning but happens across many educational disciplines where there is a desire to increase the internationalization of teaching and learning.
Developments in communication technologies, and the relative ease with which forms of human communication can be supported internationally since the advent of the internet, have led to experimentation in language teaching.[91] Connecting individuals, classrooms or groups of students to work together on tasks online involves attempting to arrive at shared understanding through "negotiation of meaning".[92] There is a body of research on the failures and successes of the endeavour which has informed guidance for language teacher practice.[93] The INTENT consortium of researchers, supported by funding from the European Union, promoted awareness of telecollaborative activities in higher education and their contribution to internationalising the student experience, publishing a report[94] and a position paper. The history of the evolution of this field was described by researcher Robert O'Dowd in his keynote to the European Computer-Assisted Language Learning Conference EUROCALL in 2015. Publications reveal learner perceptions of such activity.[95]
Virtual exchange is just one way of using technology in education. However, there is some confusion around the terminology used in this field. Virtual exchange is not distance learning, nor should it be confused with virtual mobility, which is more concerned with university students accessing and obtaining credit for taking online courses at universities other than their own. Virtual exchanges are not massive open online courses (MOOCs), because they are not massive. In virtual exchange, participants interact in small groups, often using synchronous video conferencing tools.
|
https://en.wikipedia.org/wiki/Virtual_exchange
|
Virtual worlds are playing an increasingly important role in education, especially in language learning. By March 2007 it was estimated that over 200 universities or academic institutions were involved in Second Life (Cooke-Plagwitz, p. 548).[1] Joe Miller, Linden Lab Vice President of Platform and Technology Development, claimed in 2009 that "Language learning is the most common education-based activity in Second Life".[2] Many mainstream language institutes and private language schools are now using 3D virtual environments to support language learning.
Virtual worlds date back to the adventure games and simulations of the 1970s, for example Colossal Cave Adventure, a text-only simulation in which the user communicated with the computer by typing commands at the keyboard. These early adventure games and simulations led to MUDs (multi-user domains) and MOOs (multi-user domains, object-oriented), which language teachers were able to exploit for teaching foreign languages and intercultural understanding (Shield 2003).[3]
Three-dimensional virtual worlds such as Traveler and Active Worlds, both of which appeared in the 1990s, were the next important development. Traveler included the possibility of audio communication (but not text chat) between avatars represented as disembodied heads in a three-dimensional abstract landscape. Svensson (2003) describes the Virtual Wedding Project, in which advanced students of English made use of Active Worlds as an arena for constructivist learning.[4] The Adobe Atmosphere software platform was also used to promote language learning in the Babel-M project (Williams & Weetman 2003).[5]
The 3D world of Second Life was launched in 2003. Initially perceived as another role-playing game (RPG), it began to attract the attention of language teachers. 2005 saw the first large-scale language school, Languagelab.com, open its doors in Second Life. By 2007, Languagelab.com's custom VoIP (audio communication) solution was integrated with Second Life. Prior to that, teachers and students used separate applications for voice chat.[6]
Many universities, such as Monash University,[7] and language institutes, such as The British Council, the Confucius Institute, the Instituto Cervantes and the Goethe-Institut,[8] have islands in Second Life specifically for language learning. Many professional and research organisations support virtual world language learning through their activities in Second Life. EUROCALL and CALICO, two leading professional associations that promote language learning with the aid of new technologies, maintain a joint Virtual Worlds Special Interest Group (VW SIG) and a headquarters in Second Life.[9]
Recent examples of sims created in virtual worlds specifically for language education include VIRTLANTIS, which has been a free resource for language learners and teachers and an active community of practice since 2006,[10] the EU-funded NIFLAR project,[11] the EU-funded AVALON project,[12] and the EduNation Islands, which have been set up as a community of educators aiming to provide information about and facilities for language learning and teaching.[13] NIFLAR is implemented both in Second Life and in OpenSim.[14] Numerous other examples are described by Molka-Danielsen & Deutschmann (2009)[15] and Walker, Davies & Hewer (2012).[16]
Since 2007 a series of conferences known as SLanguages have taken place, bringing together practitioners and researchers in the field of language education in Second Life for a 24-hour event to celebrate languages and cultures within the 3D virtual world.[17]
With the decline of Second Life due to increasing support for open source platforms,[18] many independent language learning grids such as English Grid[19] and Chatterdale[20] have emerged.
Almost all virtual world educational projects envisage a blended learning approach whereby the language learners are exposed to a 3D virtual environment for a specific activity or time period. Such approaches may combine the use of virtual worlds with other online and offline tools, such as 2D virtual learning environments (e.g. Moodle) or physical classrooms. SLOODLE, for example, is an open-source project which integrates the multi-user virtual environments of Second Life and/or OpenSim with the Moodle learning management system.[21] Some language schools offer a complete language learning environment through a virtual world, e.g. Languagelab.com and Avatar Languages.
Virtual worlds such as Second Life are used for the immersive,[22] collaborative[23] and task-based, game-like[24] opportunities they offer language learners. As such, virtual world language learning can be considered to offer distinct (although combinable) learning experiences.
The "Six learnings framework" is a pedagogical outline developed for virtual world education in general. It sets out six possible ways to view an educational activity.[28]
3D virtual worlds are often used for constructivist learning because of the opportunities for learners to explore, collaborate and be immersed within an environment of their choice. Some virtual worlds allow users to build objects and to change the appearance of their avatar and of their surroundings.[31] Constructivist approaches such as task-based language learning and Dogme are applied to virtual world language learning because of the scope for learners to socially co-construct knowledge, in spheres of particular relevance to the learner.
Task-based language learning (TBLL) has been commonly applied to virtual world language education. Task-based language learning focuses on the use of authentic language and encourages students to do real-life tasks using the language being learned.[32] Tasks can be highly transactional, where the student is carrying out everyday tasks such as visiting the doctor at the Chinese Island of Monash University in Second Life. Incidental knowledge about the medical system in China and cultural information can also be gained at the same time.[33]
Other tasks may focus on more interactional language, such as those that involve more social activities or interviews within a virtual world.
Dogme language teaching is an approach that is essentially communicative, focusing mainly on conversation between learners and teacher rather than conventional textbooks. Although Dogme is perceived by some teachers as being anti-technology, it nevertheless appears to be particularly relevant to virtual world language learning because of the social, immersive and creative experiences offered by virtual worlds and the opportunities they offer for authentic communication and a learner-centred approach.[34]
Virtual world WebQuests (also referred to as SurReal Quests[35]) combine the concept of 2D WebQuests with the immersive and social experiences of 3D virtual worlds. Learners develop texts, audios or podcasts based on their research, part of which is within a virtual world.
The concept of real-life language villages has been replicated within virtual worlds to create a language immersion environment for language learners in their own country.[36] The Dutch Digitale School has built two virtual language villages, Chatterdale (English) and Parolay (French), for secondary education students on the OpenSim grid.[37]
Hundsberger (2009, p. 18)[38] defines a virtual classroom thus:
"A virtual classroom in SL sets itself apart from other virtual classrooms in that an ordinary classroom is the place to learn a language whereas the SL virtual classroom is the place to practise a language. The connection to the outside world from a language lab is a 2D connection, but increasingly people enjoy rich and dynamic 3D environments such as SL as can be concluded from the high number of UK universities active in SL."
To what extent a virtual classroom should offer only language practice rather than teaching a language as in a real-life classroom is a matter for debate. Hundsberger's view (p. 18) is that "[...] SL classrooms are not viewed as a replacement for real life classrooms. SL classrooms are an additional tool to be used by the teacher/learner."
Language learning can take place in public spaces within virtual worlds. This offers greater flexibility with locations and students can choose the locations themselves, which enables a more constructivist approach.
The wide variety of replica places in Second Life, e.g. Barcelona, Berlin, London and Paris, offers opportunities for language learning through virtual tourism. Students can engage in conversation with native speakers who people these places, take part in conducted tours in different languages and even learn how to use Second Life in a language other than English.
The Hypergrid Adventurers Club is an open group of explorers who discuss and visit many different OpenSim virtual worlds. By using hypergrid connectivity, avatars can jump between completely different OpenSim grids while maintaining a singular identity and inventory.[39]
The TAFE NSW-Western Institute Virtual Tourism Project commenced in 2010 and was funded by the Australian Flexible Learning Framework's eLearning Innovations Project. It is focused on developing virtual worlds learning experiences for TVET Tourism students and located on the joycadiaGrid.[40]
Virtual worlds offer exceptional opportunities for autonomous learning. The video Language learning in Second Life: an Introduction by Helen Myers (Karelia Kondor in SL) is a good illustration of an adult learner's experiences of her introduction to SL and in learning Italian.[41]
Tandem learning, or buddy learning, takes autonomous learning one step further. This form of learning involves two people with different native languages working together as a pair in order to help one another to improve their language skills.[42] Each partner helps the other through explanations in the foreign language. As this form of learning is based on communication between members of different language communities and cultures, it also facilitates intercultural learning. A tandem learning group, Teach You Teach Me (Language Buddies), can be found in Second Life. A minimal sketch of the pairing principle appears below.
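To make the tandem principle concrete, the following Python sketch pairs learners whose native and target languages are complementary, i.e. each partner's native language is the other's target language. It is a hypothetical illustration only; the learner records and field names are invented for the example and do not describe Teach You Teach Me or any real matching service.

```python
# Pair learners so that each partner's native language is the
# other's target language (the basic tandem principle). Illustrative only.
def match_tandem_partners(learners):
    unmatched = list(learners)
    pairs = []
    while unmatched:
        person = unmatched.pop(0)
        for other in unmatched:
            if (person["native"] == other["target"]
                    and person["target"] == other["native"]):
                pairs.append((person["name"], other["name"]))
                unmatched.remove(other)
                break
    return pairs

learners = [
    {"name": "Anna",  "native": "German",   "target": "Italian"},
    {"name": "Marco", "native": "Italian",  "target": "German"},
    {"name": "Yuki",  "native": "Japanese", "target": "English"},
    {"name": "Tom",   "native": "English",  "target": "Japanese"},
]

print(match_tandem_partners(learners))
# [('Anna', 'Marco'), ('Yuki', 'Tom')]
```

Learners without a complementary partner are simply left unpaired in this sketch; a real service would of course apply additional criteria such as proficiency level, interests and availability.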
The term holodeck derives from the Star Trek TV series and feature films, in which a holodeck is depicted as an enclosed room in which simulations can be created for training or entertainment. Holodecks offer exciting possibilities for calling up a range of instantly available simulations that can be used for entertainment, presentations, conferencing and, of course, teaching and learning. For example, if students of hospitality studies are being introduced to the language used in checking in at a hotel, a simulation of a hotel reception area can be generated instantly by selecting the chosen simulation from a holodeck "rezzer", a device that stores and generates different scenarios. Holodecks can also be used to encourage students to describe a scene or even to build a scene.[43] Holodecks are commonly used for a range of role-plays.[44]
A cave automatic virtual environment (CAVE) is an immersive virtual reality (VR) environment where projectors are directed at three, four, five or six of the walls of a room-sized cube. The CAVE is a large theatre that sits in a larger room. The walls of the CAVE are made up of rear-projection screens, and the floor is a down-projection screen. High-resolution projectors display images on each of the screens by projecting the images onto mirrors which reflect them onto the projection screens. The user goes inside the CAVE wearing special glasses that allow the 3D graphics generated by the CAVE to be seen. With these glasses, people using the CAVE can see objects apparently floating in the air and can walk around them, getting a realistic view of what the object would look like as they move around it.
O'Brien, Levy & Orich (2009) describe the viability of CAVE and PC technology as environments for assisting students to learn a foreign language and to experience the target culture in ways that are impossible through the use of other technologies.[45]
The immersion provided by virtual worlds is augmented with artificial intelligence capabilities for language learning. Learners can interact with agents in the scene using speech and gestures. Dialogue interactions with automatic interlocutors give a language learner access to authentic and immersive conversations for role-play and learning via task-based language learning in a new immersive classroom that uses AI and VR.[46][47]
Earlier virtual worlds, with the exception of Traveler (1996), offered only text chat. Voice chat was a later addition.[48] Second Life did not introduce voice capabilities until 2007. Prior to this, independent VoIP systems, e.g. Ventrilo, were used. Second Life's current internal voice system has the added ability to reproduce the effect of distance on voice loudness, so that there is an auditory sense of space amongst users.[6]
Other virtual worlds, such as Twinity, also offer internal voice systems. Browser-based 3D virtual environments tend to offer only text-chat communication, although voice chat seems likely to become more widespread.[49] Vivox[50] is one of the leading integrated voice platforms for the social web, providing a Voice Toolbar for developers of virtual worlds and multiplayer games. Vivox is now spreading into OpenSim at an impressive rate; for example, Avination is offering in-world Vivox voice at no charge to its residents and region renters, as well as to customers who host private grids with the company.[51] English Grid began offering language learning and voice chat for language learners using Vivox in May 2012.[52]
The advent of voice chat in Second Life in 2007 was a major breakthrough. Communicating with one's voice is the sine qua non of language learning and teaching, but voice chat is not without its problems. Many Second Life users report difficulties with voice chat, e.g. the sound being too soft, too loud or non-existent, or continually breaking up. This may be due to glitches in the Second Life software itself, but it is often due to individual users' poor understanding of how to set up audio on their computers and/or inadequate bandwidth. A separate voice chat channel outside Second Life, e.g. Skype, may in such cases offer a solution.
Owning or renting land in a virtual world is necessary for educators who wish to create learning environments for their students. Educators can then use the land to create permanent structures or temporary structures embedded within holodecks, for example the EduNation Islands in Second Life.[13] The land can also be used for students undertaking building activities. Students may also use public sandboxes, but they may prefer to exhibit their creations more permanently on owned or rented land.
Some language teaching projects, for example NIFLAR, may be implemented both in Second Life and in OpenSim.[14]
The Immersive Education Initiative announced (October 2010) that it would provide free permanent virtual world land in OpenSim for one year to every school and non-profit organization that has at least one teacher, administrator, or student in attendance at any Immersive Education Initiative Summit.[53]
Many islands in Second Life have language- or culture-specific communities that offer language learners easy ways to practise a foreign language.[54] Second Life is the most widely used 3D world among members of the language teaching community, but there are many alternatives. General-purpose virtual environments such as Hangout and browser-based 3D environments such as ExitReality and 3DXplorer offer 3D spaces for social learning, which may also include language learning. Google Street View and Google Earth[55] also have a role to play in language learning and teaching.
Twinity replicates the real-life cities of Berlin, Singapore, London and Miami, and offers language learners virtual locations where specific languages are spoken. Zon has been created specifically for learners of Chinese.[56] English Grid[57] has been developed by education and training professionals as a research platform for delivering English language instruction using OpenSim.
OpenSim is employed as free, open-source standalone software, thus enabling a decentralized configuration for all educators, trainers, and users. Scott Provost, Director at the Free Open University, Washington DC, writes: "The advantage of Standalone is that Asset server and Inventory server are local on the same server and well connected to your sim. With Grids that is never the case. With Grids/Clouds that is never the case. On OSGrid with 5,000 regions and hundreds of users scalability problems are unavoidable. We plan on proposing 130,000 Standalone mega regions (in US schools) with Extended UPnP Hypergrid services. The extended services would include a suitcase or limited assets that would be live on the client".[58] Such a standalone sim offers 180,000 prims for building, and can be distributed pre-configured together with a virtual world viewer using a USB storage stick or SD card. Pre-configured female and male avatars can also be stored on the stick, or even full-sim builds can be downloaded for targeted audiences without virtual world experience. This is favorable for introductory users who want a sandbox on demand and have no idea how to get started.
There is no shortage of choices of virtual world platforms. The following lists describe a variety of different virtual world platforms, their features and their target audiences:
Virtual World Language Learning is a rapidly expanding field and it converges with other closely related areas, such as the use of MMOGs, SIEs and Augmented Reality Language Learning (ARLL).
MMOGs (massively multiplayer online games) are also used to support language learning, for example the World of Warcraft in School project.[68]
SIEs are engineered 3D virtual spaces that integrate online gaming aspects. They are specifically designed for educational purposes and offer learners a collaborative and constructionist environment. They also allow the creators/designers to focus on specific skills and pedagogical objectives.[69]
Augmented reality (AR) is the combination of real-world and computer-generated data so that computer-generated objects are blended into a real-time projection of real-life activities. Mobile AR applications enable immersive and information-rich experiences in the real world and are therefore blurring the differences between real life and virtual worlds. This has important implications for mobile-assisted language learning (MALL), but hard evidence on how AR is used in language learning and teaching is difficult to come by.[70]
The main aim is to promote social integration among users located in the same physical space, so that multiple users may access a shared space populated by virtual objects while remaining grounded in the real world. In other words, it means:
|
https://en.wikipedia.org/wiki/Virtual_world_language_learning
|
Social media language learning is a method of language acquisition that uses socially constructed Web 2.0 platforms such as wikis, blogs, and social networks to facilitate learning of the target language. Social media is used by language educators and individual learners that wish to communicate in the target language in a natural environment that allows multimodal communication, ease of sharing, and possibilities for feedback from peers and educators.
Proponents of social media language learning are likely to support the theory of language socialization developed by linguistic anthropologists Elinor Ochs and Bambi Schieffelin,[1] which claims that language learning is interwoven with cultural interaction and is mediated by linguistic and other symbolic activity.[1] Social media provides an environment that allows users to weave their goal of language acquisition with culturally relevant interactions through a wide array of available platforms that are often categorized as formal for classroom use and informal for personal use.
Educators can integrate social media tools into their existing pedagogy. Online environments used by education professionals include course management systems, wikis, blogs, virtual worlds, and more.[2] Pedagogical use of social media in a language classroom can take a myriad of forms, such as classroom blogs to discuss culturally relevant topics in the target language, social media apps with specialized platforms for classroom use, learning environments developed specifically for schools, and much more.
For Indigenous languages that are vulnerable and critically endangered, social media, among other digital technologies, can offer access to supportive communities, experienced educators, and other learning opportunities.[3] Through pedagogical use of social media, educators and learners are given the opportunity to engage with Indigenous communities around the world, access a myriad of resources not previously available, and engage in their education in a new medium.[4] Social media language learning is especially pertinent for the Indigenous language classroom because being surrounded by and engaging in meaningful conversations aligns with Indigenous cultural values of community.[5] However, classroom-based Indigenous language revitalization efforts have been criticized for failing to promote use and transmission of the language outside of an educational context.[5]
Social media is employed by language learners outside of traditional learning environments. Informal language learning through social media can occur through personal social network use, language learning apps with a social component, online gaming, fan communities, and more.
There are a myriad of language learning websites and apps that rely on social interaction between learners or have a social component. These resources have varying levels of social media elements. One example of an app with a low social element is the popular language learning app Duolingo, which allows users to share their progress and scores with other language learners within a largely independent learning platform.
There are also apps that have a social media foundation. The app Tandem, which is designed specifically for language exchange with other learners, is an example of an app with a more demanding social aspect.[6] Tandem and other similar apps allow users to work on their skills in the target language with other language learners and teachers through consistent communication via written messaging and audio phone calls.[6] In this way, these social language learning apps can facilitate language learning through real conversations with other community members.
Common social media platforms such as Facebook and Twitter are used by language learners to communicate with other learners and native speakers of the target language.[7][8] Many social networks include the ability to join virtual groups that either have other language learners as group members or group members with an external shared interest that they communicate about in the target language.
YouTube is another social network that is commonly used by language educators and learners. There are many popular language education channels on YouTube with large numbers of followers who use the video-based platform to learn and interact with other users.[9] Language learners also use the platform to demonstrate their progress with a language, such as YouTuber Evan Edinger, who posted popular videos showcasing his knowledge of German as a foreign language.
Stan Twitter is a social network community within Twitter in which the target language is used but is not the focus of the group. Research done on language acquisition in the Stan Twitter community found that language learning and meme discourse learning happened naturally while members engaged in the community's activities and interacted with one another.[10] Some members of online communities such as Stan Twitter are not interested in language learning but acquire the language regardless because of the requirement to use a specific language for communication with group members.
An MMO (massively multiplayer online) game can be used to facilitate language acquisition via its built-in chat functions, which enable participants to chat with players who speak different languages. By participating in an interactive gaming experience, players have the opportunity to engage in the target language and gain an understanding of conversational norms and grammar constructions.[11] However, language use in video games is highly contextual, and many video games use repetitive language that can limit a more holistic understanding of the target language.[12] Games transform the learning process from a passive task to one in which individuals engage actively in the experience of learning by focusing first on meaning. Computer games, researchers argue, supply authentic environments for language learning, complete with ample opportunities for students to develop and test their emerging target language knowledge.[11]
|
https://en.wikipedia.org/wiki/Social_Media_Language_Learning
|
Computer-assisted language learning (CALL), also known as computer-aided instruction (CAI) in British English and computer-aided language instruction (CALI) in American English,[1] is briefly defined by Levy (1997: p. 1) as "the exploration and study of computer applications in language teaching and learning."[2] CALL embraces a wide range of information and communications technology applications and approaches to teaching and learning foreign languages, ranging from the traditional drill-and-practice programs that characterized CALL in the 1960s and 1970s to more recent manifestations of CALL, such as those used in virtual learning environments and Web-based distance learning. It also extends to the use of corpora and concordancers, interactive whiteboards,[3] computer-mediated communication (CMC),[4] language learning in virtual worlds, and mobile-assisted language learning (MALL).[5]
The term CALI (computer-assisted language instruction) was used before CALL, originating as a subset of the broader term CAI (computer-assisted instruction). CALI fell out of favor among language teachers, however, because it seemed to emphasize a teacher-centered instructional approach. Language teachers increasingly favored a student-centered approach focused on learning rather than instruction. CALL began to replace CALI in the early 1980s (Davies & Higgins, 1982: p. 3)[6] and it is now incorporated into the names of a growing number of professional associations worldwide.
An alternative term, technology-enhanced language learning (TELL),[7]also emerged around the early 1990s: e.g. the TELL Consortium project, University of Hull.
The current philosophy of CALL emphasizes student-centered materials that empower learners to work independently. These materials can be structured or unstructured but typically incorporate two key features: interactive and individualized learning. CALL employs tools that assist teachers in facilitating language learning, whether reinforcing classroom lessons or providing additional support to learners. The design of CALL materials typically integrates principles from language pedagogy and methodology, drawing from various learning theories such as behaviourism, cognitive theory, constructivism, and second-language acquisition theories like Stephen Krashen's monitor hypothesis.
A combination of face-to-face teaching and CALL is usually referred to as blended learning. Blended learning is designed to increase learning potential and is more commonly found than pure CALL (Pegrum 2009: p. 27).[8]
See Davies et al. (2011: Section 1.1, What is CALL?).[9] See also Levy & Hubbard (2005), who raise the question Why call CALL "CALL"?[10]
CALL dates back to the 1960s, when it was first introduced on university mainframe computers. The PLATO project, initiated at the University of Illinois in 1960, is an important landmark in the early development of CALL (Marty 1981).[11]The advent of the microcomputer in the late 1970s brought computing within the range of a wider audience, resulting in a boom in the development of CALL programs and a flurry of publications of books on CALL in the early 1980s.
Dozens of CALL programs are currently available on the internet, at prices ranging from free to expensive,[12]and other programs are available only through university language courses.
There have been several attempts to document the history of CALL. Sanders (1995) covers the period from the mid-1960s to the mid-1990s, focusing on CALL in North America.[13] Delcloque (2000) documents the history of CALL worldwide, from its beginnings in the 1960s to the dawning of the new millennium.[14] Davies (2005) takes a look back at CALL's past and attempts to predict where it is going.[15] Hubbard (2009) offers a compilation of 74 key articles and book excerpts, originally published in the years 1988–2007, that give a comprehensive overview of the wide range of leading ideas and research results that have exerted an influence on the development of CALL or that show promise in doing so in the future.[16] A published review of Hubbard's collection can be found in Language Learning & Technology 14, 3 (2010).[17]
Butler-Pascoe (2011) looks at the history of CALL from a different point of view, namely the evolution of CALL in the dual fields of educational technology and second/foreign language acquisition and the paradigm shifts experienced along the way.[18]
See also Davies et al. (2011: Section 2, History of CALL).[9]
During the 1980s and 1990s, several attempts were made to establish a CALL typology. A wide range of different types of CALL programs was identified by Davies & Higgins (1985),[19] Jones & Fortescue (1987),[20] Hardisty & Windeatt (1989)[21] and Levy (1997: pp. 118ff.).[2] These included gap-filling and Cloze programs, multiple-choice programs, free-format (text-entry) programs, adventures and simulations, action mazes, sentence-reordering programs, exploratory programs, and "total Cloze", a type of program in which the learner has to reconstruct a whole text. Most of these early programs still exist in modernised versions.
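As a rough illustration of what the simplest of these program types do, the following Python sketch generates a gap-fill (Cloze) exercise from a short text and checks the learner's answers. It is a minimal, hypothetical example: the sample sentence and the every-nth-word deletion rule are arbitrary choices made for illustration, not features of any particular historical CALL package.

```python
def make_cloze(text, gap_every=5):
    """Blank out every nth word; return the gapped text and the answers."""
    words = text.split()
    answers = {}
    for i in range(gap_every - 1, len(words), gap_every):
        answers[i] = words[i]
        words[i] = "____"
    return " ".join(words), answers

def run_exercise(text):
    gapped, answers = make_cloze(text)
    print(gapped)
    score = 0
    for position, correct in answers.items():
        attempt = input(f"Word {position + 1}: ").strip()
        if attempt.lower() == correct.lower():
            print("Correct!")
            score += 1
        else:
            # Simple corrective feedback, as in early gap-filling programs
            print(f"Not quite - the expected word was '{correct}'.")
    print(f"Score: {score}/{len(answers)}")

if __name__ == "__main__":
    sample = ("The quick brown fox jumps over the lazy dog "
              "while the curious cat watches from the old fence")
    run_exercise(sample)
```

A "total Cloze" program differs only in degree: every word is blanked and the learner reconstructs the entire text, usually with hints such as word length or initial letters.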
Since the 1990s, it has become increasingly difficult to categorise CALL, as it now extends to the use of blogs, wikis, social networking, podcasting, Web 2.0 applications, language learning in virtual worlds and interactive whiteboards (Davies et al. 2010: Section 3.7).[9]
Warschauer (1996)[22]and Warschauer & Healey (1998)[23]took a different approach. Rather than focusing on the typology of CALL, they identified three historical phases of CALL, classified according to their underlying pedagogical and methodological approaches: behavioristic CALL, communicative CALL and integrative CALL.
Most CALL programs in Warschauer & Healey's first phase, Behavioristic CALL (1960s to 1970s), consisted of drill-and-practice materials in which the computer presented a stimulus and the learner provided a response. At first, both could be done only through text. The computer would analyse students' input and give feedback, and more sophisticated programs would react to students' mistakes by branching to help screens and remedial activities. While such programs and their underlying pedagogy still exist today, behaviouristic approaches to language learning have been rejected by most language teachers, and the increasing sophistication of computer technology has led CALL to other possibilities.
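The stimulus-response-feedback cycle of such drill-and-practice programs is simple to model. The sketch below is a hypothetical illustration (the vocabulary items and hints are invented): it presents a stimulus, checks the learner's typed response, and branches to a remedial hint when the answer is wrong, mirroring the branching behaviour described above.

```python
# A minimal stimulus-response drill with remedial branching (illustrative sketch).
DRILL = [
    {"stimulus": "Translate into French: 'the house'", "answer": "la maison",
     "hint": "Remember that 'maison' is feminine, so it takes 'la'."},
    {"stimulus": "Translate into French: 'the dog'", "answer": "le chien",
     "hint": "'Chien' is masculine, so it takes 'le'."},
]

def run_drill(items):
    for item in items:
        while True:
            response = input(item["stimulus"] + " > ").strip().lower()
            if response == item["answer"]:
                print("Correct!")
                break
            # Wrong answer: branch to a remedial hint and present the stimulus again.
            print("Not quite. " + item["hint"])

if __name__ == "__main__":
    run_drill(DRILL)
```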
The second phase described by Warschauer & Healey, Communicative CALL, is based on the communicative approach that became prominent in the late 1970s and 1980s (Underwood 1984).[24]In the communicative approach the focus is on using the language rather than analysis of the language, and grammar is taught implicitly rather than explicitly. It also allows for originality and flexibility in student output of language. The communicative approach coincided with the arrival of the PC, which made computing much more widely available and resulted in a boom in the development of software for language learning. The first CALL software in this phase continued to provide skill practice but not in a drill format—for example: paced reading, text reconstruction and language games—but the computer remained the tutor. In this phase, computers provided context for students to use the language, such as asking for directions to a place, and programs not designed for language learning such as Sim City, Sleuth and Where in the World is Carmen Sandiego? were used for language learning. Criticisms of this approach include using the computer in an ad hoc and disconnected manner for more marginal aims rather than the central aims of language teaching.
The third phase of CALL described by Warschauer & Healey, Integrative CALL, starting from the 1990s, tried to address criticisms of the communicative approach by integrating the teaching of language skills into tasks or projects to provide direction and coherence. It also coincided with the development of multimedia technology (providing text, graphics, sound and animation) as well as Computer-mediated communication (CMC). CALL in this period saw a definitive shift from the use of the computer for drill and tutorial purposes (the computer as a finite, authoritative base for a specific task) to a medium for extending education beyond the classroom. Multimedia CALL started with interactive laser videodiscs such as Montevidisco (Schneider & Bennion 1984)[25]and A la rencontre de Philippe (Fuerstenberg 1993),[26]both of which were simulations of situations where the learner played a key role. These programs later were transferred to CD-ROMs, and new role-playing games (RPGs) such as Who is Oscar Lake? made their appearance in a range of different languages.
In a later publication, Warschauer renamed the first phase of CALL from Behavioristic CALL to Structural CALL and also revised the dates of the three phases (Warschauer 2000).[27]
Bax (2003)[28]took issue with Warschauer & Healey (1998) and Warschauer (2000) and proposed his own three-stage analysis: Restricted CALL, Open CALL and Integrated CALL.
See also Bax & Chambers (2006)[29]and Bax (2011),[30]in which the topic of "normalisation" is revisited.
A basic use of CALL is in vocabulary acquisition using flashcards, which requires quite simple programs. Such programs often make use of spaced repetition, a technique whereby the learner is presented with the vocabulary items that need to be committed to memory at increasingly longer intervals until long-term retention is achieved. This has led to the development of a number of applications known as spaced repetition systems (SRS),[31]including the generic Anki or SuperMemo packages and programs such as BYKI[32]and phase-6,[33]which have been designed specifically for learners of foreign languages.
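The scheduling idea behind spaced repetition can be illustrated with a deliberately simplified interval rule; it is loosely inspired by the SM-2 family of algorithms but is not the actual scheduler used by Anki, SuperMemo, BYKI or phase-6. Each correct recall lengthens the interval before the item reappears, while a failure resets it:

```python
from datetime import date, timedelta

def next_review(interval_days, ease, recalled_correctly):
    """Return (new_interval_days, new_ease) after one review.

    A deliberately simplified spaced-repetition rule: correct answers
    multiply the interval by an 'ease' factor, failures reset it to one day.
    """
    if recalled_correctly:
        new_interval = max(1, round(interval_days * ease))
        new_ease = ease + 0.05           # the item is getting easier
    else:
        new_interval = 1                 # start over tomorrow
        new_ease = max(1.3, ease - 0.2)  # the item is harder than thought
    return new_interval, new_ease

# One vocabulary card reviewed three times: correct, correct, then forgotten.
interval, ease = 1, 2.5
for outcome in (True, True, False):
    interval, ease = next_review(interval, ease, outcome)
    print(f"next review on {date.today() + timedelta(days=interval)} "
          f"(interval {interval} days, ease {ease:.2f})")
```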
Above all, careful consideration must be given to pedagogy in designing CALL software, but publishers of CALL software tend to follow the latest trend, regardless of its desirability. Moreover, approaches to teaching foreign languages are constantly changing, dating back to grammar-translation, through the direct method, audio-lingualism and a variety of other approaches, to the more recent communicative approach and constructivism (Decoo 2001).[34]
Designing and creating CALL software is an extremely demanding task, calling upon a range of skills. Major CALL development projects are therefore usually managed by a team of people whose different skills complement one another.
CALL inherently supports learner autonomy, the final of the eight conditions that Egbert et al. (2007) cite as "Conditions for Optimal Language Learning Environments". Learner autonomy places the learner firmly in control so that he or she "decides on learning goals" (Egbert et al., 2007, p. 8).[36]
It is all too easy when designing CALL software to take the comfortable route and produce a set of multiple-choice and gap-filling exercises, using a simple authoring tool (Bangs 2011),[37]but CALL is much more than this; Stepp-Greany (2002), for example, describes the creation and management of an environment incorporating a constructivist and whole language philosophy. According to constructivist theory, learners are active participants in tasks in which they "construct" new knowledge derived from their prior experience. Learners also assume responsibility for their learning, and the teacher is a facilitator rather than a purveyor of knowledge. Whole language theory embraces constructivism and postulates that language learning moves from the whole to the part, rather than building sub-skills to lead towards the higher abilities of comprehension, speaking, and writing. It also emphasises that comprehending, speaking, reading, and writing skills are interrelated, reinforcing each other in complex ways. Language acquisition is, therefore, an active process in which the learner focuses on cues and meaning and makes intelligent guesses. Additional demands are placed upon teachers working in a technological environment incorporating constructivist and whole language theories. The development of teachers' professional skills must include new pedagogical as well as technical and management skills. Regarding the issue of teacher facilitation in such an environment, the teacher has a key role to play, but there could be a conflict between the aim to create an atmosphere for learner independence and the teacher's natural feelings of responsibility. In order to avoid learners' negative perceptions, Stepp-Greany points out that it is especially important for the teacher to continue to address their needs, especially those of low-ability learners.[38]
Language teachers have been avid users of technology for a very long time. Gramophone records were among the first technological aids to be used by language teachers in order to present students with recordings of native speakers' voices, and broadcasts from foreign radio stations were used to make recordings on reel-to-reel tape recorders. Other examples of technological aids that have been used in the foreign language classroom include slide projectors, film-strip projectors, film projectors, videocassette recorders and DVD players. In the early 1960s, integrated courses (which were often described as multimedia courses) began to appear. Examples of such courses are Ecouter et Parler (consisting of a coursebook and tape recordings)[39]and Deutsch durch die audiovisuelle Methode (consisting of an illustrated coursebook, tape recordings and a film-strip – based on the Structuro-Global Audio-Visual method).[40]
During the 1970s and 1980s standard microcomputers were incapable of producing sound and they had poor graphics capability. This represented a step backwards for language teachers, who by this time had become accustomed to using a range of different media in the foreign language classroom. The arrival of the multimedia computer in the early 1990s was therefore a major breakthrough as it enabled text, images, sound and video to be combined in one device and the integration of the four basic skills of listening, speaking, reading and writing (Davies 2011: Section 1).[41]
Examples of CALL programs for multimedia computers that were published on CD-ROM and DVD from the mid-1990s onwards are described by Davies (2010: Section 3).[41]CALL programs are still being published on CD-ROM and DVD, but Web-based multimedia CALL has now virtually supplanted these media.
Following the arrival of multimedia CALL, multimedia language centres began to appear in educational institutions. While multimedia facilities offer many opportunities for language learning with the integration of text, images, sound and video, these opportunities have often not been fully utilised. One of the main promises of CALL is the ability to individualise learning but, as with the language labs that were introduced into educational institutions in the 1960s and 1970s, the use of the facilities of multimedia centres has often devolved into rows of students all doing the same drills (Davies 2010: Section 3.1).[41]There is therefore a danger that multimedia centres may go the same way as the language labs. Following a boom period in the 1970s, language labs went rapidly into decline. Davies (1997: p. 28) lays the blame mainly on the failure to train teachers to use language labs, both in terms of operation and in terms of developing new methodologies, but there were other factors such as poor reliability, lack of materials and a lack of good ideas.[42]
Managing a multimedia language centre requires not only staff who have a knowledge of foreign languages and language teaching methodology but also staff with technical know-how and budget management ability, as well as the ability to combine all these into creative ways of taking advantage of what the technology can offer. A centre manager usually needs assistants for technical support, for managing resources and even the tutoring of students. Multimedia centres lend themselves to self-study and potentially self-directed learning, but this is often misunderstood. The simple existence of a multimedia centre does not automatically lead to students learning independently. Significant investment of time is essential for materials development and creating an atmosphere conducive to self-study. Unfortunately, administrators often have the mistaken belief that buying hardware by itself will meet the needs of the centre, allocating 90% of its budget to hardware and virtually ignoring software and staff training needs (Davies et al. 2011: Foreword).[43]

Self-access language learning centres or independent learning centres have emerged partially independently and partially in response to these issues. In self-access learning, the focus is on developing learner autonomy through varying degrees of self-directed learning, as opposed to (or as a complement to) classroom learning. In many centres learners access materials and manage their learning independently, but they also have access to staff for help. Many self-access centres are heavy users of technology and an increasing number of them are now offering online self-access learning opportunities. Some centres have developed novel ways of supporting language learning outside the context of the language classroom (also called 'language support') by developing software to monitor students' self-directed learning and by offering online support from teachers. Centre managers and support staff may need to have new roles defined for them to support students' efforts at self-directed learning: v. Mozzon-McPherson & Vismans (2001), who refer to a new job description, namely that of the "language adviser".[44]
The emergence of the World Wide Web (now known simply as "the Web") in the early 1990s marked a significant change in the use of communications technology for all computer users. Email and other forms of electronic communication had been in existence for many years, but the launch of Mosaic, the first graphical Web browser, in 1993 brought about a radical change in the ways in which we communicate electronically. The launch of the Web in the public arena immediately began to attract the attention of language teachers. Many language teachers were already familiar with the concept of hypertext on stand-alone computers, which made it possible to set up non-sequential structured reading activities for language learners in which they could point to items of text or images on a page displayed on the computer screen and branch to any other pages, e.g. in a so-called "stack" as implemented in the HyperCard program on Apple Mac computers. The Web took this one stage further by creating a worldwide hypertext system that enabled the user to branch to different pages on computers anywhere in the world simply by pointing and clicking at a piece of text or an image. This opened up access to thousands of authentic foreign-language websites to teachers and students that could be used in a variety of ways. A problem that arose, however, was that this could lead to a good deal of time-wasting if Web browsing was used in an unstructured way (Davies 1997: pp. 42–43),[42]and language teachers responded by developing more structured activities and online exercises (Leloup & Ponterio 2003).[45]Davies (2010) lists over 500 websites, where links to online exercises can be found, along with links to online dictionaries and encyclopaedias, concordancers, translation aids and other miscellaneous resources of interest to the language teacher and learner.[46]
The launch of the (free) Hot Potatoes (Holmes & Arneil) authoring tool, which was first demonstrated publicly at the EUROCALL 1998 conference, made it possible for language teachers to create their own online interactive exercises. Other useful tools are produced by the same authors.[47]
In its early days the Web could not compete seriously with multimedia CALL on CD-ROM and DVD. Sound and video quality was often poor, and interaction was slow. But now the Web has caught up. Sound and video are of high quality and interaction has improved tremendously, although this does depend on sufficient bandwidth being available, which is not always the case, especially in remote rural areas and developing countries. One area in which CD-ROMs and DVDs are still superior is in the presentation of listen/respond/playback activities, although such activities on the Web are continually improving.
Since the early 2000s there has been a boom in the development of so-called Web 2.0 applications. Contrary to popular opinion, Web 2.0 is not a new version of the Web; rather, it implies a shift in emphasis from Web browsing, which is essentially a one-way process (from the Web to the end-user), to making use of Web applications in the same way as one uses applications on a desktop computer. It also implies more interaction and sharing. Walker, Davies & Hewer (2011: Section 2.1)[48]list a wide range of Web 2.0 applications that language teachers are using.
There is no doubt that the Web has proved to be a main focus for language teachers, who are making increasingly imaginative use of its wide range of facilities: see Dudeney (2007)[50]and Thomas (2008).[51]Above all, the use of Web 2.0 tools calls for a careful reexamination of the role of the teacher in the classroom (Richardson 2006).[52]
Corpora have been used for many years as the basis of linguistic research and also for the compilation of dictionaries and reference works such as the Collins Cobuild series, published by HarperCollins.[53]Tribble & Barlow (2001),[54]Sinclair (2004)[55]and McEnery & Wilson (2011)[56]describe a variety of ways in which corpora can be used in language teaching.
An early reference to the use of electronic concordancers in language teaching can be found in Higgins & Johns (1984: pp. 88–94),[57]and many examples of their practical use in the classroom are described by Lamy & Klarskov Mortensen (2010).[58]
It was Tim Johns (1991), however, who raised the profile of the use of concordancers in the language classroom with his concept of Data-driven learning (DDL).[59]DDL encourages learners to work out their own rules about the meaning of words and their usage by using a concordancer to locate examples in a corpus of authentic texts. It is also possible for the teacher to use a concordancer to find examples of authentic usage to demonstrate a point of grammar or typical collocations, and to generate exercises based on the examples found. Various types of concordancers and where they can be obtained are described by Lamy & Klarskov Mortensen (2011).[58]
Robb (2003) shows how it is possible to use Google as a concordancer, but he also points out a number of drawbacks, for instance there is no control over the educational level, nationality, or other characteristics of the creators of the texts that are found, and the presentation of the examples is not as easy to read as the output of a dedicated concordancer that places the key words (i.e. the search terms) in context.[60]
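At its core, a concordancer of the kind used for DDL performs a keyword-in-context (KWIC) search: every occurrence of the search term is displayed with a fixed window of surrounding text, aligned on the keyword. The following is a minimal sketch (illustrative only; a classroom concordancer would of course work on a much larger corpus):

```python
import re

def kwic(corpus, keyword, width=30):
    """Print a keyword-in-context (KWIC) concordance for one search term."""
    for match in re.finditer(r"\b" + re.escape(keyword) + r"\b", corpus, re.IGNORECASE):
        start, end = match.start(), match.end()
        left = corpus[max(0, start - width):start].replace("\n", " ")
        right = corpus[end:end + width].replace("\n", " ")
        print(f"{left:>{width}} [{corpus[start:end]}] {right}")

corpus = ("She made a decision to leave. The committee made several decisions. "
          "He has made up his mind, having made the same mistake twice.")
kwic(corpus, "made")
```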
Virtual worlds date back to the adventure games and simulations of the 1970s, for example Colossal Cave Adventure, a text-only simulation in which the user communicated with the computer by typing commands at the keyboard. Language teachers discovered that it was possible to exploit these text-only programs by using them as the basis for discussion. Jones G. (1986) describes an experiment based on the Kingdom simulation, in which learners played roles as members of a council governing an imaginary kingdom. A single computer in the classroom was used to provide the stimulus for discussion, namely simulating events taking place in the kingdom: crop planting time, harvest time, unforeseen catastrophes, etc.[61]
The early adventure games and simulations led on to multi-user variants, which were known as MUDs (Multi-user domains). Like their predecessors, MUDs were text-only, with the difference that they were available to a wider online audience. MUDs then led on to MOOs (Multi-user domains object-oriented), which language teachers were able to exploit for teaching foreign languages and intercultural understanding: see Donaldson & Kötter (1999)[62]and Shield (2003).[63]
The next major breakthrough in the history of virtual worlds was the graphical user interface. Lucasfilm's Habitat (1986) was one of the first virtual worlds that was graphically based, albeit only in a two-dimensional environment. Each participant was represented by a visual avatar who could interact with other avatars using text chat.
Three-dimensional virtual worlds such as Traveler and Active Worlds, both of which appeared in the 1990s, were the next important development. Traveler included the possibility of audio communication (but not text chat) between avatars who were represented as disembodied heads in a three-dimensional abstract landscape. Svensson (2003) describes the Virtual Wedding Project, in which advanced students of English made use of Active Worlds as an arena for constructivist learning.[64]
The 3D world of Second Life was launched in 2003. Initially perceived as another role-playing game (RPG), it began to attract the interest of language teachers with the launch of the first of the series of SLanguages conferences in 2007.[65]Walker, Davies & Hewer (2011: Section 14.2.1)[48]and Molka-Danielsen & Deutschmann (2010)[66]describe a number of experiments and projects that focus on language learning in Second Life. See also the Wikipedia article Virtual world language learning.
To what extent Second Life and other virtual worlds will become established as important tools for teachers of foreign languages remains to be seen. It has been argued by Dudeney (2010) in his That's Life blog that Second Life is "too demanding and too unreliable for most educators". The subsequent discussion shows that this view is shared by many teachers, but many others completely disagree.[67]
Regardless of the pros and cons of Second Life, language teachers' interest in virtual worlds continues to grow. The joint EUROCALL/CALICO Virtual Worlds Special Interest Group[68]was set up in 2009, and there are now many areas in Second Life that are dedicated to language learning and teaching, for example the commercial area for learners of English, which is managed by Language Lab,[69]and free areas such as the region maintained by the Goethe-Institut[70]and the EduNation Islands.[71]There are also examples of simulations created specifically for language education, such as those produced by the EC-funded NIFLAR[72]and AVALON[73]projects. NIFLAR is implemented both in Second Life and in Opensim.
Human language technologies (HLT) comprise a number of areas of research and development that focus on the use of technology to facilitate communication in a multilingual information society. Human language technologies are areas of activity in departments of the European Commission that were formerly grouped under the heading language engineering (Gupta & Schulze 2011: Section 1.1).[74]
The parts of HLT of greatest interest to the language teacher are natural language processing (NLP), especially parsing, together with the areas of speech synthesis and speech recognition.
Speech synthesis has improved immeasurably in recent years. It is often used in electronic dictionaries to enable learners to find out how words are pronounced. At word level, speech synthesis is quite effective, the artificial voice often closely resembling a human voice. At phrase level and sentence level, however, there are often problems of intonation, resulting in speech production that sounds unnatural even though it may be intelligible. Speech synthesis as embodied in text to speech (TTS) applications is invaluable as a tool for unsighted or partially sighted people. Gupta & Schulze (2010: Section 4.1) list several examples of speech synthesis applications.[74]
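In practice, adding synthesised pronunciation to a CALL exercise or electronic dictionary usually means calling an existing TTS engine rather than building one. As a minimal sketch, assuming the third-party pyttsx3 package (a wrapper around the operating system's speech engines) is installed, a short list of headwords could be read aloud like this:

```python
# Minimal sketch: read dictionary headwords aloud with an off-the-shelf TTS engine.
# Assumes the third-party pyttsx3 package is installed (pip install pyttsx3).
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 140)        # slow the voice down slightly for learners

for headword in ["through", "thorough", "thought", "though"]:
    print(f"Pronouncing: {headword}")
    engine.say(headword)

engine.runAndWait()                    # block until all queued utterances are spoken
```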
Speech recognition is less advanced than speech synthesis. It has been used in a number of CALL programs, in which it is usually described as automatic speech recognition (ASR). ASR is not easy to implement. Ehsani & Knodt (1998) summarise the core problem as follows:
"Complex cognitive processes account for the human ability to associate acoustic signals with meanings and intentions. For a computer, on the other hand, speech is essentially a series of digital values. However, despite these differences, the core problem of speech recognition is the same for both humans and machines: namely, of finding the best match between a given speech sound and its corresponding word string. Automatic speech recognition technology attempts to simulate and optimize this process computationally."[75]
Programs embodying ASR normally provide a native speaker model that the learner is requested to imitate, but the matching process is not 100% reliable and may result in a learner's perfectly intelligible attempt to pronounce a word or phrase being rejected (Davies 2010: Section 3.4.6 and Section 3.4.7).[41]
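Whatever the underlying acoustic model, the final accept/reject step comes down to comparing a match score against a threshold. The toy sketch below sidesteps the acoustic side entirely and scores a hypothetical ASR transcription against the target phrase using simple string similarity, purely to illustrate how a fixed threshold turns a score into an acceptance decision, and why a strict threshold can reject attempts that a human listener would find perfectly intelligible:

```python
from difflib import SequenceMatcher

def accept_attempt(target, transcription, threshold=0.85):
    """Accept or reject an attempt by comparing the (hypothetical) ASR
    transcription of the learner's speech with the target phrase."""
    score = SequenceMatcher(None, target.lower(), transcription.lower()).ratio()
    return score >= threshold, score

target = "I would like a cup of coffee"
attempts = [
    "I would like a cup of coffee",   # recognised exactly as the target
    "I would like a cap of coffee",   # slight mis-recognition of one word
    "I want a coffee",                # transcription diverges heavily from the target
]
for transcription in attempts:
    accepted, score = accept_attempt(target, transcription)
    print(f"{score:.2f}  {'accepted' if accepted else 'rejected'}  <- {transcription!r}")
```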
Parsing is used in a number of ways in CALL. Gupta & Schulze (2010: Section 5) describe how parsing may be used to analyse sentences, presenting the learner with a tree diagram that labels the constituent parts of speech of a sentence and shows the learner how the sentence is structured.[74]
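A constituent tree of the kind described can be produced with standard NLP tooling. The sketch below assumes the third-party NLTK package is installed and uses a tiny, invented context-free grammar purely for illustration; it parses one sentence and prints the labelled tree:

```python
# Minimal sketch: parse a sentence with a toy grammar and show its constituent tree.
# Assumes the third-party NLTK package is installed (pip install nltk).
import nltk

grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> Det N
    VP  -> V NP
    Det -> 'the' | 'a'
    N   -> 'student' | 'book'
    V   -> 'reads'
""")

parser = nltk.ChartParser(grammar)
sentence = "the student reads a book".split()

for tree in parser.parse(sentence):
    tree.pretty_print()   # draws the labelled constituent structure as ASCII art
```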
Parsing is also used in CALL programs to analyse the learner's input and diagnose errors. Davies (2002)[76]writes:
"Discrete error analysis and feedback were a common feature of traditional CALL, and the more sophisticated programs would attempt to analyse the learner's response, pinpoint errors, and branch to help and remedial activities. ... Error analysis in CALL is, however, a matter of controversy. Practitioners who come into CALL via the disciplines ofcomputational linguistics, e.g. Natural Language Processing (NLP) and Human Language Technologies (HLT), tend to be more optimistic about the potential of error analysis by computer than those who come into CALL via language teaching. [...] An alternative approach is the use of Artificial Intelligence (AI) techniques to parse the learner's response – so-calledintelligent CALL(ICALL)– but there is a gulf between those who favour the use of AI to develop CALL programs (Matthews 1994)[77]and, at the other extreme, those who perceive this approach as a threat to humanity (Last 1989:153)".[78]
Underwood (1989)[79]and Heift & Schulze (2007)[80]present a more positive picture of AI.
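Short of full AI-based parsing, the "discrete error analysis" mentioned in the quotation above can be approximated by comparing the learner's response with a stored model answer word by word. The sketch below is illustrative only and is not drawn from any of the systems cited; it uses a standard sequence comparison to pinpoint words that were changed, omitted or added:

```python
from difflib import SequenceMatcher

def pinpoint_errors(expected, response):
    """Report word-level differences between a model answer and a learner's response."""
    exp, got = expected.lower().split(), response.lower().split()
    for op, i1, i2, j1, j2 in SequenceMatcher(None, exp, got).get_opcodes():
        if op == "replace":
            print(f"Wrong word(s): {' '.join(got[j1:j2])!r} (expected {' '.join(exp[i1:i2])!r})")
        elif op == "delete":
            print(f"Missing word(s): {' '.join(exp[i1:i2])!r}")
        elif op == "insert":
            print(f"Extra word(s): {' '.join(got[j1:j2])!r}")

pinpoint_errors("she has gone to the market",
                "she has went to market")
```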
Research into speech synthesis, speech recognition and parsing, and into how these areas of NLP can be used in CALL, is the main focus of the NLP Special Interest Group[81]within the EUROCALL professional association and the ICALL Special Interest Group[82]within the CALICO professional association. The EUROCALL NLP SIG also maintains a Ning.[83]
The question of the impact of CALL in language learning and teaching has been raised at regular intervals ever since computers first appeared in educational institutions (Davies & Hewer 2011: Section 3).[84]Recent large-scale impact studies include the study edited by Fitzpatrick & Davies (2003)[85]and the EACEA (2009) study,[86]both of which were produced for the European Commission.
A distinction needs to be made between the impact and the effectiveness of CALL. Impact may be measured quantitatively and qualitatively in terms of the uptake and use of ICT in teaching foreign languages, issues of availability of hardware and software, budgetary considerations, Internet access, teachers' and learners' attitudes to the use of CALL,[87]changes in the ways in which languages are learnt and taught, and paradigm shifts in teachers' and learners' roles. Effectiveness, on the other hand, usually focuses on assessing to what extent ICT is a more effective way of teaching foreign languages compared to using traditional methods – and this is more problematic as so many variables come into play. Worldwide, the picture of the impact of CALL is extremely varied. Most developed nations work comfortably with the new technologies, but developing nations are often beset with problems of costs and broadband connectivity. Evidence on the effectiveness of CALL – as with the impact of CALL – is extremely varied and many research questions still need to be addressed and answered. Hubbard (2002) presents the results of a CALL research survey that was sent to 120 CALL professionals from around the world asking them to articulate a CALL research question they would like to see answered. Some of the questions have been answered but many more remain open.[88]Leakey (2011) offers an overview of current and past research in CALL and proposes a comprehensive model for evaluating the effectiveness of CALL platforms, programs and pedagogy.[89]
A crucial issue is the extent to which the computer is perceived as taking over the teacher's role. Warschauer (1996: p. 6) perceived the computer as playing an "intelligent" role, and claimed that a computer program "should ideally be able to understand a user's spoken input and evaluate it not just for correctness but also for appropriateness. It should be able to diagnose a student's problems with pronunciation, syntax, or usage and then intelligently decide among a range of options (e.g. repeating, paraphrasing, slowing down, correcting, or directing the student to background explanations)."[22]Jones C. (1986), on the other hand, rejected the idea of the computer being "some kind of inferior teacher-substitute" and proposed a methodology that focused more on what teachers could do with computer programs rather than what computer programs could do on their own: "in other words, treating the computer as they would any other classroom aid".[90]Warschauer's high expectations in 1996 have still not been fulfilled, and currently there is an increasing tendency for teachers to go down the route proposed by Jones, making use of a variety of new tools such as corpora and concordancers, interactive whiteboards[3]and applications for online communication.[4]
Since the advent of the Web there has been an explosion in online learning, but to what extent it is effective is open to criticism. Felix (2003) takes a critical look at popular myths attached to online learning from three perspectives, namely administrators, teachers and students. She concludes: "That costs can be saved in this ambitious enterprise is clearly a myth, as are expectations of saving time or replacing staff with machines."[91]
As for the effectiveness of CALL in promoting the four skills, Felix (2008) claims that there is "enough data in CALL to suggest positive effects on spelling, reading and writing", but more research is needed in order to determine its effectiveness in other areas, especially speaking online. She claims that students' perceptions of CALL are positive, but she qualifies this claim by stating that the technologies need to be stable and well supported, drawing attention to concerns that technical problems may interfere with the learning process. She also points out that older students may not feel comfortable with computers and younger students may not possess the necessary meta-skills for coping effectively in the challenging new environments. Training in computer literacy for both students and teachers is essential, and time constraints may pose additional problems. In order to achieve meaningful results she recommends "time-series analysis in which the same group of students is involved in experimental and control treatment for a certain amount of time and then switched – more than once if possible".[92]
Types of technology training in CALL for language teaching professionals certainly vary. Within second language teacher education programs, namely pre-service course work, we can find "online courses along with face-to-face courses", computer technology incorporated into a more general second language education course, "technology workshops", and "a series of courses offered throughout the teacher education programs, and even courses specifically designed for a CALL certificate and a CALL graduate degree".[93]The Organization for Economic Cooperation and Development has identified four levels of courses with online components, namely "web-supplemented, web-dependent, mixed mode and fully online".[94]
There is a rapidly growing interest in resources about the use of technology to deliver CALL. Journals that have published issues that "deal with how teacher education programs help prepare language teachers to use technology in their own classrooms" include Language Learning and Technology (2002) and Innovations in Language Learning and Teaching (2009). In addition, the TESOL international professional association's publication of technology standards for TESOL includes a chapter on the preparation of teacher candidates in technology use, as well as the upgrading of teacher educators to be able to provide such instruction. Both CALICO and EUROCALL have special interest groups for teacher education in CALL.[95]
A number of professional associations are dedicated to the promulgation of research, development and practice relating to the use of new technologies in language learning and teaching. Most of them organise conferences and publish journals on CALL.[96]
Hong, K. H. (2010) CALL teacher education as an impetus for L2 teachers in integrating technology. ReCALL, 22 (1), 53–69.doi:10.1017/s095834400999019X
Murray, D. E. (2013) A Case for Online English Language Teacher Education. The International Research Foundation for English Language Education. http://www.tirfonline.org/wp-content/uploads/2013/04/TIRF_OLTE_One-PageSpread_2013.pdf
|
https://en.wikipedia.org/wiki/Computer-assisted_language_learning
|
The current philosophy of CALL emphasizes student-centered materials that empower learners to work independently. These materials can be structured or unstructured but typically incorporate two key features: interactive and individualized learning. CALL employs tools that assist teachers in facilitating language learning, whether reinforcing classroom lessons or providing additional support to learners. The design of CALL materials typically integrates principles from language pedagogy and methodology, drawing from various learning theories such as behaviourism, cognitive theory, constructivism, and second-language acquisition theories like Stephen Krashen's.monitor hypothesis.
A combination of face-to-face teaching and CALL is usually referred to asblended learning. Blended learning is designed to increase learning potential and is more commonly found than pure CALL (Pegrum 2009: p. 27).[8]
See Davieset al.(2011: Section 1.1,What is CALL?).[9]See also Levy & Hubbard (2005), who raise the questionWhy call CALL "CALL"?[10]
CALL dates back to the 1960s, when it was first introduced on university mainframe computers. The PLATO project, initiated at the University of Illinois in 1960, is an important landmark in the early development of CALL (Marty 1981).[11]The advent of the microcomputer in the late 1970s brought computing within the range of a wider audience, resulting in a boom in the development of CALL programs and a flurry of publications of books on CALL in the early 1980s.
Dozens of CALL programs are currently available on the internet, at prices ranging from free to expensive,[12]and other programs are available only through university language courses.
There have been several attempts to document the history of CALL. Sanders (1995) covers the period from the mid-1960s to the mid-1990s, focusing on CALL in North America.[13]Delcloque (2000) documents the history of CALL worldwide, from its beginnings in the 1960s to the dawning of the new millennium.[14]Davies (2005) takes a look back at CALL's past and attempts to predict where it is going.[15]Hubbard (2009) offers a compilation of 74 key articles and book excerpts, originally published in the years 1988–2007, that give a comprehensive overview of the wide range of leading ideas and research results that have exerted an influence on the development of CALL or that show promise in doing so in the future.[16]A published review of Hubbard's collection can be found inLanguage Learning & Technology14, 3 (2010).[17]
Butler-Pascoe (2011) looks at the history of CALL from a different point of view, namely the evolution of CALL in the dual fields of educational technology and second/foreign language acquisition and the paradigm shifts experienced along the way.[18]
See also Davies et al. (2011: Section 2,History of CALL).[9]
During the 1980s and 1990s, several attempts were made to establish a CALL typology. A wide range of different types of CALL programs was identified by Davies & Higgins (1985),[19]Jones & Fortescue (1987),[20]Hardisty & Windeatt (1989)[21]and Levy (1997: pp. 118ff.).[2]These included gap-filling and Cloze programs, multiple-choice programs, free-format (text-entry) programs, adventures and simulations, action mazes, sentence-reordering programs, exploratory programs—and "total Cloze", a type of program in which the learner has to reconstruct a whole text. Most of these early programs still exist in modernised versions.
Since the 1990s, it has become increasingly difficult to categorise CALL as it now extends to the use ofblogs,wikis,social networking,podcasting,Web 2.0applications,language learning in virtual worldsandinteractive whiteboards(Davies et al. 2010: Section 3.7).[9]
Warschauer (1996)[22]and Warschauer & Healey (1998)[23]took a different approach. Rather than focusing on the typology of CALL, they identified three historical phases of CALL, classified according to their underlying pedagogical and methodological approaches:
Most CALL programs in Warschauer & Healey's first phase, Behavioristic CALL (1960s to 1970s), consisted of drill-and-practice materials in which the computer presented a stimulus and the learner provided a response. At first, both could be done only through text. The computer would analyse students' input and give feedback, and more sophisticated programs would react to students' mistakes by branching to help screens and remedial activities. While such programs and their underlying pedagogy still exist today, behaviouristic approaches to language learning have been rejected by most language teachers, and the increasing sophistication of computer technology has led CALL to other possibilities.
The second phase described by Warschauer & Healey, Communicative CALL, is based on thecommunicative approachthat became prominent in the late 1970s and 1980s (Underwood 1984).[24]In the communicative approach the focus is on using the language rather than analysis of the language, and grammar is taught implicitly rather than explicitly. It also allows for originality and flexibility in student output of language. The communicative approach coincided with the arrival of the PC, which made computing much more widely available and resulted in a boom in the development of software for language learning. The first CALL software in this phase continued to provide skill practice but not in a drill format—for example: paced reading, text reconstruction and language games—but the computer remained the tutor. In this phase, computers provided context for students to use the language, such as asking for directions to a place, and programs not designed for language learning such asSim City,SleuthandWhere in the World is Carmen Sandiego?were used for language learning. Criticisms of this approach include using the computer in an ad hoc and disconnected manner for more marginal aims rather than the central aims of language teaching.
The third phase of CALL described by Warschauer & Healey, Integrative CALL, starting from the 1990s, tried to address criticisms of the communicative approach by integrating the teaching of language skills into tasks or projects to provide direction and coherence. It also coincided with the development of multimedia technology (providing text, graphics, sound and animation) as well as Computer-mediated communication (CMC). CALL in this period saw a definitive shift from the use of the computer for drill and tutorial purposes (the computer as a finite, authoritative base for a specific task) to a medium for extending education beyond the classroom. Multimedia CALL started with interactive laser videodiscs such asMontevidisco(Schneider & Bennion 1984)[25]andA la rencontre de Philippe(Fuerstenberg 1993),[26]both of which were simulations of situations where the learner played a key role. These programs later were transferred to CD-ROMs, and newrole-playing games(RPGs) such asWho is Oscar Lake?made their appearance in a range of different languages.
In a later publication Warschauer changed the name of the first phase of CALL from Behavioristic CALL to Structural CALL and also revised the dates of the three phases (Warschauer 2000):[27]
Bax (2003)[28]took issue with Warschauer & Haley (1998) and Warschauer (2000) and proposed these three phases:
See also Bax & Chambers (2006)[29]and Bax (2011),[30]in which the topic of "normalisation" is revisited.
A basic use of CALL is in vocabulary acquisition usingflashcards, which requires quite simple programs. Such programs often make use ofspaced repetition, a technique whereby the learner is presented with the vocabulary items that need to be committed to memory at increasingly longer intervals until long-term retention is achieved. This has led to the development of a number of applications known as spaced repetition systems (SRS),[31]including the genericAnkiorSuperMemopackage and programs such as BYKI[32]and phase-6,[33]which have been designed specifically for learners of foreign languages.
Above all, careful consideration must be given topedagogyin designing CALL software, but publishers of CALL software tend to follow the latest trend, regardless of its desirability. Moreover, approaches to teaching foreign languages are constantly changing, dating back togrammar-translation, through thedirect method,audio-lingualismand a variety of other approaches, to the more recentcommunicative approachandconstructivism(Decoo 2001).[34]
Designing and creating CALL software is an extremely demanding task, calling upon a range of skills. Major CALL development projects are usually managed by a team of people:
CALL inherently supportslearner autonomy, the final of the eight conditions that Egbert et al. (2007) cite as "Conditions for Optimal Language Learning Environments". Learner autonomy places the learner firmly in control so that he or she "decides on learning goals" (Egbert et al., 2007, p. 8).[36]
It is all too easy when designing CALL software to take the comfortable route and produce a set of multiple-choice and gap-filling exercises, using a simple authoring tool (Bangs 2011),[37]but CALL is much more than this; Stepp-Greany (2002), for example, describes the creation and management of an environment incorporating aconstructivistandwhole languagephilosophy. According to constructivist theory, learners are active participants in tasks in which they "construct" new knowledge derived from their prior experience. Learners also assume responsibility for their learning, and the teacher is a facilitator rather than a purveyor of knowledge. Whole language theory embraces constructivism and postulates that language learning moves from the whole to the part, rather than building sub-skills to lead towards the higher abilities of comprehension, speaking, and writing. It also emphasises that comprehending, speaking, reading, and writing skills are interrelated, reinforcing each other in complex ways. Language acquisition is, therefore, an active process in which the learner focuses on cues and meaning and makes intelligent guesses. Additional demands are placed upon teachers working in a technological environment incorporating constructivist and whole language theories. The development of teachers' professional skills must include new pedagogical as well as technical and management skills. Regarding the issue of teacher facilitation in such an environment, the teacher has a key role to play, but there could be a conflict between the aim to create an atmosphere for learner independence and the teacher's natural feelings of responsibility. In order to avoid learners' negative perceptions, Stepp-Greany points out that it is especially important for the teacher to continue to address their needs, especially those of low-ability learners.[38]
Language teachers have been avid users of technology for a very long time. Gramophone records were among the first technological aids to be used by language teachers in order to present students with recordings of native speakers' voices, and broadcasts from foreign radio stations were used to make recordings on reel-to-reel tape recorders. Other examples of technological aids that have been used in the foreign language classroom include slide projectors, film-strip projectors, film projectors, videocassette recorders and DVD players. In the early 1960s, integrated courses (which were often described as multimedia courses) began to appear. Examples of such courses areEcouter et Parler(consisting of a coursebook and tape recordings)[39]andDeutsch durch die audiovisuelle Methode(consisting of an illustrated coursebook, tape recordings and a film-strip – based on the Structuro-Global Audio-Visual method).[40]
During the 1970s and 1980s standard microcomputers were incapable of producing sound and they had poor graphics capability. This represented a step backwards for language teachers, who by this time had become accustomed to using a range of different media in the foreign language classroom. The arrival of the multimedia computer in the early 1990s was therefore a major breakthrough as it enabled text, images, sound and video to be combined in one device and the integration of the four basic skills of listening, speaking, reading and writing (Davies 2011: Section 1).[41]
Examples of CALL programs for multimedia computers that were published on CD-ROM and DVD from the mid-1990s onwards are described by Davies (2010: Section 3).[41]CALL programs are still being published on CD-ROM and DVD, but Web-based multimedia CALL has now virtually supplanted these media.
Following the arrival of multimedia CALL, multimedia language centres began to appear in educational institutions. While multimedia facilities offer many opportunities for language learning with the integration of text, images, sound and video, these opportunities have often not been fully utilised. One of the main promises of CALL is the ability to individualise learning but, as with the language labs that were introduced into educational institutions in the 1960s and 1970s, the use of the facilities of multimedia centres has often devolved into rows of students all doing the same drills (Davies 2010: Section 3.1).[41]There is therefore a danger that multimedia centres may go the same way as the language labs. Following a boom period in the 1970s, language labs went rapidly into decline. Davies (1997: p. 28) lays the blame mainly on the failure to train teachers to use language labs, both in terms of operation and in terms of developing new methodologies, but there were other factors such as poor reliability, lack of materials and a lack of good ideas.[42]
Managing a multimedia language centre requires not only staff who have a knowledge of foreign languages and language teaching methodology but also staff with technical know-how and budget management ability, as well as the ability to combine all these into creative ways of taking advantage of what the technology can offer. A centre manager usually needs assistants for technical support, for managing resources and even the tutoring of students. Multimedia centres lend themselves to self-study and potentially self-directed learning, but this is often misunderstood. The simple existence of a multimedia centre does not automatically lead to students learning independently. Significant investment of time is essential for materials development and creating an atmosphere conducive to self-study. Unfortunately, administrators often have the mistaken belief that buying hardware by itself will meet the needs of the centre, allocating 90% of its budget to hardware and virtually ignoring software and staff training needs (Davies et al. 2011:Foreword).[43]Self-access language learning centresor independent learning centres have emerged partially independently and partially in response to these issues. In self-access learning, the focus is on developing learner autonomy through varying degrees of self-directed learning, as opposed to (or as a complement to) classroom learning. In many centres learners access materials and manage their learning independently, but they also have access to staff for help. Many self-access centres are heavy users of technology and an increasing number of them are now offering online self-access learning opportunities. Some centres have developed novel ways of supporting language learning outside the context of the language classroom (also called 'language support') by developing software to monitor students' self-directed learning and by offering online support from teachers. Centre managers and support staff may need to have new roles defined for them to support students' efforts at self-directed learning: v. Mozzon-McPherson & Vismans (2001), who refer to a new job description, namely that of the "language adviser".[44]
The emergence of theWorld Wide Web(now known simply as "the Web") in the early 1990s marked a significant change in the use of communications technology for all computer users.Emailand other forms ofelectronic communicationhad been in existence for many years, but the launch ofMosaic, the first graphicalWeb browser, in 1993 brought about a radical change in the ways in which we communicate electronically. The launch of the Web in the public arena immediately began to attract the attention of language teachers. Many language teachers were already familiar with the concept ofhypertexton stand-alone computers, which made it possible to set up non-sequential structured reading activities for language learners in which they could point to items of text or images on a page displayed on the computer screen and branch to any other pages, e.g. in a so-called "stack" as implemented in theHyperCardprogram on Apple Mac computers. The Web took this one stage further by creating a worldwide hypertext system that enabled the user to branch to different pages on computers anywhere in the world simply by pointing and clicking at a piece of text or an image. This opened up access to thousands of authentic foreign-language websites to teachers and students that could be used in a variety of ways. A problem that arose, however, was that this could lead to a good deal of time-wasting if Web browsing was used in an unstructured way (Davies 1997: pp. 42–43),[42]and language teachers responded by developing more structured activities and online exercises (Leloup & Ponterio 2003).[45]Davies (2010) lists over 500 websites, where links to online exercises can be found, along with links to online dictionaries and encyclopaedias, concordancers, translation aids and other miscellaneous resources of interest to the language teacher and learner.[46]
The launch of the (free)Hot Potatoes(Holmes & Arneil) authoring tool, which was first demonstrated publicly at the EUROCALL 1998 conference, made it possible for language teachers to create their own online interactive exercises. Other useful tools are produced by the same authors.[47]
In its early days the Web could not compete seriously withmultimediaCALL on CD-ROM and DVD. Sound and video quality was often poor, and interaction was slow. But now the Web has caught up. Sound and video are of high quality and interaction has improved tremendously, although this does depend on sufficient bandwidth being available, which is not always the case, especially in remote rural areas and developing countries. One area in which CD-ROMs and DVDs are still superior is in the presentation of listen/respond/playback activities, although such activities on the Web are continually improving.
Since the early 2000s there has been a boom in the development of so-calledWeb 2.0applications. Contrary to popular opinion, Web 2.0 is not a new version of the Web, rather it implies a shift in emphasis from Web browsing, which is essentially a one-way process (from the Web to the end-user), to making use of Web applications in the same way as one uses applications on a desktop computer. It also implies more interaction and sharing. Walker, Davies & Hewer (2011: Section 2.1)[48]list the following examples of Web 2.0 applications that language teachers are using:
There is no doubt that the Web has proved to be a main focus for language teachers, who are making increasingly imaginative use of its wide range of facilities: see Dudeney (2007)[50]and Thomas (2008).[51]Above all, the use of Web 2.0 tools calls for a careful reexamination of the role of the teacher in the classroom (Richardson 2006).[52]
Corpora have been used for many years as the basis of linguistic research and also for the compilation of dictionaries and reference works such as the Collins Cobuild series, published by HarperCollins.[53]Tribble & Barlow (2001),[54]Sinclair (2004)[55]and McEnery & Wilson (2011)[56]describe a variety of ways in which corpora can be used in language teaching.
An early reference to the use of electronic concordancers in language teaching can be found in Higgins & Johns (1984: pp. 88–94),[57]and many examples of their practical use in the classroom are described by Lamy & Klarskov Mortensen (2010).[58]
It was Tim Johns (1991), however, who raised the profile of the use of concordancers in the language classroom with his concept of Data-driven learning (DDL).[59]DDL encourages learners to work out their own rules about the meaning of words and their usage by using a concordancer to locate examples in a corpus of authentic texts. It is also possible for the teacher to use a concordancer to find examples of authentic usage to demonstrate a point of grammar or typical collocations, and to generate exercises based on the examples found. Various types of concordancers and where they can be obtained are described by Lamy & Klarskov Mortensen (2011).[58]
Robb (2003) shows how it is possible to use Google as a concordancer, but he also points out a number of drawbacks, for instance there is no control over the educational level, nationality, or other characteristics of the creators of the texts that are found, and the presentation of the examples is not as easy to read as the output of a dedicated concordancer that places the key words (i.e. the search terms) in context.[60]
Virtual worldsdate back to the adventure games and simulations of the 1970s, for exampleColossal Cave Adventure, a text-only simulation in which the user communicated with the computer by typing commands at the keyboard. Language teachers discovered that it was possible to exploit these text-only programs by using them as the basis for discussion. Jones G. (1986) describes an experiment based on the Kingdom simulation, in which learners played roles as members of a council governing an imaginary kingdom. A single computer in the classroom was used to provide the stimulus for discussion, namely simulating events taking place in the kingdom: crop planting time, harvest time, unforeseen catastrophes, etc.[61]
The early adventure games and simulations led on to multi-user variants, which were known asMUDs(Multi-user domains). Like their predecessors, MUDs were text-only, with the difference that they were available to a wider online audience. MUDs then led on toMOOs(Multi-user domains object-oriented), which language teachers were able to exploit for teaching foreign languages and intercultural understanding: see Donaldson & Kötter (1999)[62]and (Shield 2003).[63]
The next major breakthrough in the history of virtual worlds was the graphical user interface.Lucasfilm's Habitat(1986), was one of the first virtual worlds that was graphically based, albeit only in a two-dimensional environment. Each participant was represented by a visual avatar who could interact with other avatars using text chat.
Three-dimensional virtual worlds such as Traveler andActive Worlds, both of which appeared in the 1990s, were the next important development. Traveler included the possibility of audio communication (but not text chat) between avatars who were represented as disembodied heads in a three-dimensional abstract landscape. Svensson (2003) describes the Virtual Wedding Project, in which advanced students of English made use of Active Worlds as an arena for constructivist learning.[64]
The 3D world ofSecond Lifewas launched in 2003. Initially perceived as anotherrole-playing game(RPG), it began to attract the interest of language teachers with the launch of the first of the series of SLanguages conferences in 2007.[65]Walker, Davies & Hewer (2011: Section 14.2.1)[48]and Molka-Danielsen & Deutschmann (2010)[66]describe a number of experiments and projects that focus on language learning in Second Life. See also the Wikipedia articleVirtual world language learning.
To what extent Second Life and other virtual worlds will become established as important tools for teachers of foreign languages remains to be seen. It has been argued by Dudeney (2010) in hisThat's Lifeblog that Second Life is "too demanding and too unreliable for most educators". The subsequent discussion shows that this view is shared by many teachers, but many others completely disagree.[67]
Regardless of the pros and cons of Second Life, language teachers' interest in virtual worlds continues to grow. The joint EUROCALL/CALICO Virtual Worlds Special Interest Group[68]was set up in 2009, and there are now many areas in Second Life that are dedicated to language learning and teaching, for example the commercial area for learners of English, which is managed by Language Lab,[69]and free areas such as the region maintained by the Goethe-Institut[70]and the EduNation Islands.[71]There are also examples of simulations created specifically for language education, such as those produced by the EC-funded NIFLAR[72]and AVALON[73]projects. NIFLAR is implemented both in Second Life and inOpensim.
Human language technologies (HLT) comprise a number of areas of research and development that focus on the use of technology to facilitate communication in a multilingual information society. Human language technologies are areas of activity in departments of the European Commission that were formerly grouped under the headinglanguage engineering(Gupta & Schulze 2011: Section 1.1).[74]
The areas of HLT that are of greatest interest to the language teacher arenatural language processing(NLP), especiallyparsing, together withspeech synthesisandspeech recognition.
Speech synthesis has improved immeasurably in recent years. It is often used in electronic dictionaries to enable learners to find out how words are pronounced. At word level, speech synthesis is quite effective, the artificial voice often closely resembling a human voice. At phrase level and sentence level, however, there are often problems of intonation, resulting in speech production that sounds unnatural even though it may be intelligible. Speech synthesis as embodied intext to speech(TTS) applications is invaluable as a tool for unsighted or partially sighted people. Gupta & Schulze (2010: Section 4.1) list several examples of speech synthesis applications.[74]
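By way of illustration only, the following sketch shows how a simple "hear this item" feature might call an off-the-shelf synthesizer. It uses the pyttsx3 library, which wraps the speech engines built into common operating systems; the word list and speaking rate are invented for the example.

# Text-to-speech sketch for a simple "hear this word or phrase" feature,
# using pyttsx3, a wrapper around the operating system's built-in synthesizer.
# Assumes: pip install pyttsx3
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 140)  # slow the voice slightly for learners

# Word-level output is usually acceptable; longer phrases tend to expose
# the intonation problems mentioned above.
for item in ["threshold", "thorough", "Could you tell me the way to the station?"]:
    engine.say(item)

engine.runAndWait()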
Speech recognition is less advanced than speech synthesis. It has been used in a number of CALL programs, in which it is usually described asautomatic speech recognition(ASR). ASR is not easy to implement. Ehsani & Knodt (1998) summarise the core problem as follows:
"Complex cognitive processes account for the human ability to associate acoustic signals with meanings and intentions. For a computer, on the other hand, speech is essentially a series of digital values. However, despite these differences, the core problem of speech recognition is the same for both humans and machines: namely, of finding the best match between a given speech sound and its corresponding word string. Automatic speech recognition technology attempts to simulate and optimize this process computationally."[75]
Programs embodying ASR normally provide a native speaker model that the learner is requested to imitate, but the matching process is not 100% reliable and may result in a learner's perfectly intelligible attempt to pronounce a word or phrase being rejected (Davies 2010: Section 3.4.6 and Section 3.4.7).[41]
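The sketch below is a deliberately crude illustration of this matching problem, not a reproduction of any actual CALL program: it transcribes a learner's recording with the SpeechRecognition library and compares the transcription with a target phrase, so an intelligible attempt can still score poorly when the recognizer hears something different. The file name and target phrase are invented.

# Crude ASR-based pronunciation check: transcribe the learner's recording and
# compare it word by word with the target phrase. Real CALL systems match the
# acoustic signal more directly; this only illustrates why intelligible
# attempts may still be rejected when the recognizer's transcription differs.
# Assumes: pip install SpeechRecognition, plus a WAV recording of the attempt.
import speech_recognition as sr

TARGET = "where is the railway station"        # model phrase (illustrative)
recognizer = sr.Recognizer()

with sr.AudioFile("learner_attempt.wav") as source:   # hypothetical file name
    audio = recognizer.record(source)

try:
    heard = recognizer.recognize_google(audio).lower()
except sr.UnknownValueError:                   # recognizer could not decode it
    heard = ""

matched = [w for w in TARGET.split() if w in heard.split()]
score = len(matched) / len(TARGET.split())
print(f"Recognised: {heard!r}  match score: {score:.0%}")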
Parsing is used in a number of ways in CALL. Gupta & Schulze (2010: Section 5) describe how parsing may be used to analyse sentences, presenting the learner with a tree diagram that labels the constituent parts of speech of a sentence and shows the learner how the sentence is structured.[74]
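A toy version of this kind of constituent analysis can be written with NLTK, as in the sketch below; the grammar and the example sentence are invented for illustration, whereas a real CALL parser would rely on a far larger grammar or a statistical parser.

# Toy constituent parse: a hand-written grammar is used to label the parts of
# a sentence and draw its tree, the kind of display described above.
# Assumes: pip install nltk
import nltk

grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> Det N
VP -> V NP
Det -> 'the' | 'a'
N  -> 'dog' | 'ball'
V  -> 'chased' | 'found'
""")

parser = nltk.ChartParser(grammar)
sentence = "the dog chased a ball".split()

for tree in parser.parse(sentence):
    tree.pretty_print()   # prints an ASCII tree diagram of the constituents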
Parsing is also used in CALL programs to analyse the learner's input and diagnose errors. Davies (2002)[76]writes:
"Discrete error analysis and feedback were a common feature of traditional CALL, and the more sophisticated programs would attempt to analyse the learner's response, pinpoint errors, and branch to help and remedial activities. ... Error analysis in CALL is, however, a matter of controversy. Practitioners who come into CALL via the disciplines ofcomputational linguistics, e.g. Natural Language Processing (NLP) and Human Language Technologies (HLT), tend to be more optimistic about the potential of error analysis by computer than those who come into CALL via language teaching. [...] An alternative approach is the use of Artificial Intelligence (AI) techniques to parse the learner's response – so-calledintelligent CALL(ICALL)– but there is a gulf between those who favour the use of AI to develop CALL programs (Matthews 1994)[77]and, at the other extreme, those who perceive this approach as a threat to humanity (Last 1989:153)".[78]
Underwood (1989)[79]and Heift & Schulze (2007)[80]present a more positive picture of AI.
Research into speech synthesis, speech recognition and parsing and how these areas of NLP can be used in CALL are the main focus of the NLP Special Interest Group[81]within theEUROCALLprofessional association and the ICALL Special Interest Group[82]within theCALICOprofessional association. The EUROCALL NLP SIG also maintains a Ning.[83]
The question of the impact of CALL in language learning and teaching has been raised at regular intervals ever since computers first appeared in educational institutions (Davies & Hewer 2011: Section 3).[84]Recent large-scale impact studies include the study edited by Fitzpatrick & Davies (2003)[85]and the EACEA (2009) study,[86]both of which were produced for the European Commission.
A distinction needs to be made between the impact and the effectiveness of CALL. Impact may be measured quantitatively and qualitatively in terms of the uptake and use ofICTin teaching foreign languages, issues of availability of hardware and software, budgetary considerations, Internet access, teachers' and learners' attitudes to the use of CALL,[87]changes in the ways in which languages are learnt and taught, and paradigm shifts in teachers' and learners' roles. Effectiveness, on the other hand, usually focuses on assessing to what extent ICT is a more effective way of teaching foreign languages compared to using traditional methods – and this is more problematic as so many variables come into play. Worldwide, the picture of the impact of CALL is extremely varied. Most developed nations work comfortably with the new technologies, but developing nations are often beset with problems of costs and broadband connectivity. Evidence on the effectiveness of CALL – as with the impact of CALL – is extremely varied and many research questions still need to be addressed and answered. Hubbard (2002) presents the results of a CALL research survey that was sent to 120 CALL professionals from around the world asking them to articulate a CALL research question they would like to see answered. Some of the questions have been answered but many more remain open.[88]Leakey (2011) offers an overview of current and past research in CALL and proposes a comprehensive model for evaluating the effectiveness of CALL platforms, programs and pedagogy.[89]
A crucial issue is the extent to which the computer is perceived as taking over the teacher's role. Warschauer (1996: p. 6) perceived the computer as playing an "intelligent" role, and claimed that a computer program "should ideally be able to understand a user's spoken input and evaluate it not just for correctness but also for appropriateness. It should be able to diagnose a student's problems with pronunciation, syntax, or usage and then intelligently decide among a range of options (e.g. repeating, paraphrasing, slowing down, correcting, or directing the student to background explanations)."[22]Jones C. (1986), on the other hand, rejected the idea of the computer being "some kind of inferior teacher-substitute" and proposed a methodology that focused more on what teachers could do with computer programs rather than what computer programs could do on their own: "in other words, treating the computer as they would any other classroom aid".[90]Warschauer's high expectations in 1996 have still not been fulfilled, and currently there is an increasing tendency for teachers to go down the route proposed by Jones, making use of a variety of new tools such ascorpora and concordancers, interactive whiteboards[3]and applications for online communication.[4]
Since the advent of the Web there has been an explosion in online learning, but to what extent it is effective is open to criticism. Felix (2003) takes a critical look at popular myths attached to online learning from three perspectives, namely administrators, teachers and students. She concludes: "That costs can be saved in this ambitious enterprise is clearly a myth, as are expectations of saving time or replacing staff with machines."[91]
As for the effectiveness of CALL in promoting the four skills, Felix (2008) claims that there is "enough data in CALL to suggest positive effects on spelling, reading and writing", but more research is needed in order to determine its effectiveness in other areas, especially speaking online. She claims that students' perceptions of CALL are positive, but she qualifies this claim by stating that the technologies need to be stable and well supported, drawing attention to concerns that technical problems may interfere with the learning process. She also points out that older students may not feel comfortable with computers and younger students may not possess the necessary meta-skills for coping effectively in the challenging new environments. Training in computer literacy for both students and teachers is essential, and time constraints may pose additional problems. In order to achieve meaningful results she recommends "time-series analysis in which the same group of students is involved in experimental and control treatment for a certain amount of time and then switched – more than once if possible".[92]
Types of technology training in CALL for language teaching professionals certainly vary. Within second language teacher education programs, namely pre-service course work, we can find "online courses along with face-to-face courses", computer technology incorporated into a more general second language education course, "technology workshops", and "a series of courses offered throughout the teacher education programs, and even courses specifically designed for a CALL certificate and a CALL graduate degree".[93]The Organization for Economic Cooperation and Development has identified four levels of courses with online components, namely "web-supplemented, web-dependent, mixed mode and fully online".[94]
There is a rapidly growing interest in resources about the use of technology to deliver CALL. Journals that have issues that "deal with how teacher education programs help prepare language teachers to use technology in their own classrooms" includeLanguage Learning and Technology(2002) andInnovations in Language Learning and Teaching(2009). The TESOL international professional association's publication of technology standards for TESOL includes a chapter on the preparation of teacher candidates in technology use, as well as on the upgrading of teacher educators to be able to provide such instruction. Both CALICO and EUROCALL have special interest groups for teacher education in CALL.[95]
The following professional associations are dedicated to the promulgation of research, development and practice relating to the use of new technologies in language learning and teaching. Most of them organise conferences and publish journals on CALL.[96]
Hong, K. H. (2010) CALL teacher education as an impetus for L2 teachers in integrating technology.ReCALL, 22 (1), 53–69.doi:10.1017/s095834400999019X
Murray, D. E. (2013)A Case for Online English Language Teacher Education. The International Research Foundation for English Language Education.http://www.tirfonline.org/wp-content/uploads/2013/04/TIRF_OLTE_One-PageSpread_2013.pdf
|
https://en.wikipedia.org/wiki/Foreign-language_reading_aid
|
Asecond language(L2) is a language spoken in addition to one'sfirst language(L1). A second language may be a neighbouring language, another language of the speaker's home country, or aforeign language.
A speaker's dominant language, which is the language a speaker uses most or is most comfortable with, is not necessarily the speaker's first language. For example, the Canadian census defines first language for its purposes as "What is the language that this personfirst learnedat homein childhoodandstill understands?",[1]recognizing that for some, the earliest language may be lost, a process known aslanguage attrition. This can happen when young children start school or move to a new language environment.
The distinction between acquiring and learning was made byStephen Krashen[2]as part of hismonitor theory. According to Krashen, theacquisitionof a language is a natural process, whereaslearninga language is a conscious one. In the former, the student needs to partake in natural communicative situations. In the latter, error correction is present, as is the study of grammatical rules isolated from natural language. Not all second-language educators agree with this distinction; however, the study of how a second language islearned/acquiredis referred to assecond-language acquisition(SLA).
Research in SLA "...focuses on the developing knowledge and use of a language by children and adults who already know at least one other language... [and] a knowledge of second-language acquisition may help educational policy makers set more realistic goals for programmes for both foreign language courses and the learning of the majority language by minority language children and adults."[3]
SLA has been influenced by both linguistic andpsychologicaltheories. One of the dominantlinguistictheories hypothesizes that adeviceormoduleof sorts in the brain contains innate knowledge. Many psychological theories, on the other hand, hypothesize thatcognitive mechanisms, responsible for much of human learning, process language.
Other dominant theories and points of research include 2nd language acquisition studies (which examine if L1 findings can be transferred to L2 learning), verbal behaviour (the view that constructed linguistic stimuli can create a desired speech response), morpheme studies, behaviourism, error analysis, stages and order of acquisition, structuralism (approach that looks at how the basic units of language relate to each other according to their common characteristics), 1st language acquisition studies, contrastive analysis (approach where languages are examined in terms of differences and similarities) and inter-language (which describes the L2 learner's language as a rule-governed, dynamic system).[4]
These theories have all influenced second-language teaching and pedagogy. There are many different methods of second-language teaching, many of which stem directly from a particular theory. Common methods are thegrammar-translation method, thedirect method, theaudio-lingual method(clearly influenced by audio-lingual research and the behaviourist approach), theSilent Way,suggestopedia,community language learning, thetotal physical response method, and thecommunicative approach(highly influenced by Krashen's theories).[5]Some of these approaches are more popular than others, and are viewed to be more effective. Most language teachers do not use one singular style, but will use a mix in their teaching. This provides a more balanced approach to teaching and helps students of a variety of learning styles succeed.
The defining difference between a first language (L1) and a second language (L2) is the age the person learned the language. For example, linguistEric Lennebergusedsecond languageto meana language consciously acquiredor used by its speaker after puberty. In most cases, people never achieve the same level of fluency and comprehension in their second languages as in their first language. These views are closely associated with thecritical period hypothesis.[6][7][8][9]
In acquiring an L2, Hyltenstam found that around the age of six or seven seemed to be a cut-off point forbilingualsto achieve native-like proficiency. After that age, L2 learners could getnear-native-like-nessbut their language would, while consisting of few actual errors, have enough errors to set them apart from the L1 group. The inability of some subjects to achieve native-like proficiency must be seen in relation to theage of onset(AO).[10]Later, Hyltenstam & Abrahamsson modified their age cut-offs to argue that after childhood, in general, it becomes more and more difficult to acquire native-like-ness, but that there is no cut-off point in particular.[11]
As more is learned about the brain, there is a hypothesis that the onset of puberty is when accents start to appear.[12][13]Before a child goes through puberty, the chemical processes in the brain are more geared towards language and social communication. After puberty, the ability to learn a language without an accent has been rerouted to function in another area of the brain, most likely in the frontal lobe area promoting cognitive functions, or in the neural system of hormones allocated to reproduction and sexual organ growth.
As far as the relationship between age and eventual attainment in SLA is concerned, Krashen, Long, and Scarcella say that people who encounter a foreign language at an early age begin natural exposure to second languages and obtain better proficiency than those who learn the second language as adults. However, when it comes to the relationship between age and the rate of SLA, "Adults proceed through early stages of syntactic and morphological development faster than children (where time and exposure are held constant)".[14]Also, "older children acquire faster than younger children do (again, in early stages of morphological and syntactic development where time and exposure are held constant)".[14]In other words, adults and older children are fast learners when it comes to the initial stage of foreign language education.
Gauthier and Genesee have done research which focuses mainly on the second language acquisition of internationally adopted children; the results show that children's early experience of one language can affect their ability to acquire a second language, and that such children usually learn their second language more slowly and less completely, even during the critical period.[15]
As for fluency, it is better to undertake foreign language education at an early age, but being exposed to a foreign language from an early age causes a "weak identification".[16]Such an issue leads to a "double sense of national belonging" that leaves one unsure of where they belong, because, according to Brian A. Jacob, multicultural education affects students' "relations, attitudes, and behaviors".[17]And as children learn more and more foreign languages, they start to adapt and become absorbed into the foreign culture, such that they "undertake to describe themselves in ways that engage with representations others have made".[18]Due to such factors, learning foreign languages at an early age may alter one's perspective of his or her native country.[6]
Acquiring a second language can be a lifelong learning process for many. Despite persistent efforts, most learners of a second language will never become fullynative-likein it, although with practice considerable fluency can be achieved.[19]However, children by around the age of 5 have more or less mastered their first language, with the exception ofvocabularyand a fewgrammaticalstructures, and the process is relatively fast given that language is a very complex skill. Moreover, if children start to learn a second language when they are seven years old or younger, they will also become fully fluent in their second language faster than adults who start to learn a second language later in life.[20]
In the first language, children do not respond to systematic correction. Furthermore, children who have limited input still acquire the first language, which is a significant difference between input and output. Children are exposed to a language environment of errors and lack of correction but they end up having the capacity to figure out the grammatical rules. Error correction does not seem to have a direct influence on learning a second language. Instruction may affect the rate of learning, but the stages remain the same. Adolescents and adults who know the rule are faster than those who do not.
In the learning of a second language the correction of errors remains a controversial topic with many differing schools of thought. Throughout the last century much advancement has been made in research on the correction of students' errors. In the 1950s and 1960s, the viewpoint of the day was that all errors must be corrected at all costs. Little thought went to students' feelings or self-esteem in regards to this constant correction.[21]
In the 1970s, Dulay and Burt's studies showed that learners acquire grammar forms and structures in a pre-determined, inalterable order, and that teaching or correcting styles would not change that.[21]
In 1977, Terrell's studies showed that there were more factors to be considered in the classroom than the cognitive processing of the students.[21]He contended that the affective side of students and their self-esteem were equally important to the teaching process.[21]
In the 1980s, the strict grammar and corrective approach of the 1950s became obsolete. Researchers asserted that correction was often unnecessary and that instead of furthering students' learning it was hindering them. The main concern at this time was relieving student stress and creating a warm environment for them. Stephen Krashen was a big proponent in this hands-off approach to error correction.[21]
The 1990s brought back the familiar idea that explicit grammar instruction and error correction was indeed useful for the SLA process. At this time, more research started to be undertaken to determine exactly which kinds of corrections are the most useful for students. In 1998, Lyster concluded that "recasts", the teacher repeating a student's incorrect utterance with the correct version, are not always the most useful because students do not notice the correction. His studies in 2002 showed that students learn better when teachers help students recognize and correct their own errors.[21]Mackey, Gas and McDonough had similar findings in 2000 and attributed the success of this method to the student's active participation in the corrective processes.[21]
According toNoam Chomsky, children will bridge the gap between input and output with their innate grammar, because the input (the utterances they hear) is so poor, yet all children end up having complete knowledge of grammar. Chomsky calls this thePoverty of Stimulus. Second language learners can do this by applying the rules they learn to sentence construction, for example. So learners in both their native and second language have knowledge that goes beyond what they have received, so that people can make correct utterances (phrases, sentences, questions, etc.) that they have never learned or heard before.
Bilingualismis an advantage in today's world, and being bilingual gives the opportunity to understand and communicate with people from different cultural backgrounds. However, a study done by Opitz and Degner in 2012 shows that sequential bilinguals (i.e. those who learn their L2 after their L1) often relate to emotions more strongly when they perceive them in their first language/native language/L1, but feel less emotional when perceiving them in their second language, even though they know the meaning of the words clearly.[22]The emotional distinction between L1 and L2 indicates that the "affective valence" of words is processed less immediately in L2 because of delayed vocabulary/lexical access in the second language.
Success in language learning can be measured in two ways: likelihood and quality. First language learnerswillbe successful in both measurements. It is inevitable that all people will learn a first language and with few exceptions, they will be fully successful. For second language learners, success is not guaranteed. For one, learners may become fossilized orstuckas it were with ungrammatical items. (Fossilizationoccurs when language errors become a permanent feature.)[23]The difference between learners may be significant. As noted elsewhere, L2 learners rarely achieve completenative-likecontrol of the second language.
For L2 pronunciation, there are two principles that have been put forth by Levis. The first, nativeness, refers to the speaker's ability to approximate the speech patterns of speakers of the second language; the second, understanding, refers to the speaker's ability to make themselves understood.[24]
Being successful in learning a second language is often found to be challenging for some individuals. Research has been done to look into why some students are more successful than others. Stern,[25]Rubin[26]and Reiss[27]are just a few of the researchers who have dedicated time to this subject. They have worked to determine what qualities make a "good language learner".[28]Some of their common findings are that a good language learner uses positive learning strategies, is an active learner who is constantly searching for meaning. Also a good language learner demonstrates a willingness to practice and use the language in real communication. He also monitors himself and his learning, has a strong drive to communicate, and has a good ear and good listening skills.[28]
Özgür and Griffiths designed an experiment in 2013 on the relationship between differentmotivationsand second language acquisition.[29]They looked at four types of motivation: intrinsic (the learner's inner feelings), extrinsic (reward from outside), integrative (attitude towards learning), and instrumental (practical needs). According to the test results, intrinsic motivation was the main motivation for these students, who were learning English as their second language. However, the students also reported themselves as being strongly instrumentally motivated. In conclusion, learning a second language and being successful in it depend on the individual.
Inpedagogyandsociolinguistics, a distinction is made between a second language and a foreign language: the latter is learned for use in an area where the language originates from another country and is not spoken in the speakers' native country. In other words, the term foreign language is used from the perspective of countries, while second language is used from the perspective of individuals.[31]
For example,Englishin countries such asIndia,Pakistan,Sri Lanka,Bangladesh, thePhilippines, theNordic countriesand theNetherlandsis considered a second language by many of its speakers, because they learn it young and use it regularly; indeed in parts ofSouth Asiait is theofficial languageof the courts, government and business. The same can be said forFrenchinAlgeria,MoroccoandTunisia, although French is not an official language in any of them. In practice, French is widely used in a variety of contexts in these countries, and signs are normally printed in bothArabicand French. A similar phenomenon exists inpost-Soviet statessuch asUkraine,Uzbekistan,KyrgyzstanandKazakhstan, whereRussiancan be considered a second language, and there arelarge Russophone communities.
However, unlike inHong Kong, English is considered a foreign language inChinaowing to the lack of opportunities for use, such as historical links, media, conversation between people, and common vocabulary. Likewise, French would be considered a foreign language inRomaniaandMoldova, even though French and Romanian are bothRomance languages, Romania has historical links to France, and Romania, Moldova and France are all members ofla Francophonie.
George H. J. Weber, a Swiss businessman and independent scholar, founder of the Andaman Association and creator of the encyclopedic andaman.org Web site, made a report in December 1997 about the number of secondary speakers of the world's leading languages.[32][33]Weber used theFischer Weltalmanachof 1986 as his primary and only source[34]for the L2-speakers data, in preparing the data in the following table. These numbers are here compared with those referred to by Ethnologue, a popular source in the linguistics field. See below Table 1.
Collecting the number of second language speakers of every language is extremely difficult and even the best estimates contain guess work. The data below are fromethnologue.comas of June 2013.[35][not specific enough to verify]
|
https://en.wikipedia.org/wiki/Second_language
|
Theagent-based modeling(ABM) community has developed several practical agent based modeling toolkits that enable individuals to develop agent-based applications. More and more such toolkits are coming into existence, and each toolkit has a variety of characteristics. Several individuals have made attempts to compare toolkits to each other (see references). Below is a chart intended to capture many of the features that are important to ABM toolkit users.
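Although the feature chart itself is not reproduced here, most of the toolkits being compared provide the same basic skeleton: agents with state and a step rule, a scheduler that activates them each tick, and some form of data collection. The following Python sketch is illustrative only and is not tied to any particular toolkit; it simply shows that skeleton in its barest form.

# Bare-bones agent-based model skeleton: agents with state and a step rule,
# a scheduler that activates them in random order, and simple data collection.
# Illustrative only; real toolkits add grids, networks, visualisation, etc.
import random

class Agent:
    def __init__(self, wealth=1):
        self.wealth = wealth

    def step(self, population):
        # Give one unit of wealth to another randomly chosen agent.
        if self.wealth > 0:
            recipient = random.choice([a for a in population if a is not self])
            recipient.wealth += 1
            self.wealth -= 1

class Model:
    def __init__(self, n_agents=100):
        self.agents = [Agent() for _ in range(n_agents)]

    def step(self):
        for agent in random.sample(self.agents, len(self.agents)):
            agent.step(self.agents)

model = Model()
for tick in range(200):
    model.step()

wealth = sorted(agent.wealth for agent in model.agents)
print("poorest:", wealth[0], "richest:", wealth[-1])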
|
https://en.wikipedia.org/wiki/Comparison_of_agent-based_modeling_software
|
Agent-based computational economics(ACE) is the area ofcomputational economicsthat studies economic processes, including wholeeconomies, asdynamic systemsof interactingagents. As such, it falls in theparadigmofcomplex adaptive systems.[1]In correspondingagent-based models, the "agents" are "computational objects modeled as interacting according to rules" over space and time, not real people. The rules are formulated to model behavior and social interactions based on incentives and information.[2]Such rules could also be the result of optimization, realized through use of AI methods (such asQ-learningand other reinforcement learning techniques).[3]
As part ofnon-equilibrium economics,[4]the theoretical assumption ofmathematical optimizationby agents inequilibriumis replaced by the less restrictive postulate of agents withbounded rationalityadaptingto market forces.[5]ACE models applynumerical methodsof analysis tocomputer-based simulationsof complex dynamic problems for which more conventional methods, such as theorem formulation, may not find ready use.[6]Starting from initial conditions specified by the modeler, the computational economy evolves over time as its constituent agents repeatedly interact with each other, including learning from interactions. In these respects, ACE has been characterized as a bottom-up culture-dish approach to the study ofeconomic systems.[7]
ACE has a similarity to, and overlap with,game theoryas an agent-based method for modeling social interactions.[8]But practitioners have also noted differences from standard methods, for example in ACE events modeled being driven solely by initial conditions, whether or not equilibria exist or are computationally tractable, and in the modeling facilitation of agent autonomy and learning.[9]
The method has benefited from continuing improvements in modeling techniques ofcomputer scienceand increased computer capabilities. The ultimate scientific objective of the method is to "test theoretical findings against real-world data in ways that permit empirically supported theories to cumulate over time, with each researcher’s work building appropriately on the work that has gone before."[10]The subject has been applied to research areas likeasset pricing,[11]energy systems,[12]competitionandcollaboration,[13]transaction costs,[14]market structureandindustrial organizationand dynamics,[15]welfare economics,[16]andmechanism design,[17]information and uncertainty,[18]macroeconomics,[19]andMarxist economics.[20][21]
The "agents" in ACE models can represent individuals (e.g. people), social groupings (e.g. firms), biological entities (e.g. growing crops), and/or physical systems (e.g. transport systems). The ACE modeler provides the initial configuration of a computational economic system comprising multiple interacting agents. The modeler then steps back to observe the development of the system over time without further intervention. In particular, system events should be driven by agent interactions without external imposition of equilibrium conditions.[22]Issues include those common toexperimental economicsin general[23]and development of a common framework for empirical validation[24]and resolving open questions in agent-based modeling.[25]
ACE is an officially designated special interest group (SIG) of the Society for Computational Economics.[26]Researchers at theSanta Fe Institutehave contributed to the development of ACE.
One area where ACE methodology has frequently been applied is asset pricing.W. Brian Arthur, Eric Baum,William Brock, Cars Hommes, and Blake LeBaron, among others, have developed computational models in which many agents choose from a set of possible forecasting strategies in order to predict stock prices, which affects their asset demands and thus affects stock prices. These models assume that agents are more likely to choose forecasting strategies which have recently been successful. The success of any strategy will depend on market conditions and also on the set of strategies that are currently being used. These models frequently find that large booms and busts in asset prices may occur as agents switch across forecasting strategies.[11][27][28]More recently, Brock, Hommes, and Wagener (2009) have used a model of this type to argue that the introduction of new hedging instruments may destabilize the market,[29]and some papers have suggested that ACE might be a useful methodology for understanding the 2008financial crisis.[30][31][32]See also discussion underFinancial economics § Financial marketsand§ Departures from rationality.
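A heavily simplified sketch of this strategy-switching mechanism is given below; the two forecasting rules, the logit choice between them, and all parameter values are invented for illustration and do not reproduce any particular published model.

# Simplified agent-based asset-pricing sketch: traders switch between a
# fundamentalist and a trend-following forecast rule, favouring whichever rule
# has recently forecast the price well. All parameters are illustrative.
import math
import random

FUNDAMENTAL = 100.0
prices = [100.0, 101.0]
fitness = {"fundamentalist": 0.0, "trend": 0.0}

def forecast(rule, history):
    if rule == "fundamentalist":
        return history[-1] + 0.2 * (FUNDAMENTAL - history[-1])  # revert to value
    return history[-1] + 0.8 * (history[-1] - history[-2])      # extrapolate trend

for t in range(200):
    # Share of agents using each rule, from a logit choice over recent fitness.
    weight_f = math.exp(2.0 * fitness["fundamentalist"])
    weight_t = math.exp(2.0 * fitness["trend"])
    share_f = weight_f / (weight_f + weight_t)

    # The market price moves toward the average forecast, plus small noise.
    new_price = (share_f * forecast("fundamentalist", prices)
                 + (1 - share_f) * forecast("trend", prices)
                 + random.gauss(0, 0.5))

    # Each rule's fitness decays and is penalised by its squared forecast error.
    for rule in fitness:
        error = forecast(rule, prices) - new_price
        fitness[rule] = 0.9 * fitness[rule] - 0.1 * error * error

    prices.append(new_price)

print("final price:", round(prices[-1], 2))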
|
https://en.wikipedia.org/wiki/Agent-based_computational_economics
|
Anartificial brain(orartificial mind) issoftwareandhardwarewith cognitive abilities similar to those of the animal orhuman brain.[1]
Research investigating "artificial brains" andbrain emulationplays three important roles in science:
An example of the first objective is the project reported by Aston University in Birmingham, England[2]where researchers are using biological cells to create "neurospheres" (small clusters of neurons) in order to develop new treatments for diseases includingAlzheimer's,motor neuroneandParkinson's disease.
The second objective is a reply to arguments such asJohn Searle'sChinese roomargument,Hubert Dreyfus'scritique of AIorRoger Penrose's argument inThe Emperor's New Mind. These critics argued that there are aspects of human consciousness or expertise that can not be simulated by machines. One reply to their arguments is that the biological processes inside the brain can be simulated to any degree of accuracy. This reply was made as early as 1950, byAlan Turingin his classic paper "Computing Machinery and Intelligence".[note 1]
The third objective is generally calledartificial general intelligenceby researchers.[3]However,Ray Kurzweilprefers the term "strong AI". In his bookThe Singularity is Near, he focuses onwhole brain emulationusing conventional computing machines as an approach to implementing artificial brains, and claims (on grounds of computer power continuing an exponential growth trend) that this could be done by 2025.Henry Markram, director of theBlue Brainproject (which is attempting brain emulation), made a similar claim (for 2020) at the OxfordTED conferencein 2009.[1]
Although direct humanbrain emulationusingartificial neural networkson ahigh-performance computingengine is a commonly discussed approach,[4]there are other approaches. An alternative artificial brain implementation could be based onHolographic Neural Technology (HNeT)non linear phase coherence/decoherence principles. The analogy has been made to quantum processes through the core synaptic algorithm which has strong similarities to thequantum mechanical wave equation.
EvBrain[5]is a form ofevolutionary softwarethat can evolve "brainlike" neural networks, such as the network immediately behind theretina.
In November 2008, IBM received a US$4.9 million grant from thePentagonfor research into creating intelligent computers. TheBlue Brain projectis being conducted with the assistance ofIBMinLausanne.[6]The project is based on the premise that it is possible to artificially link theneurons"in the computer" by placing thirty million synapses in their proper three-dimensional position.
Some proponents of strong AI speculated in 2009 that computers in connection with Blue Brain and Soul Catcher may exceed human intellectual capacity by around 2015, and that it is likely that we will be able to download thehuman brainat some time around 2050.[7]
WhileBlue Brainis able to represent complex neural connections on the large scale, the project does not achieve the link between brain activity and behaviors executed by the brain. In 2012, projectSpaun (Semantic Pointer Architecture Unified Network)attempted to model multiple parts of the human brain through large-scale representations of neural connections that generate complex behaviors in addition to mapping.[8]
Spaun's design recreates elements of human brain anatomy. The model, consisting of approximately 2.5 million neurons, includes features of the visual and motor cortices, GABAergic and dopaminergic connections, theventral tegmental area(VTA), substantia nigra, and others. The design allows for several functions in response to eight tasks, using visual inputs of typed or handwritten characters and outputs carried out by a mechanical arm. Spaun's functions include copying a drawing, recognizing images, and counting.[8]
There are good reasons to believe that, regardless of implementation strategy, the predictions of realising artificial brains in the near future are optimistic.[citation needed]In particular, brains (including thehuman brain) andcognitionare not currently well understood, and the scale of computation required is unknown. Another near-term limitation is that all current approaches to brain simulation require orders of magnitude more power than a human brain. The human brain consumes about 20Wof power, whereas current supercomputers may use as much as 1 MW, i.e., roughly 50,000 times as much.[citation needed]
Some critics ofbrain simulation[9]believe that it is simpler to create general intelligent action directly without imitating nature. Some commentators[10]have used the analogy that early attempts to construct flying machines modeled them after birds, but that modern aircraft do not look like birds.
|
https://en.wikipedia.org/wiki/Artificial_brain
|
Artificial life(ALifeorA-Life) is a field of study wherein researchers examinesystemsrelated to naturallife, its processes, and its evolution, through the use ofsimulationswithcomputer models,robotics, andbiochemistry.[1]The discipline was named byChristopher Langton, an Americancomputer scientist, in 1986.[2]In 1987, Langton organized the first conference on the field, inLos Alamos, New Mexico.[3]There are three main kinds of alife,[4]named for their approaches:soft,[5]fromsoftware;hard,[6]fromhardware; andwet, from biochemistry. Artificial life researchers study traditionalbiologyby trying to recreate aspects of biological phenomena.[7][8]
Artificial life studies the fundamental processes ofliving systemsin artificial environments in order to gain a deeper understanding of the complex information processing that defines such systems. These topics are broad, but often includeevolutionary dynamics,emergent propertiesof collective systems,biomimicry, as well as related issues about thephilosophy of the nature of lifeand the use of lifelike properties in artistic works.[citation needed]
The modeling philosophy of artificial life strongly differs from traditional modeling by studying not only "life as we know it" but also "life as it could be".[9]
A traditional model of a biological system will focus on capturing its most important parameters. In contrast, an alife modeling approach will generally seek to decipher the most simple and general principles underlying life and implement them in a simulation. The simulation then offers the possibility to analyse new and different lifelike systems.
Vladimir Georgievich Red'ko proposed to generalize this distinction to the modeling of any process, leading to the more general distinction of "processes as we know them" and "processes as they could be".[10]
At present, the commonly accepteddefinition of lifedoes not consider any current alife simulations orsoftwareto be alive, and they do not constitute part of the evolutionary process of anyecosystem. However, different opinions about artificial life's potential have arisen:
Program-based simulations contain organisms with a "genome" language. This language is more often in the form of aTuring completecomputer program than actual biological DNA. Assembly derivatives are the most common languages used. An organism "lives" when its code is executed, and there are usually various methods allowingself-replication. Mutations are generally implemented as random changes to the code. Use ofcellular automatais common but not required. Another example could be anartificial intelligenceandmulti-agent system/program.
Individual modules are added to a creature. These modules modify the creature's behaviors and characteristics either directly, by hard coding into the simulation (leg type A increases speed and metabolism), or indirectly, through the emergent interactions between a creature's modules (leg type A moves up and down with a frequency of X, which interacts with other legs to create motion). Generally, these are simulators that emphasize user creation and accessibility over mutation and evolution.
Organisms are generally constructed with pre-defined and fixed behaviors that are controlled by various parameters that mutate. That is, each organism contains a collection of numbers or otherfiniteparameters. Each parameter controls one or several aspects of an organism in a well-defined way.
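A minimal sketch of such a parameter-based organism is shown below: each individual is just a small dictionary of numbers, offspring inherit those numbers with random mutation, and individuals whose parameters suit an invented environment leave more descendants. Every value in the example is illustrative.

# Parameter-based artificial life in miniature: organisms are sets of numeric
# parameters, reproduction copies them with mutation, and selection keeps the
# individuals whose parameters best fit an invented environment.
import random

def fitness(org):
    # Hypothetical environment: favours a speed near 3 and a low metabolism.
    return -abs(org["speed"] - 3.0) - 0.5 * org["metabolism"]

def mutate(org):
    child = dict(org)
    key = random.choice(list(child))
    child[key] = max(0.0, child[key] + random.gauss(0, 0.3))  # random mutation
    return child

population = [{"speed": random.uniform(0, 6), "metabolism": random.uniform(0, 3)}
              for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]                                # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = max(population, key=fitness)
print({name: round(value, 2) for name, value in best.items()})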
These simulations have creatures that learn and grow using neural nets or a close derivative. Emphasis is often, although not always, on learning rather than on natural selection.
Mathematical models of complex systems are of three types:black-box(phenomenological),white-box(mechanistic, based on thefirst principles) andgrey-box(mixtures of phenomenological and mechanistic models).[12][13]In black-box models, the individual-based (mechanistic) mechanisms of a complex dynamic system remain hidden.
Black-box models are completely nonmechanistic. They are phenomenological and ignore a composition and internal structure of a complex system. Due to the non-transparent nature of the model, interactions of subsystems cannot be investigated. In contrast, a white-box model of a complex dynamic system has ‘transparent walls’ and directly shows underlying mechanisms. All events at the micro-, meso- and macro-levels of a dynamic system are directly visible at all stages of a white-box model's evolution. In most cases, mathematical modelers use the heavy black-box mathematical methods, which cannot produce mechanistic models of complex dynamic systems. Grey-box models are intermediate and combine black-box and white-box approaches.
Creation of a white-box model of a complex system is associated with the problem of the necessity of an a priori basic knowledge of the modeling subject. Deterministic logicalcellular automataare a necessary but not sufficient condition of a white-box model. The second necessary prerequisite of a white-box model is the presence of the physicalontologyof the object under study. The white-box modeling represents an automatic hyper-logical inference from thefirst principlesbecause it is completely based on the deterministic logic and axiomatic theory of the subject. The purpose of the white-box modeling is to derive from the basic axioms a more detailed, more concrete mechanistic knowledge about the dynamics of the object under study. The necessity to formulate an intrinsicaxiomatic systemof the subject before creating its white-box model distinguishes the cellular automata models of white-box type from cellular automata models based on arbitrary logical rules. If cellular automata rules have not been formulated from the first principles of the subject, then such a model may have a weak relevance to the real problem.[13]
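For illustration, the sketch below runs an elementary deterministic cellular automaton (Wolfram's rule 90, chosen arbitrarily): every cell is updated from an explicit local rule, so all micro-level events remain directly visible at each step, which is the "transparent walls" property referred to above.

# Elementary deterministic cellular automaton (rule 90 on a ring of cells):
# each cell's next state follows explicitly from its neighbourhood, so the
# model's micro-level dynamics are fully visible at every step.
RULE = 90
WIDTH, STEPS = 63, 20

def next_row(row):
    new = []
    for i in range(len(row)):
        left, centre, right = row[i - 1], row[i], row[(i + 1) % len(row)]
        index = (left << 2) | (centre << 1) | right   # 3-bit neighbourhood
        new.append((RULE >> index) & 1)               # look up rule table bit
    return new

row = [0] * WIDTH
row[WIDTH // 2] = 1   # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = next_row(row)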
This is a list of artificial life anddigital organismsimulators:
Hardware-based artificial life mainly consists ofrobots, that is,automaticallyguidedmachinesable to do tasks on their own.
Biochemical-based life is studied in the field ofsynthetic biology. It involves research such as the creation ofsynthetic DNA. The term "wet" is an extension of the term "wetware". Efforts toward "wet" artificial life focus on engineering live minimal cells from living bacteriaMycoplasma laboratoriumand in building non-living biochemical cell-like systems from scratch.
In May 2019, researchers reported a new milestone in the creation of a newsynthetic(possiblyartificial) form ofviablelife, a variant of thebacteriaEscherichia coli, by reducing the natural number of 64codonsin the bacterialgenometo 59 codons instead, in order to encode 20amino acids.[18][19]
Artificial life has had a controversial history.John Maynard Smithcriticized certain artificial life work in 1994 as "fact-free science".[23]
|
https://en.wikipedia.org/wiki/Artificial_life
|
Government by algorithm[1](also known asalgorithmic regulation,[2]regulation by algorithms,algorithmic governance,[3][4]algocratic governance,algorithmic legal orderoralgocracy[5]) is an alternative form ofgovernmentorsocial orderingwhere the usage of computeralgorithmsis applied to regulations, law enforcement, and generally any aspect of everyday life such as transportation or land registration.[6][7][8][9][10]The term "government by algorithm" has appeared in academic literature as an alternative for "algorithmic governance" in 2013.[11]A related term, algorithmic regulation, is defined as setting the standard, monitoring and modifying behaviour by means of computational algorithms – automation ofjudiciaryis in its scope.[12]In the context of blockchain, it is also known asblockchain governance.[13]
Government by algorithm raises new challenges that are not captured in thee-governmentliterature and the practice of public administration.[14]Some sources equatecyberocracy, which is a hypotheticalform of governmentthat rules by the effective use of information,[15][16][17]with algorithmic governance, although algorithms are not the only means of processing information.[18][19]Nello Cristianiniand Teresa Scantamburlo argued that the combination of a human society and certain regulation algorithms (such as reputation-based scoring) forms asocial machine.[20]
In 1962, the director of the Institute for Information Transmission Problems of theRussian Academy of Sciencesin Moscow (later Kharkevich Institute),[21]Alexander Kharkevich, published an article in the journal "Communist" about a computer network for processing information and control of the economy.[22][23]In fact, he proposed to make a network like the modern Internet for the needs of algorithmic governance (ProjectOGAS). This created a serious concern among CIA analysts.[24]In particular,Arthur M. Schlesinger Jr.warned that"by 1970 the USSR may have a radically new production technology, involving total enterprises or complexes of industries, managed by closed-loop, feedback control employingself-teaching computers".[24]
Between 1971 and 1973, theChileangovernment carried outProject Cybersynduring thepresidency of Salvador Allende. This project was aimed at constructing a distributeddecision support systemto improve the management of the national economy.[25][2]Elements of the project were used in 1972 to successfully overcome the traffic collapse caused by aCIA-sponsored strike of forty thousand truck drivers.[26]
Also in the 1960s and 1970s,Herbert A. Simonchampionedexpert systemsas tools for rationalization and evaluation of administrative behavior.[27]The automation of rule-based processes was an ambition of tax agencies over many decades resulting in varying success.[28]Early work from this period includes Thorne McCarty's influential TAXMAN project[29]in the US and Ronald Stamper'sLEGOLproject[30]in the UK. In 1993, the computer scientistPaul Cockshottfrom theUniversity of Glasgowand the economist Allin Cottrell from theWake Forest Universitypublished the bookTowards a New Socialism, where they claim to demonstrate the possibility of a democraticallyplanned economybuilt on modern computer technology.[31]The Honourable JusticeMichael Kirbypublished a paper in 1998, where he expressed optimism that the then-available computer technologies such aslegal expert systemcould evolve to computer systems, which will strongly affect the practice of courts.[32]In 2006, attorneyLawrence Lessig, known for the slogan"Code is law", wrote:
[T]he invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible[33]
Since the 2000s, algorithms have been designed and used toautomatically analyze surveillance videos.[34]
In his 2006 bookVirtual Migration,A. Aneeshdeveloped the concept of algocracy, in which information technologies constrain human participation in public decision making.[35][36]Aneesh differentiated algocratic systems from bureaucratic systems (legal-rational regulation) as well as market-based systems (price-based regulation).[37]
In 2013, algorithmic regulation was coined byTim O'Reilly, founder and CEO of O'Reilly Media Inc.:
Sometimes the "rules" aren't really even rules. Gordon Bruce, the former CIO of the city of Honolulu, explained to me that when he entered government from the private sector and tried to make changes, he was told, "That's against the law." His reply was "OK. Show me the law." "Well, it isn't really a law. It's a regulation." "OK. Show me the regulation." "Well, it isn't really a regulation. It's a policy that was put in place by Mr. Somebody twenty years ago." "Great. We can change that!" [...] Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time. Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws. [...] It's time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.[38]
In 2017, Ukraine'sMinistry of Justiceran experimentalgovernment auctionsusingblockchaintechnology to ensure transparency and hinder corruption in governmental transactions.[39]"Government by Algorithm?" was the central theme introduced at Data for Policy 2017 conference held on 6–7 September 2017 in London.[40]
Asmart cityis an urban area where collected surveillance data is used to improve various operations. Increase in computational power allows more automated decision making and replacement of public agencies by algorithmic governance.[41]In particular, the combined use of artificial intelligence and blockchains forIoTmay lead to the creation ofsustainablesmart city ecosystems.[42]Intelligent street lightinginGlasgowis an example of successful government application of AI algorithms.[43]A study of smart city initiatives in the US shows that it requires public sector as a main organizer and coordinator, the private sector as a technology and infrastructure provider, and universities as expertise contributors.[44]
Thecryptocurrencymillionaire Jeffrey Berns proposed the operation oflocal governmentsinNevadaby tech firms in 2021.[45]Berns bought 67,000 acres (271 km2) in Nevada's ruralStorey County(population 4,104) for $170,000,000 (£121,000,000) in 2018 in order to develop a smart city with more than 36,000 residents that could generate an annual output of $4,600,000,000.[45]Cryptocurrency would be allowed for payments.[45]Blockchains, Inc. "Innovation Zone" was canceled in September 2021 after it failed to secure enough water[46]for the planned 36,000 residents, through water imports from a site located 100 miles away in the neighboringWashoe County.[47]A similar water pipeline proposed in 2007 was estimated to cost $100 million and would have taken about 10 years to develop.[47]With additional water rights purchased from Tahoe Reno Industrial General Improvement District, "Innovation Zone" would have acquired enough water for about 15,400 homes - meaning that it would have barely covered its planned 15,000 dwelling units, leaving nothing for the rest of the projected city and its 22 million square-feet of industrial development.[47]
InSaudi Arabia, the planners ofThe Lineassert that it will be monitored by AI to improve life by using data and predictive modeling.[48]
Tim O'Reilly suggested that data sources andreputation systemscombined in algorithmic regulation can outperform traditional regulations.[38]For instance, once taxi-drivers are rated by passengers, the quality of their services will improve automatically and "drivers who provide poor service are eliminated".[38]O'Reilly's suggestion is based on the control-theoretic concept of thefeedback loop: improvements and deteriorations of reputation enforce desired behavior.[20]The use of feedback loops for the management of social systems had already been suggested inmanagement cyberneticsbyStafford Beer.[50]
These connections are explored byNello Cristianiniand Teresa Scantamburlo, where the reputation-credit scoring system is modeled as an incentive given to the citizens and computed by asocial machine, so that rational agents would be motivated to increase their score by adapting their behaviour. Several ethical aspects of that technology are still being discussed.[20]
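A schematic sketch of such a reputation feedback loop is given below: each new rating nudges a provider's score by exponential smoothing, and providers whose score falls below a cut-off are removed from the market. The smoothing factor, the cut-off, and the sample data are invented for illustration and are not taken from any real platform.

# Schematic reputation feedback loop: ratings update scores by exponential
# smoothing, and providers falling below a cut-off are deactivated.
def update_score(current, new_rating, alpha=0.1):
    # Recent ratings count for more than old ones.
    return (1 - alpha) * current + alpha * new_rating

def regulate(providers, cutoff=3.5):
    # The feedback step: low-scoring providers are removed from the market.
    return {name: score for name, score in providers.items() if score >= cutoff}

drivers = {"A": 4.8, "B": 4.1, "C": 3.7}
incoming_ratings = [("C", 2.0), ("C", 1.5), ("A", 5.0), ("B", 4.0), ("C", 2.5)]

for name, rating in incoming_ratings:
    drivers[name] = update_score(drivers[name], rating)

drivers = regulate(drivers)
print(drivers)   # driver C's repeated low ratings push the score below the
                 # cut-off, so C is no longer listed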
China'sSocial Credit Systemwas said to be a mass surveillance effort with a centralized numerical score for each citizen given for their actions, though newer reports say that this is a widespread misconception.[51][52][53]
Smart contracts,cryptocurrencies, anddecentralized autonomous organizationsare mentioned as means to replace traditional ways of governance.[54][55][10]Cryptocurrencies are currencies which are enabled by algorithms without a governmentalcentral bank.[56]Central bank digital currencyoften employs similar technology, but is differentiated by the fact that it does use a central bank. It is soon to be employed by major unions and governments such as the European Union and China.Smart contractsare self-executablecontracts, whose objective is to reduce the need for trusted governmental intermediaries, arbitration and enforcement costs.[57][58]A decentralized autonomous organization is anorganizationrepresented by smart contracts that is transparent, controlled by shareholders and not influenced by a central government.[59][60][61]Smart contracts have been discussed for use in such applications as (temporary)employment contracts[62][63]and the automatic transfer of funds and property (i.e.inheritance, upon registration of adeath certificate).[64][65][66][67]Some countries such as Georgia and Sweden have already launched blockchain programs focusing on property (land titlesandreal estateownership).[39][68][69][70]Ukraine is also looking at other areas such asstate registers.[39]
According to a study byStanford University, 45% of the studied US federal agencies had experimented with AI and related machine learning (ML) tools by 2020.[1]US federal agencies counted the number ofartificial intelligenceapplications, which are listed below.[1]53% of these applications were produced by in-house experts.[1]Commercial providers of the remaining applications includePalantir Technologies.[71]
In 2012,NOPDstarted a collaboration with Palantir Technologies in the field ofpredictive policing.[72]Besides Palantir's Gotham software, other similar (numerical analysis software) used by police agencies (such as the NCRIC) includeSAS.[73]
In the fight against money laundering,FinCENemploys the FinCEN Artificial Intelligence System (FAIS) since 1995.[74][75]
National health administration entities and organisations such as AHIMA (American Health Information Management Association) holdmedical records. Medical records serve as the central repository for planning patient care and documenting communication among patient and health care provider and professionals contributing to the patient's care. In the EU, work is ongoing on aEuropean Health Data Spacewhich supports the use of health data.[76]
The US Department of Homeland Security has employed the software ATLAS, which runs on Amazon Cloud. It scanned more than 16.5 million records of naturalized Americans and flagged approximately 124,000 of them for manual analysis and review by USCIS officers regarding denaturalization.[77][78]They were flagged due to potential fraud, public safety and national security issues. Some of the scanned data came from the Terrorist Screening Database and the National Crime Information Center.
NarxCare is US software[79]which combines data from the prescription registries of various U.S. states[80][81]and uses machine learning to generate various three-digit "risk scores" for prescriptions of medications and an overall "Overdose Risk Score", collectively referred to as Narx Scores,[82]in a process that potentially includes EMS and criminal justice data[79]as well as court records.[83]
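The actual NarxCare model is proprietary and undisclosed, so the sketch below only illustrates the generic pattern such scoring systems follow: registry-derived features are combined by a statistical model into a bounded numeric score. All feature names, weights and the scoring formula are invented for illustration.

```python
# Illustrative only: the real NarxCare scoring model is proprietary and undisclosed.
# This sketch shows the generic pattern of turning registry features into a bounded score.
import math

def overdose_risk_score(features, weights, bias=-3.0):
    """Map prescription-registry features to a three-digit style score via a logistic model."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    probability = 1.0 / (1.0 + math.exp(-z))      # squash to (0, 1)
    return round(probability * 999)               # scale to a three-digit score

weights = {                       # hypothetical feature weights
    "num_prescribers": 0.4,
    "num_pharmacies": 0.5,
    "overlapping_opioid_days": 0.03,
    "has_benzo_opioid_overlap": 1.2,
}
patient = {
    "num_prescribers": 3,
    "num_pharmacies": 2,
    "overlapping_opioid_days": 14,
    "has_benzo_opioid_overlap": 1,
}
print(overdose_risk_score(patient, weights))
```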
In Estonia, artificial intelligence is used in itse-governmentto make it more automated and seamless. A virtual assistant will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements, unemployment). One example is the automated registering of babies when they are born.[84]Estonia'sX-Road systemwill also be rebuilt to include even more privacy control and accountability into the way the government uses citizen's data.[85]
In Costa Rica, the possible digitalization of public procurement activities (i.e. tenders for public works) has been investigated. The paper discussing this possibility mentions that the use of ICT in procurement has several benefits such as increasing transparency, facilitating digital access to public tenders, reducing direct interaction between procurement officials and companies at moments of high integrity risk, increasing outreach and competition, and easier detection of irregularities.[86]
Besides using e-tenders for regular public works (construction of buildings, roads), e-tenders can also be used for reforestation projects and other carbon sink restoration projects.[87]Carbon sink restoration projects may be part of the nationally determined contributions plans, in order to reach national Paris Agreement goals.
Government procurement audit software can also be used.[88][89]In some countries, audits are performed after subsidies have been received.
Some government agencies provide track and trace systems for services they offer. An example istrack and tracefor applications done by citizens (i.e. driving license procurement).[90]
Some government services useissue tracking systemsto keep track of ongoing issues.[91][92][93][94]
Judges' decisions in Australia are supported by the"Split Up" softwarein cases of determining the percentage of a split after adivorce.[95]COMPASsoftware is used in the USA to assess the risk ofrecidivismin courts.[96][97]According to the statement of Beijing Internet Court, China is the first country to create an internet court or cyber court.[98][99][100]The Chinese AI judge is avirtual recreationof an actual female judge. She "will help the court's judges complete repetitive basic work, including litigation reception, thus enabling professional practitioners to focus better on their trial work".[98]Also,Estoniaplans to employ artificial intelligence to decide small-claim cases of less than €7,000.[101]
Lawbotscan perform tasks that are typically done by paralegals or young associates at law firms. One such technology used by US law firms to assist in legal research is from ROSS Intelligence,[102]and others vary in sophistication and dependence on scriptedalgorithms.[103]Another legal technologychatbotapplication isDoNotPay.
Due to the COVID-19 pandemic in 2020, in-person final exams were impossible for thousands of students.[104]The public high school Westminster High employed algorithms to assign grades. The UK's Department for Education also employed a statistical model to assign final grades in A-levels, due to the pandemic.[105]
Besides use in grading, software systems like AI were used in preparation for college entrance exams.[106]
AI teaching assistants are being developed and used for education (e.g. Georgia Tech's Jill Watson)[107][108]and there is also an ongoing debate on the possibility of teachers being entirely replaced by AI systems (e.g. inhomeschooling).[109]
In 2018, an activist named Michihito Matsuda ran for mayor in theTama city area of Tokyoas a human proxy for anartificial intelligenceprogram.[110]While election posters and campaign material used the termrobot, and displayedstock imagesof a feminineandroid, the "AI mayor" was in fact amachine learning algorithmtrained using Tama city datasets.[111]The project was backed by high-profile executives Tetsuzo Matsumoto ofSoftbankand Norio Murakami ofGoogle.[112]Michihito Matsuda came third in the election, being defeated byHiroyuki Abe.[113]Organisers claimed that the 'AI mayor' was programmed to analyzecitizen petitionsput forward to thecity councilin a more 'fair and balanced' way than human politicians.[114]
In 2018, Cesar Hidalgo presented the idea of augmented democracy.[115]In an augmented democracy, legislation is done by digital twins of every single person.
In 2019, AI-powered messengerchatbotSAM participated in the discussions on social media connected to an electoral race in New Zealand.[116]The creator of SAM, Nick Gerritsen, believed SAM would be advanced enough to run as acandidateby late 2020, when New Zealand had its next general election.[117]
In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated forThe Synthetic Partyto run in the 2022Danishparliamentary election,[118]and was built by the artist collectiveComputer Lars.[119]Leader Lars differed from earlier virtual politicians by leading apolitical partyand by not pretending to be an objective candidate.[120]This chatbot engaged in critical discussions on politics with users from around the world.[121]
In 2023, in the Japanese town of Manazuru, a mayoral candidate called "AI Mayer" hoped to become the first AI-powered officeholder in Japan in the November 2023 election. The candidacy was said to be supported by a group led by Michihito Matsuda.[122]
In the2024 United Kingdom general election, a businessman named Steve Endacott ran for the constituency ofBrighton Pavilionas an AI avatar named "AI Steve",[123]saying that constituents could interact with AI Steve to shape policy. Endacott stated that he would only attend Parliament to vote based on policies which had garnered at least 50% support.[124]AI Steve placed last with 179 votes.[125]
In February 2020, China launched amobile appto deal with theCoronavirus outbreak[127]called "close-contact-detector".[128]Users are asked to enter their name and ID number. The app is able to detect "close contact" using surveillance data (i.e. using public transport records, including trains and flights)[128]and therefore a potential risk of infection. Every user can also check the status of three other users. To make this inquiry users scan a Quick Response (QR) code on their smartphones using apps likeAlipayorWeChat.[129]The close contact detector can be accessed via popular mobile apps including Alipay. If a potential risk is detected, the app not only recommends self-quarantine, it also alerts local health officials.[130]
Alipay also has theAlipay Health Codewhich is used to keep citizens safe. This system generates a QR code in one of three colors (green, yellow, or red) after users fill in a form on Alipay with personal details. A green code enables the holder to move around unrestricted. A yellow code requires the user to stay at home for seven days and red means a two-week quarantine. In some cities such as Hangzhou, it has become nearly impossible to get around without showing one's Alipay code.[131]
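The exact rules behind the Health Code have not been published; the following sketch merely illustrates the described behaviour of mapping self-reported details and contact data to one of the three colours, using entirely invented decision rules.

```python
# Hypothetical rule set: the actual Health Code logic is not publicly documented.
# The sketch just maps self-reported data to the three colours described above.

def health_code_colour(has_symptoms, visited_high_risk_area, close_contact_with_case):
    if has_symptoms or close_contact_with_case:
        return "red"      # two-week quarantine
    if visited_high_risk_area:
        return "yellow"   # stay at home for seven days
    return "green"        # free movement

print(health_code_colour(False, True, False))   # 'yellow'
```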
In Cannes, France, monitoring software has been used on footage shot by CCTV cameras to monitor compliance with local social distancing and mask-wearing rules during the COVID-19 pandemic. The system does not store identifying data, but rather alerts city authorities and police where breaches of the mask-wearing and distancing rules are spotted (allowing fines to be issued where needed). The algorithms used by the monitoring software can be incorporated into existing surveillance systems in public spaces (hospitals, stations, airports, shopping centres, etc.).[132]
Cellphone data is used to locate infected patients in South Korea, Taiwan, Singapore and other countries.[133][134]In March 2020, the Israeli government enabled security agencies to track the mobile phone data of people suspected of having coronavirus. The measure was taken to enforce quarantine and protect those who may come into contact with infected citizens.[135]Also in March 2020, Deutsche Telekom shared private cellphone data with the federal government agency, the Robert Koch Institute, in order to research and prevent the spread of the virus.[136]Russia deployed facial recognition technology to detect quarantine breakers.[137]Italian regional health commissioner Giulio Gallera said that "40% of people are continuing to move around anyway", as he had been informed by mobile phone operators.[138]In the US, Europe and the UK, Palantir Technologies has been engaged to provide COVID-19 tracking services.[139]
Tsunamis can be detected by tsunami warning systems, which can make use of AI.[140][141]Flooding can also be detected using AI systems.[142]Wildfires can be predicted using AI systems.[143][144]Wildfire detection is possible with AI systems (e.g. through satellite data, aerial imagery, and the GPS positions of personnel); such systems can help in the evacuation of people during wildfires,[145]in investigating how householders responded to wildfires,[146]and in spotting wildfires in real time using computer vision.[147][148]Earthquake detection systems are also improving alongside the development of AI technology, measuring seismic data and implementing complex algorithms to improve detection and prediction rates.[149][150][151]Earthquake monitoring, phase picking, and seismic signal detection have developed through AI algorithms for deep learning, analysis, and computational modelling.[152]Locust breeding areas can be approximated using machine learning, which could help to stop locust swarms in an early phase.[153]
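As one concrete illustration of the signal-processing side of such warning systems, the classic STA/LTA (short-term average over long-term average) trigger has long been used for seismic event detection and serves as a baseline for newer AI-based pickers. The sketch below uses made-up window sizes, a made-up threshold and synthetic data; it is not the algorithm of any specific system named above.

```python
# Simplified STA/LTA trigger for seismic event detection (illustrative parameters only).
# A detection is declared when the short-term average amplitude rises well above the
# long-term background average.

def sta_lta_trigger(samples, short_win=5, long_win=50, threshold=4.0):
    triggers = []
    for i in range(long_win, len(samples)):
        sta = sum(abs(x) for x in samples[i - short_win:i]) / short_win
        lta = sum(abs(x) for x in samples[i - long_win:i]) / long_win
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

import random
random.seed(0)
quiet = [random.gauss(0, 1) for _ in range(200)]
event = [random.gauss(0, 20) for _ in range(30)]          # sudden high-amplitude burst
print(sta_lta_trigger(quiet + event + quiet)[:3])          # indices where the trigger fires
```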
Algorithmic regulation is supposed to be a system of governance where more exact data, collected from citizens via their smart devices and computers, is used to more efficiently organize human life as a collective.[154][155]AsDeloitteestimated in 2017, automation of US government work could save 96.7 million federal hours annually, with a potential savings of $3.3 billion; at the high end, this rises to 1.2 billion hours and potential annual savings of $41.1 billion.[156]
There are potential risks associated with the use of algorithms in government. Those include:
According to the 2016 book Weapons of Math Destruction, algorithms and big data are suspected to increase inequality due to opacity, scale and damage.[159]
There is also a serious concern that gaming by the regulated parties might occur: once more transparency is brought into decision-making by algorithmic governance, regulated parties might try to manipulate the outcome in their own favor and even use adversarial machine learning.[1][20]According to Harari, the conflict between democracy and dictatorship is seen as a conflict of two different data-processing systems; AI and algorithms may swing the advantage toward the latter by processing enormous amounts of information centrally.[160]
In 2018, the Netherlands employed an algorithmic system, SyRI (Systeem Risico Indicatie), to detect citizens perceived as being at high risk of committing welfare fraud; it quietly flagged thousands of people to investigators.[161]This caused a public protest. The district court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).[162]
The contributors of the 2019 documentaryiHumanexpressed apprehension of "infinitely stable dictatorships" created by government AI.[163]
Due to public criticism, the Australian government announced the suspension ofRobodebt schemekey functions in 2019, and a review of all debts raised using the programme.[164]
In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm."[105]The protest was successful and the grades were withdrawn.[165]
In 2020, the US government software ATLAS, which runs on Amazon Cloud, sparked uproar from activists and Amazon's own employees.[166]
In 2021, Eticas Foundation launched a database of governmental algorithms calledObservatory of Algorithms with Social Impact(OASI).[167]
An initial approach towards transparency included theopen-sourcing of algorithms.[168]Software code can be looked into and improvements can be proposed throughsource-code-hosting facilities.
A 2019 poll conducted by IE University's Center for the Governance of Change in Spain found that 25% of citizens from selected European countries were somewhat or totally in favor of letting an artificial intelligence make important decisions about how their country is run.[169]
Researchers found some evidence that when citizens perceive their political leaders or security providers to be untrustworthy, disappointing, or immoral, they prefer to replace them by artificial agents, whom they consider to be more reliable.[170]The evidence is established by survey experiments on university students of all genders.
A 2021 poll byIE Universityindicates that 51% of Europeans are in favor of reducing the number of national parliamentarians and reallocating these seats to an algorithm. This proposal has garnered substantial support in Spain (66%), Italy (59%), and Estonia (56%). Conversely, the citizens of Germany, the Netherlands, the United Kingdom, and Sweden largely oppose the idea.[171]The survey results exhibit significant generational differences. Over 60% of Europeans aged 25-34 and 56% of those aged 34-44 support the measure, while a majority of respondents over the age of 55 are against it. International perspectives also vary: 75% of Chinese respondents support the proposal, whereas 60% of Americans are opposed.[171]
The 1970David Bowiesong "Saviour Machine" depicts an algocratic society run by the titular mechanism, which ended famine and war through "logic" but now threatens to cause an apocalypse due to its fear that its subjects have become excessively complacent.[172]
The novelsDaemon(2006) andFreedom™(2010) byDaniel Suarezdescribe a fictional scenario of global algorithmic regulation.[173]Matthew De Abaitua'sIf Thenimagines an algorithm supposedly based on "fairness" recreating a premodern rural economy.[174]
|
https://en.wikipedia.org/wiki/AI_mayor
|
In science, computing, and engineering, ablack boxis a system which can be viewed in terms of its inputs and outputs (ortransfer characteristics), without any knowledge of its internal workings.[1][2]Its implementation is "opaque" (black). The term can be used to refer to many inner workings, such as those of atransistor, anengine, analgorithm, thehuman brain, or an institution orgovernment.
To analyze anopen systemwith a typical "black box approach", only the behavior of the stimulus/response will be accounted for, to infer the (unknown)box. The usual representation of this "black box system" is adata flow diagramcentered in the box.
The opposite of a black box is a system where the inner components or logic are available for inspection, which is most commonly referred to as awhite box(sometimes also known as a "clear box" or a "glass box").
The modern meaning of the term "black box" seems to have entered the English language around 1945. In electroniccircuit theorythe process ofnetwork synthesisfromtransfer functions, which led to electronic circuits being regarded as "black boxes" characterized by their response to signals applied to theirports, can be traced toWilhelm Cauerwho published his ideas in their most developed form in 1941.[3]Although Cauer did not himself use the term, others who followed him certainly did describe the method as black-box analysis.[4]Vitold Belevitch[5]puts the concept of black-boxes even earlier, attributing the explicit use oftwo-port networksas black boxes toFranz Breisigin 1921 and argues that 2-terminal components were implicitly treated as black-boxes before that.
Incybernetics, a full treatment was given byRoss Ashbyin 1956.[6]A black box was described byNorbert Wienerin 1961 as an unknown system that was to be identified using the techniques ofsystem identification.[7]He saw the first step inself-organizationas being to be able to copy the output behavior of a black box. Many other engineers, scientists and epistemologists, such asMario Bunge,[8]used and perfected the black box theory in the 1960s.
Insystems theory, theblack boxis an abstraction representing a class of concreteopen systemwhich can be viewed solely in terms of itsstimuli inputsandoutput reactions:
The constitution and structure of the box are altogether irrelevant to the approach under consideration, which is purely external or phenomenological. In other words, only the behavior of the system will be accounted for.
The understanding of ablack boxis based on the "explanatory principle", thehypothesisof acausal relationbetween theinputand theoutput. This principle states thatinputandoutputare distinct, that the system has observable (and relatable) inputs and outputs and that the system is black to the observer (non-openable).[9]
An observer makes observations over time. All observations of inputs and outputs of ablack boxcan be written in a table, in which, at each of a sequence of times, the states of thebox'svarious parts, input and output, are recorded. Thus, using an example fromAshby, examining a box that has fallen from aflying saucermight lead to this protocol:[6]
Thus, every system, fundamentally, is investigated by the collection of a long protocol, drawn out in time, showing the sequence of input and output states. From this there follows the fundamental deduction that all knowledge obtainable from a Black Box (of given input and output) is such as can be obtained by re-coding the protocol (theobservation table); all that, and nothing more.[6]
If the observer also controls input, the investigation turns into anexperiment(illustration), and hypotheses aboutcause and effectcan be tested directly.
When the experimenter is also motivated to control the box, there is active feedback in the box/observer relation, promoting what incontrol theoryis called afeed forwardarchitecture.
Themodeling processis the construction of a predictivemathematical model, using existing historic data (observation table).
A developed black box model is a validated model when black-box testing methods[10]ensure that it is valid, based solely on observable elements.
With backtesting, out-of-time data is always used when testing the black box model: the data has to be recorded before it is pulled for black box inputs.
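A minimal sketch of the procedure just outlined, under the assumption of a simple noisy linear box: record an input-output protocol, fit a predictive model to the earlier observations, and backtest it on later, out-of-time observations. In a genuine investigation the box's internals would of course be unknown to the observer; the stand-in box here exists only to generate the protocol.

```python
# Sketch of black-box modelling: observe a protocol, fit a model, then backtest it on
# observations held out from fitting. The "box" is only a stand-in for an unknown system.
import random
random.seed(1)

def black_box(u):                     # internals assumed unknown to the observer
    return 3.0 * u + 2.0 + random.gauss(0, 0.1)

protocol = [(u, black_box(u)) for u in [random.uniform(0, 10) for _ in range(40)]]
fit_data, backtest_data = protocol[:30], protocol[30:]     # out-of-time split

# Least-squares fit of y ~ a*u + b from the observation table alone.
n = len(fit_data)
su = sum(u for u, _ in fit_data); sy = sum(y for _, y in fit_data)
suu = sum(u * u for u, _ in fit_data); suy = sum(u * y for u, y in fit_data)
a = (n * suy - su * sy) / (n * suu - su * su)
b = (sy - a * su) / n

error = sum(abs(a * u + b - y) for u, y in backtest_data) / len(backtest_data)
print(f"model: y = {a:.2f}*u + {b:.2f}, mean backtest error = {error:.3f}")
```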
Black box theoriesare those theories defined only in terms of their function.[11][12]The term can be applied in any field where some inquiry is made into the relations between aspects of the appearance of a system (exterior of the black box), with no attempt made to explain why those relations should exist (interior of the black box). In this context,Newton's theory of gravitationcan be described as a black box theory.[13]
Specifically, the inquiry is focused upon a system that has no immediately apparent characteristics and therefore has only factors for consideration held within itself hidden from immediate observation. The observer is assumed ignorant in the first instance as the majority of availabledatais held in an inner situation away fromfacileinvestigations. Theblack boxelement of the definition is shown as being characterised by a system where observable elements enter a perhaps imaginary box with a set of different outputs emerging which are also observable.[14]
Inhumanities disciplinessuch asphilosophy of mindandbehaviorism, one of the uses of black box theory is to describe and understandpsychologicalfactors in fields such as marketing when applied to an analysis ofconsumer behaviour.[15][16][17]
Black Box theoryis even wider in application than professional studies:
The child who tries to open a door has to manipulate the handle (the input) so as to produce the desired movement at the latch (the output); and he has to learn how to control the one by the other without being able to see the internal mechanism that links them. In our daily lives we are confronted at every turn with systems whose internal mechanisms are not fully open to inspection, and which must be treated by the methods appropriate to the Black Box.
(...) This simple rule proved very effective and is an illustration of how the Black Box principle in cybernetics can be used to control situations that, if gone into deeply, may seem very complex. A further example of the Black Box principle is the treatment of mental patients. The human brain is certainly a Black Box, and while a great deal of neurological research is going on to understand the mechanism of the brain, progress in treatment is also being made by observing patients' responses to stimuli.
|
https://en.wikipedia.org/wiki/Black_box
|
Ablackboard systemis anartificial intelligenceapproach based on theblackboard architectural model,[1][2][3][4]where a commonknowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.
The following scenario provides a simple metaphor that gives some insight into how a blackboard functions:
A group of specialists are seated in a room with a largeblackboard. They work as a team to brainstorm a solution to a problem, using the blackboard as the workplace for cooperatively developing the solution.
The session begins when the problem specifications are written onto the blackboard. The specialists all watch the blackboard, looking for an opportunity to apply their expertise to the developing solution. When someone writes something on the blackboard that allows another specialist to apply their expertise, the second specialist records their contribution on the blackboard, hopefully enabling other specialists to then apply their expertise. This process of adding contributions to the blackboard continues until the problem has been solved.
A blackboard-system application consists of three major components: the specialist knowledge sources, the blackboard itself (a shared repository of the problem, partial solutions and contributed information), and the control component, which decides which knowledge source to apply next.
A blackboard system is the central space in amulti-agent system. It's used for describing the world as a communication platform for agents. To realize a blackboard in a computer program, amachine readablenotation is needed in whichfactscan be stored. One attempt in doing so is aSQL database, another option is theLearnable Task Modeling Language (LTML). The syntax of the LTML planning language is similar toPDDL, but adds extra features like control structures andOWL-Smodels.[5][6]LTML was developed in 2007[7]as part of a much larger project called POIROT (Plan Order Induction by Reasoning from One Trial),[8]which is aLearning from demonstrationsframework forprocess mining. In POIROT,Plan tracesandhypothesesare stored in the LTML syntax for creatingsemantic web services.[9]
Here is a small example: A human user is executing aworkflowin a computer game. The user presses some buttons and interacts with thegame engine. While the user interacts with the game, a plan trace is created. That means the user's actions are stored in alogfile. The logfile gets transformed into a machine readable notation which is enriched by semanticattributes. The result is atextfilein the LTML syntax which is put on the blackboard.Agents(software programs in the blackboard system) are able to parse the LTML syntax.
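A minimal blackboard loop can be sketched in a few lines of Python. This is only an illustration of the general architecture, not the LTML/POIROT implementation: knowledge sources inspect the shared blackboard and contribute partial results when their preconditions match, while a simple control loop keeps cycling until no source can add anything new.

```python
# Minimal blackboard loop (illustrative; not the LTML/POIROT implementation).
# Knowledge sources contribute to a shared dict when their preconditions are met;
# a simple control loop keeps cycling until no source can add anything new.

def ks_parse_input(bb):
    if "raw_text" in bb and "tokens" not in bb:
        bb["tokens"] = bb["raw_text"].split()
        return True
    return False

def ks_count_tokens(bb):
    if "tokens" in bb and "token_count" not in bb:
        bb["token_count"] = len(bb["tokens"])
        return True
    return False

def ks_summarise(bb):
    if "token_count" in bb and "summary" not in bb:
        bb["summary"] = f"input has {bb['token_count']} tokens"
        return True
    return False

def run_blackboard(blackboard, knowledge_sources):
    progress = True
    while progress:                       # control component: keep firing sources
        progress = any(ks(blackboard) for ks in knowledge_sources)
    return blackboard

bb = {"raw_text": "the problem specification written onto the blackboard"}
print(run_blackboard(bb, [ks_summarise, ks_count_tokens, ks_parse_input])["summary"])
```

Note that the knowledge sources are deliberately passed in "wrong" order; the solution still emerges opportunistically, because each source only fires when the blackboard state enables it.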
We start by discussing two well known early blackboard systems, BB1 and GBB, below and then discuss more recent implementations and applications.
The BB1 blackboard architecture[10]was originally inspired by studies of how humans plan to perform multiple tasks in a trip, used task-planning as a simplified example of tactical planning for theOffice of Naval Research.[11]Hayes-Roth & Hayes-Roth found that human planning was more closely modeled as an opportunistic process, in contrast to the primarily top-down planners used at the time:
While not incompatible with successive-refinement models, our view of planning is somewhat different. We share the assumption that planning processes operate in a two-dimensional planning space defined on time and abstraction dimensions. However, we assume that people's planning activity is largely opportunistic. That is, at each point in the process, the planner's current decisions and observations suggest various opportunities for plan development. The planner's subsequent decisions follow up on selected opportunities. Sometimes, these decision-sequences follow an orderly path and produce a neat top-down expansion as described above. However, some decisions and observations might also suggest less orderly opportunities for plan development.[12]
A key innovation of BB1 was that it applied this opportunistic planning model to its own control, using the same blackboard model of incremental, opportunistic, problem-solving that was applied to solve domain problems. Meta-level reasoning with control knowledge sources could then monitor whether planning and problem-solving were proceeding as expected or stalled. If stalled, BB1 could switch from one strategy to another as conditions – such as the goals being considered or the time remaining – changed. BB1 was applied in multiple domains: construction site planning,[13]inferring 3-D protein structures from X-ray crystallography,[14]intelligent tutoring systems,[15]and real-time patient monitoring.[16]
BB1 also allowed domain-general language frameworks to be designed for wide classes of problems. For example, the ACCORD[17]language framework defined a particular approach to solving configuration problems. The problem-solving approach was to incrementally assemble a solution by adding objects and constraints, one at a time. Actions in the ACCORD language framework appear as short English-like commands or sentences for specifying preferred actions, events to trigger KSes, preconditions to run a KS action, and obviation conditions to discard a KS action that is no longer relevant.
GBB[18]focused on efficiency, in contrast to BB1, which focused more on sophisticated reasoning and opportunistic planning. GBB improves efficiency by allowing blackboards to be multi-dimensional, where dimensions can be either ordered or not, and then by increasing the efficiency of pattern matching. GBB1,[19]one of GBB's control shells implements BB1's style of control while adding efficiency improvements.
Other well-known early academic blackboard systems are the Hearsay II speech recognition system and Douglas Hofstadter's Copycat and Numbo projects.
Some more recent examples of deployed real-world applications include:
Blackboard systems are used routinely in many militaryC4ISTARsystems for detecting and tracking objects. Another example of current use is inGame AI, where they are considered a standard AI tool to help with adding AI to video games.[22][23]
Blackboard-like systems have been constructed within modern Bayesian machine learning settings, using agents to add and remove Bayesian network nodes. In these 'Bayesian Blackboard' systems, the heuristics can acquire more rigorous probabilistic meanings as proposals and acceptances in Metropolis-Hastings sampling through the space of possible structures.[24][25][26]Conversely, using these mappings, existing Metropolis-Hastings samplers over structural spaces may now be viewed as forms of blackboard systems, even when not named as such by the authors. Such samplers are commonly found in musical transcription algorithms, for example.[27]
Blackboard systems have also been used to build large-scale intelligent systems for the annotation of media content, automating parts of traditional social science research. In this domain, the problem of integrating various AI algorithms into a single intelligent system arises spontaneously, with blackboards providing a way for a collection of distributed, modularnatural language processingalgorithms to each annotate the data in a central space, without needing to coordinate their behavior.[28]
|
https://en.wikipedia.org/wiki/Blackboard_system
|
Acomplex systemis asystemcomposed of many components which may interact with each other.[1]Examples of complex systems are Earth's globalclimate,organisms, thehuman brain, infrastructure such as power grid, transportation or communication systems, complexsoftwareand electronic systems, social and economic organizations (likecities), anecosystem, a livingcell, and, ultimately, for some authors, the entireuniverse.[2][3][4]
The behavior of a complex system is intrinsically difficult to model due to the dependencies, competitions, relationships, and other types of interactions between their parts or between a given system and its environment.[5]Systems that are "complex" have distinct properties that arise from these relationships, such asnonlinearity,emergence,spontaneous order,adaptation, andfeedback loops, among others.[6]Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of their independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and links represent their interactions.
The termcomplex systemsoften refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment.[7]The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm toreductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them.
As an interdisciplinary domain, complex systems draw contributions from many different fields, such as the study ofself-organizationand critical phenomena from physics, ofspontaneous orderfrom the social sciences,chaosfrom mathematics,adaptationfrom biology, and many others.Complex systemsis therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, includingstatistical physics,information theory,nonlinear dynamics,anthropology,computer science,meteorology,sociology,economics,psychology, andbiology.
Complex systems can be:
Complex adaptive systemsare special cases of complex systems that areadaptivein that they have the capacity to change and learn from experience.[11]Examples of complex adaptive systems include the internationaltrademarkets, social insect andantcolonies, thebiosphereand theecosystem, thebrainand theimmune system, thecelland the developingembryo, cities,manufacturing businessesand any human social group-based endeavor in a cultural andsocial systemsuch aspolitical partiesorcommunities.[12]
A system is decomposable if the parts of the system (subsystems) are independent from each other; for example, the model of a perfect gas considers the relations among molecules negligible.[13]
In a nearly decomposable system, the interactions between subsystems are weak but not negligible; this is often the case in social systems.[13]Conceptually, a system is nearly decomposable if the variables composing it can be separated into classes and subclasses, if these variables are independent for many functions but affect each other, and if the whole system is greater than the parts.[14]
Complex systems may have the following features:[15]
In 1948, Dr. Warren Weaver published an essay on "Science and Complexity",[31]exploring the diversity of problem types by contrasting problems of simplicity, disorganized complexity, and organized complexity. Weaver described these as "problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole."
While the explicit study of complex systems dates at least to the 1970s,[32]the first research institute focused on complex systems, theSanta Fe Institute, was founded in 1984.[33][34]Early Santa Fe Institute participants included physics Nobel laureatesMurray Gell-MannandPhilip Anderson, economics Nobel laureateKenneth Arrow, and Manhattan Project scientistsGeorge CowanandHerb Anderson.[35]Today, there are over 50 institutes and research centers focusing on complex systems.[citation needed]
Since the late 1990s, the interest of mathematical physicists in researching economic phenomena has been on the rise. The proliferation of cross-disciplinary research with the application of solutions originated from the physics epistemology has entailed a gradual paradigm shift in the theoretical articulations and methodological approaches in economics, primarily in financial economics. The development has resulted in the emergence of a new branch of discipline, namely "econophysics", which is broadly defined as a cross-discipline that applies statistical physics methodologies which are mostly based on the complex systems theory and the chaos theory for economics analysis.[36]
The 2021Nobel Prize in Physicswas awarded toSyukuro Manabe,Klaus Hasselmann, andGiorgio Parisifor their work to understand complex systems. Their work was used to create more accurate computer models of the effect of global warming on the Earth's climate.[37]
The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions.
Jane Jacobs described cities as being a problem in organized complexity in 1961, citing Dr. Weaver's 1948 essay.[38]As an example, she explains how an abundance of factors interplay into how various urban spaces lead to a diversity of interactions, and how changing those factors can change how the space is used, and how well the space supports the functions of the city. She further illustrates how cities have been severely damaged when approached as a problem in simplicity by replacing organized complexity with simple and predictable spaces, such as Le Corbusier's "Radiant City" and Ebenezer Howard's "Garden City". Since then, others have written at length on the complexity of cities.[39]
Over the last decades, within the emerging field ofcomplexity economics, new predictive tools have been developed to explain economic growth. Such is the case with the models built by theSanta Fe Institutein 1989 and the more recenteconomic complexity index(ECI), introduced by theMITphysicistCesar A. Hidalgoand theHarvardeconomistRicardo Hausmann.
Recurrence quantification analysis has been employed to detect the characteristics of business cycles and economic development. To this end, Orlando et al.[40]developed the so-called recurrence quantification correlation index (RQCI) to test correlations of RQA on a sample signal and then investigated the application to business time series. The said index has been proven to detect hidden changes in time series. Further, Orlando et al.,[41]over an extensive dataset, showed that recurrence quantification analysis may help in anticipating transitions from laminar (i.e. regular) to turbulent (i.e. chaotic) phases, such as USA GDP in 1949, 1953, etc. Last but not least, it has been demonstrated that recurrence quantification analysis can detect differences between macroeconomic variables and highlight hidden features of economic dynamics.
Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics".[42]
Healthcare systems are prime examples of complex systems, characterized by interactions among diverse stakeholders, such as patients, providers, policymakers, and researchers, across various sectors like health, government, community, and education. These systems demonstrate properties like non-linearity, emergence, adaptation, and feedback loops.[43]Complexity science in healthcare framesknowledge translationas a dynamic and interconnected network of processes—problem identification, knowledge creation, synthesis, implementation, and evaluation—rather than a linear or cyclical sequence. Such approaches emphasize the importance of understanding and leveraging the interactions within and between these processes and stakeholders to optimize the creation and movement of knowledge. By acknowledging the complex, adaptive nature of healthcare systems,complexity scienceadvocates for continuous stakeholder engagement,transdisciplinarycollaboration, and flexible strategies to effectively translate research into practice.[43]
Complexity science has been applied to living organisms, and in particular to biological systems. Within the emerging field offractal physiology, bodily signals, such as heart rate or brain activity, are characterized usingentropyor fractal indices. The goal is often to assess the state and the health of the underlying system, and diagnose potential disorders and illnesses.[citation needed]
Complex systems theory is related tochaos theory, which in turn has its origins more than a century ago in the work of the French mathematicianHenri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order.[44]Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly accurate predictions of the system, though in practice this is impossible to do with arbitrary accuracy.
The emergence of complex systems theory shows a domain between deterministic order and randomness which is complex.[45]This is referred to as the "edge of chaos".[46]
When one analyzes complex systems, sensitivity to initial conditions, for example, is not an issue as important as it is within chaos theory, in which it prevails. As stated by Colander,[47]the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions.[45]For recent examples in economics and business see Stoop et al.[48]who discussedAndroid's market position, Orlando[49]who explained the corporate dynamics in terms of mutual synchronization and chaos regularization of bursts in a group of chaotically bursting cells and Orlando et al.[50]who modelled financial data (Financial Stress Index, swap and equity, emerging and developed, corporate and government, short and long maturity) with a low-dimensional deterministic model.
Therefore, the main difference between chaotic systems and complex systems is their history.[51]Chaotic systems do not rely on their history as complex ones do. Chaotic behavior pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'.[clarification needed]On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicistMurray Gell-Manncalled "an accumulation of frozen accidents".[52]In a sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations.
A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions.[53][54]For example, theInternetcan be represented as a network composed of nodes (computers) and links (direct connections between computers). Other examples of complex networks include social networks, financial institution interdependencies,[55]airline networks,[56]and biological networks.
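A small sketch of this network representation, using plain adjacency lists and invented node names: components become nodes, interactions become links, and simple global quantities such as node degree can then be read directly off the structure.

```python
# Representing a complex system as a network: nodes are components, links are interactions.
# Plain adjacency lists are used here; libraries such as networkx offer the same idea at scale.
from collections import defaultdict

edges = [("router_a", "router_b"), ("router_b", "router_c"),
         ("router_a", "router_c"), ("router_c", "server_x")]

adjacency = defaultdict(set)
for u, v in edges:                 # undirected interactions
    adjacency[u].add(v)
    adjacency[v].add(u)

degree = {node: len(neigh) for node, neigh in adjacency.items()}
print(degree)                      # hubs show up as high-degree nodes
```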
|
https://en.wikipedia.org/wiki/Complex_systems
|
Adiscrete-event simulation(DES) models the operation of asystemas a (discrete)sequence of eventsin time. Each event occurs at a particular instant in time and marks a change ofstatein the system.[1]Between consecutive events, no change in the system is assumed to occur; thus the simulation time can directly jump to the occurrence time of the next event, which is callednext-event time progression.
In addition to next-event time progression, there is also an alternative approach, calledincremental time progression, where time is broken up into small time slices and the system state is updated according to the set of events/activities happening in the time slice.[2]Because not every time slice has to be simulated, a next-event time simulation can typically run faster than a corresponding incremental time simulation.
Both forms of DES contrast withcontinuous simulationin which the system state is changed continuously over time on the basis of a set ofdifferential equationsdefining the rates of change for state variables.
In the past, these three types of simulation have also been referred to, respectively, as: event scheduling simulation, activity scanning simulation, and process interaction simulation. It can also be noted that there are similarities between the implementation of the event queue in event scheduling, and thescheduling queueused in operating systems.
A common exercise in learning how to build discrete-event simulations is to model aqueueing system, such as customers arriving at a bank teller to be served by a clerk. In this example, the system objects areCustomerandTeller, while the system events areCustomer-Arrival,Service-StartandService-End. Each of these events comes with its own dynamics defined by the following event routines:
Therandom variablesthat need to be characterized to model this systemstochasticallyare theinterarrival-timefor recurrentCustomer-Arrivalevents and theservice-timefor the delays ofService-Endevents.
A system state is a set of variables that captures the salient properties of the system to be studied. The state trajectory over time S(t) can be mathematically represented by astep functionwhose value can change whenever an event occurs.
The simulation must keep track of the current simulation time, in whatever measurement units are suitable for the system being modeled. In discrete-event simulations, as opposed to continuous simulations, time 'hops' because events are instantaneous – the clock skips to the next event start time as the simulation proceeds.
The simulation maintains at least one list of simulation events. This is sometimes called thepending event setbecause it lists events that are pending as a result of previously simulated event but have yet to be simulated themselves. An event is described by the time at which it occurs and a type, indicating the code that will be used to simulate that event. It is common for the event code to be parametrized, in which case, the event description also contains parameters to the event code.[citation needed]The event list is also referred to as thefuture event list(FEL) orfuture event set(FES).[3][4][5][6]
When events are instantaneous, activities that extend over time are modeled as sequences of events. Some simulation frameworks allow the time of an event to be specified as an interval, giving the start time and the end time of each event.[citation needed]
Single-threadedsimulation engines based on instantaneous events have just one current event. In contrast,multi-threadedsimulation engines and simulation engines supporting an interval-based event model may have multiple current events. In both cases, there are significant problems with synchronization between current events.[citation needed]
The pending event set is typically organized as apriority queue,sortedby event time.[7]That is, regardless of the order in which events are added to the event set, they are removed in strictly chronological order. Various priority queue implementations have been studied in the context of discrete event simulation;[8]alternatives studied have includedsplay trees,skip lists,calendar queues,[9]and ladder queues.[10][11]Onmassively-parallel machines, such asmulti-coreormany-coreCPUs, the pending event set can be implemented by relying onnon-blocking algorithms, in order to reduce the cost of synchronization among the concurrent threads.[12][13]
Typically, events are scheduled dynamically as the simulation proceeds. For example, in the bank example noted above, the event CUSTOMER-ARRIVAL at time t would, if the CUSTOMER_QUEUE was empty and TELLER was idle, include the creation of the subsequent event CUSTOMER-DEPARTURE to occur at time t+s, where s is a number generated from the SERVICE-TIME distribution.[citation needed]
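The bank-teller example can be sketched compactly with next-event time progression: a priority queue holds the pending event set, and each Customer-Arrival event schedules the corresponding Service-End event. In this simplified version each arrival immediately computes its service start; a fuller model would keep an explicit customer queue, as described above. The exponential distributions and their rates below are illustrative choices, not prescribed by the model.

```python
# Next-event time progression for the single-teller queue (illustrative rates).
# The pending event set is a priority queue ordered by event time.
import heapq, random
random.seed(2)

def simulate(num_customers, mean_interarrival=1.0, mean_service=0.8):
    events = []                                   # pending event set: (time, kind, id)
    t = 0.0
    for cid in range(num_customers):              # pre-schedule all Customer-Arrival events
        t += random.expovariate(1.0 / mean_interarrival)
        heapq.heappush(events, (t, "arrival", cid))

    teller_free_at, waits = 0.0, []
    while events:                                 # jump from event to event
        time, kind, cid = heapq.heappop(events)
        if kind == "arrival":
            start = max(time, teller_free_at)     # wait if the teller is still busy
            waits.append(start - time)
            teller_free_at = start + random.expovariate(1.0 / mean_service)
            heapq.heappush(events, (teller_free_at, "departure", cid))
        # Service-End ("departure") events need no extra bookkeeping in this minimal model
    return sum(waits) / len(waits)

print(f"mean waiting time: {simulate(1000):.2f}")
```

Statistics such as the mean waiting time printed here would, in practice, be averaged over several replications with different random seeds, as discussed below.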
The simulation needs to generaterandom variablesof various kinds, depending on the system model. This is accomplished by one or morePseudorandom number generators. The use of pseudo-random numbers as opposed to true random numbers is a benefit should a simulation need a rerun with exactly the same behavior.
One of the problems with the random number distributions used in discrete-event simulation is that the steady-state distributions of event times may not be known in advance. As a result, the initial set of events placed into the pending event set will not have arrival times representative of the steady-state distribution. This problem is typically solved by bootstrapping the simulation model. Only a limited effort is made to assign realistic times to the initial set of pending events. These events, however, schedule additional events, and with time, the distribution of event times approaches its steady state. This is calledbootstrappingthe simulation model. In gathering statistics from the running model, it is important to either disregard events that occur before the steady state is reached or to run the simulation for long enough that the bootstrapping behavior is overwhelmed by steady-state behavior. (This use of the termbootstrappingcan be contrasted with its use in bothstatisticsandcomputing).
The simulation typically keeps track of the system'sstatistics, which quantify the aspects of interest. In the bank example, it is of interest to track the mean waiting times. In a simulation model, performance metrics are not analytically derived fromprobability distributions, but rather as averages overreplications, that is different runs of the model.Confidence intervalsare usually constructed to help assess the quality of the output.
Because events are bootstrapped, theoretically a discrete-event simulation could run forever. So the simulation designer must decide when the simulation will end. Typical choices are "at time t" or "after processing n number of events" or, more generally, "when statistical measure X reaches the value x".
Pidd (1998) has proposed the three-phased approach to discrete event simulation. In this approach, the first phase is to jump to the next chronological event. The second phase is to execute all events that unconditionally occur at that time (these are called B-events). The third phase is to execute all events that conditionally occur at that time (these are called C-events). The three phase approach is a refinement of the event-based approach in which simultaneous events are ordered so as to make the most efficient use of computer resources. The three-phase approach is used by a number of commercial simulation software packages, but from the user's point of view, the specifics of the underlying simulation method are generally hidden.
Simulation approaches are particularly well equipped to help users diagnose issues in complex environments. Thetheory of constraintsillustrates the importance of understanding bottlenecks in a system. Identifying and removing bottlenecks allows improving processes and the overall system. For instance, in manufacturing enterprises bottlenecks may be created by excess inventory,overproduction, variability in processes and variability in routing or sequencing. By accurately documenting the system with the help of a simulation model it is possible to gain a bird’s eye view of the entire system.
A working model of a system allows management to understand performance drivers. A simulation can be built to include any number ofperformance indicatorssuch as worker utilization, on-time delivery rate, scrap rate, cash cycles, and so on.
An operating theater is generally shared between several surgical disciplines. Through better understanding the nature of these procedures it may be possible to increase the patient throughput.[14]Example: If a heart surgery takes on average four hours, changing an operating room schedule from eight available hours to nine will not increase patient throughput. On the other hand, if a hernia procedure takes on average twenty minutes providing an extra hour may also not yield any increased throughput if the capacity and average time spent in the recovery room is not considered.
Many systems improvement ideas are built on sound principles, proven methodologies (Lean,Six Sigma,TQM, etc.) yet fail to improve the overall system. A simulation model allows the user to understand and test a performance improvement idea in the context of the overall system.
Simulation modeling is commonly used to model potential investments. Through modeling investments decision-makers can make informed decisions and evaluate potential alternatives.
Discrete event simulation is used in computer network to simulate new protocols, different system architectures (distributed, hierarchical, centralised, P2P) before actual deployment. It is possible to define different evaluation metrics, such as service time, bandwidth, dropped packets, resource consumption, and so on.
System modeling approaches:
Computational techniques:
Software:
Disciplines:
|
https://en.wikipedia.org/wiki/Discrete_event_simulation
|
Distributed artificial intelligence(DAI) also called Decentralized Artificial Intelligence[1]is a subfield ofartificial intelligenceresearch dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field ofmulti-agent systems.
Multi-agent systems and distributed problem solving are the two main DAI approaches. There are numerous applications and tools.
Distributed Artificial Intelligence (DAI) is an approach to solving complex learning,planning, and decision-making problems. It isembarrassingly parallel, thus able to exploit large scale computation andspatial distributionofcomputing resources. These properties allow it to solve problems that require the processing of very largedata sets. DAI systems consist of autonomous learningprocessing nodes(agents), that are distributed, often at a very large scale. DAI nodes can act independently, and partial solutions are integrated by communication between nodes,often asynchronously. By virtue of their scale, DAI systems are robust and elastic, and by necessity, loosely coupled. Furthermore, DAI systems are built to be adaptive to changes in the problem definition or underlying data sets due to the scale and difficulty in redeployment.
DAI systems do not require all the relevant data to beaggregatedin a single location, in contrast tomonolithicorcentralizedArtificial Intelligence systems which have tightly coupled and geographically close processing nodes. Therefore, DAI systems often operate on sub-samples or hashed impressions of very largedatasets. In addition, the source dataset may change or be updated during the course of the execution of a DAI system.
In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents.[2]Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into multi-agent systems and distributed problem solving.[3]Inmulti-agent systemsthe main focus is how agents coordinate their knowledge and activities. For distributed problem solving the major focus is how the problem is decomposed and the solutions are synthesized.
The objectives of Distributed Artificial Intelligence are to solve thereasoning, planning, learning and perception problems ofartificial intelligence, especially if they require large data, by distributing the problem to autonomous processing nodes (agents). To reach the objective, DAI requires:
There are many reasons for wanting to distribute intelligence or cope with multi-agent systems. Mainstream problems in DAI research include the following:
Two types of DAI have emerged:
DAI can apply a bottom-up approach to AI, similar to the subsumption architecture, as well as the traditional top-down approach of AI. In addition, DAI can also be a vehicle for emergence.
The challenges in Distributed AI are:
Areas where DAI have been applied are:
DAI integration in tools has included:
Notion of Agents: Agents can be described as distinct entities with standard boundaries and interfaces designed for problem solving.
Notion of Multi-Agents: A multi-agent system is defined as a network of loosely coupled agents working together as a single entity, like a society, to solve problems that an individual agent cannot solve.
The key concept used in DPS and MABS is the abstraction calledsoftware agents. An agent is a virtual (or physical)autonomousentity that has an understanding of its environment and acts upon it. An agent is usually able to communicate with other agents in the same system to achieve a common goal, that one agent alone could not achieve. This communication system uses anagent communication language.
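A minimal sketch of two cooperating agents, with invented names and a trivial "communication language" of key-value messages: each agent starts with partial knowledge and they exchange messages until both hold the combined answer, a goal neither could reach alone.

```python
# Minimal two-agent sketch (illustrative only): agents with partial knowledge exchange
# messages until the shared goal (knowing both parts of the answer) is reached.
from collections import deque

class Agent:
    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = dict(knowledge)   # each agent starts with partial knowledge

    def step(self, inbox, outbox):
        while inbox:                       # read messages from other agents
            key, value = inbox.popleft()
            self.knowledge[key] = value
        for key, value in self.knowledge.items():
            outbox.append((key, value))    # share what this agent knows

a = Agent("a", {"temperature": 21.5})
b = Agent("b", {"humidity": 0.4})
to_a, to_b = deque(), deque()

for _ in range(2):                         # a couple of communication rounds
    a.step(to_a, to_b)
    b.step(to_b, to_a)

print(a.knowledge, b.knowledge)            # both now hold the combined knowledge
```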
A first classification that is useful is to divide agents into:
Well-recognized agent architectures that describe how an agent is internally structured are:
|
https://en.wikipedia.org/wiki/Distributed_artificial_intelligence
|
Evolutionary computationfromcomputer scienceis a family ofalgorithmsforglobal optimizationinspired bybiological evolution, and the subfield ofartificial intelligenceandsoft computingstudying these algorithms. In technical terms, they are a family of population-basedtrial and errorproblem solvers with ametaheuristicorstochastic optimizationcharacter.
In evolutionary computation, an initial set of candidate solutions is generated and iteratively updated. Each new generation is produced by stochastically removing less desired solutions, and introducing small random changes as well as, depending on the method, mixing parental information. In biological terminology, a population of solutions is subjected to natural selection (or artificial selection), mutation and possibly recombination. As a result, the population will gradually evolve to increase in fitness, in this case the chosen fitness function of the algorithm.
Evolutionary computation techniques can produce highly optimized solutions in a wide range of problem settings, making them popular in computer science. Many variants and extensions exist, suited to more specific families of problems and data structures. Evolutionary computation is also sometimes used in evolutionary biology as an in silico experimental procedure to study common aspects of general evolutionary processes.
The concept of mimicking evolutionary processes to solve problems originates before the advent of computers, such as when Alan Turing proposed a method of genetic search in 1948.[1] Turing's B-type u-machines resemble primitive neural networks, and connections between neurons were learnt via a sort of genetic algorithm. His P-type u-machines resemble a method for reinforcement learning, where pleasure and pain signals direct the machine to learn certain behaviors. However, Turing's paper went unpublished until 1968, and he died in 1954, so this early work had little to no effect on the field of evolutionary computation that was to develop.[2]
Evolutionary computing as a field began in earnest in the 1950s and 1960s.[1] There were several independent attempts to use the process of evolution in computing at this time, which developed separately for roughly 15 years. Three branches emerged in different places to attain this goal: evolution strategies, evolutionary programming, and genetic algorithms. A fourth branch, genetic programming, eventually emerged in the early 1990s. These approaches differ in the method of selection, the permitted mutations, and the representation of genetic data. By the 1990s, the distinctions between the historic branches had begun to blur, and the term 'evolutionary computing' was coined in 1991 to denote a field that exists over all four paradigms.[3]
In 1962, Lawrence J. Fogel initiated the research of Evolutionary Programming in the United States, which was considered an artificial intelligence endeavor. In this system, finite state machines were used to solve a prediction problem: these machines would be mutated (adding or deleting states, or changing the state transition rules), and the best of these mutated machines would be evolved further in future generations. The final finite state machine may be used to generate predictions when needed. The evolutionary programming method was successfully applied to prediction problems, system identification, and automatic control. It was eventually extended to handle time series data and to model the evolution of gaming strategies.[3]
In 1964, Ingo Rechenberg and Hans-Paul Schwefel introduced the paradigm of evolution strategies in Germany.[3] Since traditional gradient descent techniques produce results that may get stuck in local minima, Rechenberg and Schwefel proposed that random mutations (applied to all parameters of some solution vector) may be used to escape these minima. Child solutions were generated from parent solutions, and the more successful of the two was kept for future generations. This technique was first used by the two to successfully solve optimization problems in fluid dynamics.[4] Initially, this optimization technique was performed without computers, instead relying on dice to determine random mutations. By 1965, the calculations were performed wholly by machine.[3]
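The following is a minimal sketch in the spirit of that scheme, a (1+1) evolution strategy: every parameter of the parent is perturbed by a random mutation and the better of parent and child survives. The objective (the sphere function) and the step size are illustrative choices, not taken from the original experiments.

```python
# (1+1) evolution strategy sketch: mutate all parameters, keep the better of
# parent and child. Objective and step size are illustrative assumptions.
import random

def fitness(x):                       # minimise the sphere function
    return sum(v * v for v in x)

parent = [random.uniform(-5, 5) for _ in range(3)]
step = 0.5
for _ in range(2000):
    child = [v + random.gauss(0, step) for v in parent]   # mutate every parameter
    if fitness(child) <= fitness(parent):                 # keep the better one
        parent = child
print(parent, fitness(parent))
```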
John Henry Holland introduced genetic algorithms in the 1960s, and they were further developed at the University of Michigan in the 1970s.[5] While the other approaches were focused on solving problems, Holland primarily aimed to use genetic algorithms to study adaptation and determine how it may be simulated. Populations of chromosomes, represented as bit strings, were transformed by an artificial selection process, selecting for specific 'allele' bits in the bit string. Among other mutation methods, interactions between chromosomes were used to simulate the recombination of DNA between different organisms. While previous methods only tracked a single optimal organism at a time (having children compete with parents), Holland's genetic algorithms tracked large populations (having many organisms compete each generation).
By the 1990s, a new approach to evolutionary computation that came to be called genetic programming emerged, advocated for by John Koza among others.[3] In this class of algorithms, the subject of evolution was itself a program written in a high-level programming language (there had been some previous attempts as early as 1958 to use machine code, but they met with little success). For Koza, the programs were Lisp S-expressions, which can be thought of as trees of sub-expressions. This representation permits programs to swap subtrees, representing a sort of genetic mixing. Programs are scored based on how well they complete a certain task, and the score is used for artificial selection. Sequence induction, pattern recognition, and planning were all successful applications of the genetic programming paradigm.
Many other figures played a role in the history of evolutionary computing, although their work did not always fit into one of the major historical branches of the field. The earliest computational simulations of evolution using evolutionary algorithms and artificial life techniques were performed by Nils Aall Barricelli in 1953, with first results published in 1954.[6] Another pioneer in the 1950s was Alex Fraser, who published a series of papers on simulation of artificial selection.[7] As academic interest grew, dramatic increases in the power of computers allowed practical applications, including the automatic evolution of computer programs.[8] Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers, and also to optimize the design of systems.[9][10]
Evolutionary computing techniques mostly involve metaheuristic optimization algorithms. Broadly speaking, the field includes:
A thorough catalogue with many other recently proposed algorithms has been published in the Evolutionary Computation Bestiary.[11] Many recent algorithms, however, have poor experimental validation.[12]
Evolutionary algorithms form a subset of evolutionary computation in that they generally only involve techniques implementing mechanisms inspired by biological evolution such as reproduction, mutation, recombination and natural selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the cost function determines the environment within which the solutions "live" (see also fitness function). Evolution of the population then takes place after the repeated application of the above operators.
In this process, there are two main forces that form the basis of evolutionary systems: recombination (e.g. crossover) and mutation create the necessary diversity and thereby facilitate novelty, while selection acts as a force increasing quality.
Many aspects of such an evolutionary process are stochastic. Changed pieces of information due to recombination and mutation are randomly chosen. On the other hand, selection operators can be either deterministic or stochastic. In the latter case, individuals with a higher fitness have a higher chance to be selected than individuals with a lower fitness, but typically even the weak individuals have a chance to become a parent or to survive.
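A compact genetic-algorithm sketch of the loop just described, with stochastic (fitness-proportionate) parent selection, one-point crossover and bit-flip mutation; the problem (OneMax), rates and sizes are illustrative assumptions:

```python
# Genetic-algorithm sketch: stochastic selection, recombination, mutation.
import random

def fitness(bits):                               # "OneMax": count the 1-bits
    return sum(bits)

def evolve(pop_size=30, length=20, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) + 1e-9 for ind in pop]           # avoid zero weights
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = random.choices(pop, weights=weights, k=2)    # stochastic selection
            cut = random.randrange(1, length)                     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if random.random() < p_mut else b for b in child]  # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```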
Genetic algorithms deliver methods to model biological systems and systems biology that are linked to the theory of dynamical systems, since they are used to predict the future states of the system. This is just a vivid (but perhaps misleading) way of drawing attention to the orderly, well-controlled and highly structured character of development in biology.
However, the use of algorithms and informatics, in particular of computational theory, beyond the analogy to dynamical systems, is also relevant for understanding evolution itself.
This view has the merit of recognizing that there is no central control of development; organisms develop as a result of local interactions within and between cells. The most promising ideas about program-development parallels are those that point to an apparently close analogy between processes within cells and the low-level operation of modern computers.[13] Thus, biological systems are like computational machines that process input information to compute next states, such that biological systems are closer to a computation than to a classical dynamical system.[14]
Furthermore, following concepts from computational theory, micro processes in biological organisms are fundamentally incomplete and undecidable (completeness (logic)), implying that "there is more than a crude metaphor behind the analogy between cells and computers".[15]
The analogy to computation extends also to the relationship betweeninheritance systemsand biological structure, which is often thought to reveal one of the most pressing problems in explaining the origins of life.
Evolutionary automata,[16][17][18] a generalization of Evolutionary Turing machines,[19][20] have been introduced in order to investigate more precisely the properties of biological and evolutionary computation. In particular, they make it possible to obtain new results on the expressiveness of evolutionary computation.[18][21] This confirms the initial result about the undecidability of natural evolution and of evolutionary algorithms and processes. Evolutionary finite automata, the simplest subclass of evolutionary automata working in terminal mode, can accept arbitrary languages over a given alphabet, including non-recursively enumerable languages (e.g., the diagonalization language) and recursively enumerable but not recursive languages (e.g., the language of the universal Turing machine).[22]
The list of active researchers is naturally dynamic and non-exhaustive. A network analysis of the community was published in 2007.[23]
While articles on or using evolutionary computation permeate the literature, several journals are dedicated to evolutionary computation:
The main conferences in the evolutionary computation area include
|
https://en.wikipedia.org/wiki/Evolutionary_computation
|
Friendly artificial intelligence (friendly AI or FAI) is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests such as fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.
The term was coined by Eliezer Yudkowsky,[1] who is best known for popularizing the idea,[2][3] to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:[2]
Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.
"Friendly" is used in this context astechnical terminology, and picks out agents that are safe and useful, not necessarily ones that are "friendly" in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidlyexplode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.[4]
The roots of concern about artificial intelligence are very old. Kevin LaGrandeur showed that the dangers specific to AI can be seen in ancient literature concerning artificial humanoid servants such as the golem, or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those stories, the extreme intelligence and power of these humanoid creations clash with their status as slaves (which by nature are seen as sub-human), and cause disastrous conflict.[5] By 1942 these themes prompted Isaac Asimov to create the "Three Laws of Robotics"—principles hard-wired into all the robots in his fiction, intended to prevent them from turning on their creators, or allowing them to come to harm.[6]
In modern times as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has said that superintelligent AI systems with goals that are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:
Basically we should assume that a 'superintelligence' would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is 'human friendly.'
In 2008, Eliezer Yudkowsky called for the creation of "friendly AI" to mitigate existential risk from advanced artificial intelligence. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[7]
Steve Omohundro says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of basic "drives", such as resource acquisition, self-preservation, and continuous self-improvement, because of the intrinsic nature of any goal-driven systems, and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior.[8][9]
Alexander Wissner-Gross says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold.[10][11]
Luke Muehlhauser, writing for the Machine Intelligence Research Institute, recommends that machine ethics researchers adopt what Bruce Schneier has called the "security mindset": rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests even an AI that only makes accurate predictions and communicates via a text interface might cause unintended harm.[12]
In 2014, Luke Muehlhauser and Nick Bostrom underlined the need for 'friendly AI';[13] nonetheless, the difficulties in designing a 'friendly' superintelligence, for instance via programming counterfactual moral thinking, are considerable.[14][15]
Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted".[16]
Rather than a Friendly AI being designed directly by human programmers, it is to be designed by a "seed AI" programmed to first study human nature and then produce the AI that humanity would want, given sufficient time and insight, to arrive at a satisfactory answer.[16] The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.
Steve Omohundro has proposed a "scaffolding" approach to AI safety, in which one provably safe AI generation helps build the next provably safe generation.[17]
Seth Baum argues that the development of safe, socially beneficial artificial intelligence or artificial general intelligence is a function of the social psychology of AI research communities, and so can be constrained by extrinsic measures and motivated by intrinsic measures. Intrinsic motivations can be strengthened when messages resonate with AI developers; Baum argues that, in contrast, "existing messages about beneficial AI are not always framed well". Baum advocates for "cooperative relationships, and positive framing of AI researchers" and cautions against characterizing AI researchers as "not want(ing) to pursue beneficial designs".[18]
In his book Human Compatible, AI researcher Stuart J. Russell lists three principles to guide the development of beneficial machines. He emphasizes that these principles are not meant to be explicitly coded into the machines; rather, they are intended for the human developers. The principles are as follows:[19]: 173
The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future."[19]: 173Similarly, "behavior" includes any choice between options,[19]: 177and the uncertainty is such that some probability, which may be quite small, must be assigned to every logically possible human preference.[19]: 201
James Barrat, author of Our Final Invention, suggested that "a public-private partnership has to be created to bring A.I.-makers together to share ideas about security—something like the International Atomic Energy Agency, but in partnership with corporations." He urges AI researchers to convene a meeting similar to the Asilomar Conference on Recombinant DNA, which discussed risks of biotechnology.[17]
John McGinnis encourages governments to accelerate friendly AI research. Because the goalposts of friendly AI are not necessarily eminent, he suggests a model similar to the National Institutes of Health, where "Peer review panels of computer and cognitive scientists would sift through projects and choose those that are designed both to advance AI and assure that such advances would be accompanied by appropriate safeguards." McGinnis feels that peer review is better "than regulation to address technical issues that are not possible to capture through bureaucratic mandates". McGinnis notes that his proposal stands in contrast to that of the Machine Intelligence Research Institute, which generally aims to avoid government involvement in friendly AI.[20]
Some critics believe that both human-level AI and superintelligence are unlikely and that, therefore, friendly AI is unlikely. Writing in The Guardian, Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty, and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence.[21] Boyles and Joaquin, on the other hand, argue that Luke Muehlhauser and Nick Bostrom's proposal to create friendly AIs appears to be bleak. This is because Muehlhauser and Bostrom seem to hold the idea that intelligent machines could be programmed to think counterfactually about the moral values that human beings would have had.[13] In an article in AI & Society, Boyles and Joaquin maintain that such AIs would not be that friendly considering the following: the infinite amount of antecedent counterfactual conditions that would have to be programmed into a machine, the difficulty of cashing out the set of moral values—that is, those that are more ideal than the ones human beings possess at present, and the apparent disconnect between counterfactual antecedents and ideal value consequent.[14]
Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful.[22] Other critics question whether artificial intelligence can be friendly. Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible ever to guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power. They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes, but certainty and consensus on how one values the different outcomes".[23]
The inner workings of advanced AI systems may be complex and difficult to interpret, leading to concerns about transparency and accountability.[24]
|
https://en.wikipedia.org/wiki/Friendly_artificial_intelligence
|
Game theory is the study of mathematical models of strategic interactions.[1] It has applications in many fields of social science, and is used extensively in economics, logic, systems science and computer science.[2] Initially, game theory addressed two-person zero-sum games, in which a participant's gains or losses are exactly balanced by the losses and gains of the other participant. In the 1950s, it was extended to the study of non-zero-sum games, and was eventually applied to a wide range of behavioral relations. It is now an umbrella term for the science of rational decision making in humans, animals, and computers.
Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by Theory of Games and Economic Behavior (1944), co-written with Oskar Morgenstern, which considered cooperative games of several players.[3] The second edition provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
Game theory was developed extensively in the 1950s, and was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory in 1999, and fifteen game theorists have won the Nobel Prize in economics as of 2020, including most recently Paul Milgrom and Robert B. Wilson.
In 1713, a letter attributed to Charles Waldegrave, an active Jacobite and uncle to British diplomat James Waldegrave, analyzed a game called "le her". Waldegrave provided a minimax mixed strategy solution to a two-person version of the card game, and the problem is now known as the Waldegrave problem.[4][5]
In 1838, Antoine Augustin Cournot provided a model of competition in oligopolies. Though he did not refer to it as such, he presented a solution that is the Nash equilibrium of the game in his Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth). In 1883, Joseph Bertrand critiqued Cournot's model as unrealistic, providing an alternative model of price competition[6] which would later be formalized by Francis Ysidro Edgeworth.[7]
In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels (On an Application of Set Theory to the Theory of the Game of Chess), which proved that the optimal chess strategy is strictly determined.[8]
The work of John von Neumann established game theory as its own independent field in the early-to-mid 20th century, with von Neumann publishing his paper On the Theory of Games of Strategy in 1928.[9][10] Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. Von Neumann's work in game theory culminated in his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.[11] The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility (of money) as an independent discipline. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. Subsequent work focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.[12]
In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix is symmetric, and provided a solution to a non-trivial infinite game (known in English as the Blotto game). Borel conjectured the non-existence of mixed-strategy equilibria in finite two-person zero-sum games, a conjecture that was proved false by von Neumann.[13]
In 1950, John Nash developed a criterion for mutual consistency of players' strategies known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. Nash proved that every finite n-player, non-zero-sum (not just two-player zero-sum) non-cooperative game has what is now known as a Nash equilibrium in mixed strategies.
Game theory experienced a flurry of activity in the 1950s, during which the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. The 1950s also saw the first applications of game theory to philosophy and political science. The first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by mathematicians Merrill M. Flood and Melvin Dresher, as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy.[14]
In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. Later he would introduce trembling hand perfection as well. In 1994 Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory.
In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection and common knowledge[a] were introduced and analyzed.
In 1994, John Nash was awarded the Nobel Memorial Prize in the Economic Sciences for his contribution to game theory. Nash's most famous contribution to game theory is the concept of the Nash equilibrium, which is a solution concept for non-cooperative games, published in 1951. A Nash equilibrium is a set of strategies, one for each player, such that no player can improve their payoff by unilaterally changing their strategy.
In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten, and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing equilibrium coarsening and correlated equilibria, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.
In 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson were awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory". Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict.[1] Hurwicz introduced and formalized the concept of incentive compatibility.
In 2012, Alvin E. Roth and Lloyd S. Shapley were awarded the Nobel Prize in Economics "for the theory of stable allocations and the practice of market design". In 2014, the Nobel went to game theorist Jean Tirole.
A game is cooperative if the players are able to form binding commitments externally enforced (e.g. through contract law). A game is non-cooperative if players cannot form alliances or if all agreements need to be self-enforcing (e.g. through credible threats).[15]
Cooperative games are often analyzed through the framework of cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs. It is different from non-cooperative game theory, which focuses on predicting individual players' actions and payoffs by analyzing Nash equilibria.[16][17]
Cooperative game theory provides a high-level approach as it describes only the structure and payoffs of coalitions, whereas non-cooperative game theory also looks at how strategic interaction will affect the distribution of payoffs. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold) provided that sufficient assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation.
A symmetric game is a game where each player earns the same payoff when making the same choice. In other words, the identity of the player does not change the resulting game facing the other player.[18] Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games.
The most commonly studied asymmetric games are games where there are not identical strategy sets for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric. For example, the game pictured in this section's graphic is asymmetric despite having identical strategy sets for both players.
Zero-sum games (more generally, constant-sum games) are games in which choices by players can neither increase nor decrease the available resources. In zero-sum games, the total benefit to all players in the game, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others).[19] Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess.
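The defining property is easy to check numerically; a minimal sketch using the standard matching pennies payoffs:

```python
# Matching pennies as a bimatrix: for every combination of strategies the
# two payoffs sum to zero, which is the defining property of a zero-sum game.
payoffs = {                         # (row player, column player)
    ("Heads", "Heads"): (+1, -1),
    ("Heads", "Tails"): (-1, +1),
    ("Tails", "Heads"): (-1, +1),
    ("Tails", "Tails"): (+1, -1),
}
print("zero-sum:", all(a + b == 0 for a, b in payoffs.values()))  # True
```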
Many games studied by game theorists (including the famed prisoner's dilemma) are non-zero-sum games, because the outcome has net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.
Furthermore, constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any constant-sum game into a (possibly asymmetric) zero-sum game by adding a dummy player (often called "the board") whose losses compensate the players' net winnings.
Simultaneous games are games where both players move simultaneously, or instead the later players are unaware of the earlier players' actions (making them effectively simultaneous). Sequential games (a type of dynamic games) are games where players do not make decisions simultaneously, and players' earlier actions affect the outcome and decisions of other players.[20] This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while they do not know which of the other available actions the first player actually performed.
The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, while extensive form is used to represent sequential ones. The transformation of extensive to normal form is one way, meaning that multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.
In short, the differences between sequential and simultaneous games are as follows:
An important subset of sequential games consists of games of perfect information. A game with perfect information means that all players, at every move in the game, know the previous history of the game and the moves previously made by all other players. An imperfect information game is played when the players do not know all moves already made by the opponent, such as a simultaneous move game.[21] Examples of perfect-information games include tic-tac-toe, checkers, chess, and Go.[22][23][24]
Many card games are games of imperfect information, such as poker and bridge.[25] Perfect information is often confused with complete information, which is a similar concept pertaining to the common knowledge of each player's sequence, strategies, and payoffs throughout gameplay.[26] Complete information requires that every player know the strategies and payoffs available to the other players but not necessarily the actions taken, whereas perfect information is knowledge of all aspects of the game and players.[27] Games of incomplete information can be reduced, however, to games of imperfect information by introducing "moves by nature".[28]
One of the assumptions of the Nash equilibrium is that every player has correct beliefs about the actions of the other players. However, there are many situations in game theory where participants do not fully understand the characteristics of their opponents. Negotiators may be unaware of their opponent's valuation of the object of negotiation, companies may be unaware of their opponent's cost functions, combatants may be unaware of their opponent's strengths, and jurors may be unaware of their colleague's interpretation of the evidence at trial. In some cases, participants may know the character of their opponent well, but may not know how well their opponent knows his or her own character.[29]
A Bayesian game is a strategic game with incomplete information. For a strategic game, decision makers are players, and every player has a group of actions. A core part of the imperfect information specification is the set of states. Every state completely describes a collection of characteristics relevant to the player, such as their preferences and details about them. There must be a state for every set of features that some player believes may exist.[30]
For example, suppose Player 1 is unsure whether Player 2 would rather date her or get away from her, while Player 2 understands Player 1's preferences as before. To be specific, suppose that Player 1 believes that Player 2 wants to date her with probability 1/2 and to get away from her with probability 1/2 (this evaluation probably comes from Player 1's experience: she faces players who want to date her half of the time and players who want to avoid her half of the time). Due to the probability involved, the analysis of this situation requires understanding the player's preference for the draw, even though people are only interested in pure strategic equilibrium.
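A hedged numerical sketch of this Bayesian reasoning, in which Player 1 averages her payoff over the two possible types of Player 2; the payoff numbers here are invented purely for illustration:

```python
# Player 1's belief about Player 2's type, and hypothetical payoffs for
# Player 1 given her own action and Player 2's type (the "meet" type shows
# up at the venue, the "avoid" type stays away).
belief = {"meet": 0.5, "avoid": 0.5}

payoff_p1 = {
    ("go_out", "meet"): 3,      # they meet at the venue
    ("go_out", "avoid"): 0,     # Player 1 waits alone
    ("stay_home", "meet"): 1,   # missed opportunity
    ("stay_home", "avoid"): 1,
}

for action in ("go_out", "stay_home"):
    expected = sum(belief[t] * payoff_p1[(action, t)] for t in belief)
    print(action, expected)     # go_out: 1.5, stay_home: 1.0
```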
Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and Go. Games that involve imperfect information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are, however, mathematical tools that can solve some particular problems and answer some general questions.[31]
Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory.[32][33] A typical game that has been solved this way is Hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies.[34]
Research in artificial intelligence has addressed both perfect and imperfect information games that have very complex combinatorial structures (like chess, go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha–beta pruning, or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.[31][35]
Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities.
Differential games such as the continuous pursuit and evasion game are continuous games where the evolution of the players' state variables is governed by differential equations. The problem of finding an optimal strategy in a differential game is closely related to optimal control theory. In particular, there are two types of strategies: the open-loop strategies are found using the Pontryagin maximum principle, while the closed-loop strategies are found using Bellman's dynamic programming method.
A particular case of differential games are games with a random time horizon.[36] In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval.
Evolutionary game theory studies players who adjust their strategies over time according to rules that are not necessarily rational or farsighted.[37] In general, the evolution of strategies over time according to such rules is modeled as a Markov chain with a state variable such as the current strategy profile or how the game has been played in the recent past. Such rules may feature imitation, optimization, or survival of the fittest.
In biology, such models can represent evolution, in which offspring adopt their parents' strategies and parents who play more successful strategies (i.e. corresponding to higher payoffs) have a greater number of offspring. In the social sciences, such models typically represent strategic adjustment by players who play a game many times within their lifetime and, consciously or unconsciously, occasionally adjust their strategies.[38]
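A discrete replicator-style update for the Hawk-Dove game is a standard illustration of the kind of adjustment rule described above: strategies earning above-average payoff grow in the population. The parameter choices (V, C, the background fitness w0 and the starting share) are illustrative assumptions.

```python
# Hawk-Dove replicator-style dynamics: the share of Hawks grows when Hawks
# earn above-average payoff, and shrinks otherwise.
V, C, w0 = 2.0, 3.0, 2.0                          # value, cost of fighting, background fitness
payoff = {("H", "H"): (V - C) / 2, ("H", "D"): V,
          ("D", "H"): 0.0,         ("D", "D"): V / 2}

x = 0.1                                           # initial share of Hawks
for _ in range(500):
    f_h = x * payoff[("H", "H")] + (1 - x) * payoff[("H", "D")]   # Hawk payoff
    f_d = x * payoff[("D", "H")] + (1 - x) * payoff[("D", "D")]   # Dove payoff
    avg = x * f_h + (1 - x) * f_d
    x = x * (w0 + f_h) / (w0 + avg)               # above-average strategies grow
print(round(x, 3))                                # converges towards V/C = 2/3
```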
Individual decision problems with stochastic outcomes are sometimes considered "one-player games". They may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent systems. Although these fields may have different motivators, the mathematics involved are substantially the same, e.g. using Markov decision processes (MDP).[39]
Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves" ("moves by nature").[40] This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game.
For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also overestimate extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen.[41] (See Black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.)
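A toy contrast between the two ways of reasoning, with payoff numbers and probabilities that are purely illustrative:

```python
# Expectation under an assumed distribution (MDP-style) versus the worst case
# over the same set of adversarial moves (minimax-style).
our_options = {
    "aggressive": {"adversary_a": 10, "adversary_b": -50},   # high upside, rare disaster
    "cautious":   {"adversary_a": 4,  "adversary_b": 2},
}
adversary_prob = {"adversary_a": 0.95, "adversary_b": 0.05}

for name, outcomes in our_options.items():
    expected = sum(adversary_prob[m] * v for m, v in outcomes.items())
    worst = min(outcomes.values())
    print(f"{name}: expected={expected:+.1f}, worst case={worst:+d}")
# The expectation favours "aggressive" (+7.0 vs +3.9), while the worst-case
# view favours "cautious" (-50 vs +2): an unlikely but costly adversarial move
# dominates the minimax recommendation.
```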
General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be the partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.[41]
These are games the play of which is the development of the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory.
The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard,[42] whereby a situation is framed as a strategic game in which stakeholders try to realize their objectives by means of the options available to them. Subsequent developments have led to the formulation of confrontation analysis.
Mean field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Peter E. Caines, and by mathematicians Pierre-Louis Lions and Jean-Michel Lasry.
The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the players of the game, the information and actions available to each player at each decision point, and the payoffs for each outcome. (Eric Rasmusen refers to these four "essential elements" by the acronym "PAPI".)[43][44][45][46] A game theorist typically uses these elements, along with a solution concept of their choosing, to deduce a set of equilibrium strategies for each player such that, when these strategies are employed, no player can profit by unilaterally deviating from their strategy. These equilibrium strategies determine an equilibrium to the game—a stable state in which either one outcome occurs or a set of outcomes occur with known probability.
Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.
The extensive form can be used to formalize games with a time sequencing of moves. Extensive form games can be visualized using game trees (as pictured here). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent a possible action for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree.[47] To solve any extensive form game, backward induction must be used. It involves working backward up the game tree to determine what a rational player would do at the last vertex of the tree, what the player with the previous move would do given that the player with the last move is rational, and so on until the first vertex of the tree is reached.[48]
The game pictured consists of two players. The way this particular game is structured (i.e., with sequential decision making and perfect information), Player 1 "moves" first by choosing either F or U (fair or unfair). Next in the sequence, Player 2, who has now observed Player 1's move, can choose to play either A or R (accept or reject). Once Player 2 has made their choice, the game is considered finished and each player gets their respective payoff, represented in the image as two numbers, where the first number represents Player 1's payoff, and the second number represents Player 2's payoff. Suppose that Player 1 chooses U and then Player 2 chooses A: Player 1 then gets a payoff of "eight" (which in real-world terms can be interpreted in many ways, the simplest of which is in terms of money, but could mean things such as eight days of vacation or eight countries conquered or even eight more opportunities to play the same game against other players) and Player 2 gets a payoff of "two".
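Backward induction on a game of this shape can be sketched in a few lines. Only the (U, A) payoff of (8, 2) appears in the text; the remaining payoffs below are hypothetical fillers for illustration.

```python
# Backward induction: the last mover's best reply is computed first, then
# the first mover chooses the branch that is best given those replies.
payoffs = {
    ("F", "A"): (5, 5), ("F", "R"): (0, 0),     # assumed payoffs
    ("U", "A"): (8, 2), ("U", "R"): (0, 0),     # (U, A) from the text; (U, R) assumed
}

def backward_induction():
    plan = {}
    for move1 in ("F", "U"):
        # Player 2 moves last: pick the reply maximising Player 2's payoff
        plan[move1] = max(("A", "R"), key=lambda m2: payoffs[(move1, m2)][1])
    # Player 1 then picks the move that is best given Player 2's replies
    best1 = max(("F", "U"), key=lambda m1: payoffs[(m1, plan[m1])][0])
    return best1, plan[best1], payoffs[(best1, plan[best1])]

print(backward_induction())     # with these numbers: ('U', 'A', (8, 2))
```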
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e. the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.)
The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3.
When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.
Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical.[49]
In cooperative game theory the characteristic function lists the payoff of each coalition. The origin of this formulation is in John von Neumann and Oskar Morgenstern's book.[50]
Formally, a characteristic function is a function v : 2^N → ℝ[51] from the set of all possible coalitions of players to a set of payments, and also satisfies v(∅) = 0. The function describes how much collective payoff a set of players can gain by forming a coalition.
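A minimal sketch of such a characteristic-function game with three players; the payoff numbers are illustrative, and the function assigns a value to every coalition with v(∅) = 0 as required:

```python
# A three-player characteristic function represented as a mapping from
# coalitions (frozensets of players) to the payoff that coalition can secure.
from itertools import combinations

players = ("A", "B", "C")
v = {frozenset(): 0,
     frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 1,
     frozenset("AB"): 3, frozenset("AC"): 3, frozenset("BC"): 3,
     frozenset("ABC"): 6}

assert v[frozenset()] == 0              # v(∅) = 0
for size in range(len(players) + 1):    # list what each coalition can obtain
    for coalition in combinations(players, size):
        print(set(coalition) or "∅", v[frozenset(coalition)])
```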
Alternative game representation forms are used for some subclasses of games or adjusted to the needs of interdisciplinary research.[52]In addition to classical game representations, some of the alternative representations also encode time related aspects.
As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The first use of game-theoretic analysis was by Antoine Augustin Cournot in 1838 with his solution of the Cournot duopoly. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well.[67]
Although pre-twentieth-century naturalists such as Charles Darwin made game-theoretic kinds of statements, the use of game-theoretic analysis in biology began with Ronald Fisher's studies of animal behavior during the 1930s. This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his 1982 book Evolution and the Theory of Games.[68]
In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior.[69] In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic approaches have also been suggested in the philosophy of language and philosophy of science.[70] Game-theoretic arguments of this type can be found as far back as Plato.[71] An alternative version of game theory, called chemical game theory, represents the player's choices as metaphorical chemical reactant molecules called "knowlecules".[72] Chemical game theory then calculates the outcomes as equilibrium solutions to a system of chemical reactions.
The primary use of game theory is to describe and model how human populations behave.[citation needed] Some[who?] scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has been criticized. It is argued that the assumptions made by game theorists are often violated when applied to real-world situations. Game theorists usually assume players act rationally, but in practice human rationality and/or behavior often deviates from the model of rationality as used in game theory. Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, empirical work has shown that in some classic games, such as the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments and whether the analysis of the experiments fully captures all aspects of the relevant situation.[b]
Some game theorists, following the work of John Maynard Smith and George R. Price, have turned to evolutionary game theory in order to resolve these issues. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).
Some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a strategy corresponding to a Nash equilibrium of a game constitutes one's best response to the actions of the other players – provided they are in (the same) Nash equilibrium – playing a strategy that is part of a Nash equilibrium seems appropriate. This normative use of game theory has also come under criticism.[74]
Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents.[c][75][76][77] Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers and acquisitions pricing,[78] fair division, duopolies, oligopolies, social network formation, agent-based computational economics,[79][80] general equilibrium, mechanism design,[81][82][83][84][85] and voting systems;[86] and across such broad areas as experimental economics,[87][88][89][90][91] behavioral economics,[92][93][94][95][96][97] information economics,[43][44][45][46] industrial organization,[98][99][100][101] and political economy.[102][103][104][45]
This research usually focuses on particular sets of strategies known as "solution concepts" or "equilibria". A common assumption is that players act rationally. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. If all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.[105][106]
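The best-response condition can be checked by brute force on small games. A minimal sketch using the familiar Prisoner's Dilemma payoffs (the numbers are illustrative): a strategy profile is a Nash equilibrium if neither player can gain by deviating unilaterally.

```python
# Brute-force Nash equilibrium check for a 2x2 bimatrix game.
payoffs = {                       # (row payoff, column payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
strategies = ("C", "D")

def is_nash(row, col):
    best_row = all(payoffs[(row, col)][0] >= payoffs[(r, col)][0] for r in strategies)
    best_col = all(payoffs[(row, col)][1] >= payoffs[(row, c)][1] for c in strategies)
    return best_row and best_col

equilibria = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
print(equilibria)                 # [('D', 'D')] for these payoffs
```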
The payoffs of the game are generally taken to represent the utility of individual players.
A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of a particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Economists and business professors suggest two primary uses (noted above): descriptive and prescriptive.[69]
Game theory also has an extensive use in a specific branch or stream of economics – Managerial Economics. One important usage of it in the field of managerial economics is in analyzing strategic interactions between firms.[107] For example, firms may be competing in a market with limited resources, and game theory can help managers understand how their decisions impact their competitors and the overall market outcomes. Game theory can also be used to analyze cooperation between firms, such as in forming strategic alliances or joint ventures. Another use of game theory in managerial economics is in analyzing pricing strategies. For example, firms may use game theory to determine the optimal pricing strategy based on how they expect their competitors to respond to their pricing decisions. Overall, game theory serves as a useful tool for analyzing strategic interactions and decision making in the context of managerial economics.
The Chartered Institute of Procurement & Supply (CIPS) promotes knowledge and use of game theory within the context of business procurement.[108] CIPS and TWS Partners have conducted a series of surveys designed to explore the understanding, awareness and application of game theory among procurement professionals. Some of the main findings in their third annual survey (2019) include:
Sensible decision-making is critical for the success of projects. In project management, game theory is used to model the decision-making process of players, such as investors, project managers, contractors, sub-contractors, governments and customers. Quite often, these players have competing interests, and sometimes their interests are directly detrimental to other players, making project management scenarios well-suited to be modeled by game theory.
Piraveenan (2019)[110] in his review provides several examples where game theory is used to model project management scenarios. For instance, an investor typically has several investment options, and each option will likely result in a different project, and thus one of the investment options has to be chosen before the project charter can be produced. Similarly, any large project involving subcontractors, for instance, a construction project, has a complex interplay between the main contractor (the project manager) and subcontractors, or among the subcontractors themselves, which typically has several decision points. For example, if there is an ambiguity in the contract between the contractor and subcontractor, each must decide how hard to push their case without jeopardizing the whole project, and thus their own stake in it. Similarly, when projects from competing organizations are launched, the marketing personnel have to decide what is the best timing and strategy to market the project, or its resultant product or service, so that it can gain maximum traction in the face of competition. In each of these scenarios, the required decisions depend on the decisions of other players who, in some way, have competing interests to the interests of the decision-maker, and thus can ideally be modeled using game theory.
Piraveenan[110] summarizes that two-player games are predominantly used to model project management scenarios, and based on the identity of these players, five distinct types of games are used in project management.
In terms of types of games, both cooperative as well as non-cooperative, normal-form as well as extensive-form, and zero-sum as well as non-zero-sum are used to model various project management scenarios.
The application of game theory topolitical scienceis focused in the overlapping areas offair division,political economy,public choice,war bargaining,positive political theory, andsocial choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians.[111]
Early examples of game theory applied to political science are provided byAnthony Downs. In his 1957 bookAn Economic Theory of Democracy,[112]he applies theHotelling firm location modelto the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. Downs first shows how the political candidates will converge to the ideology preferred by the median voter if voters are fully informed, but then argues that voters choose to remain rationally ignorant which allows for candidate divergence. Game theory was applied in 1962 to theCuban Missile Crisisduring the presidency of John F. Kennedy.[113]
It has also been proposed that game theory explains the stability of any form of political government. Taking the simplest case of a monarchy, for example, the king, being only one person, does not and cannot maintain his authority by personally exercising physical control over all or even any significant number of his subjects. Sovereign control is instead explained by the recognition by each citizen that all other citizens expect each other to view the king (or other established government) as the person whose orders will be followed. Coordinating communication among citizens to replace the sovereign is effectively barred, since conspiracy to replace the sovereign is generally punishable as a crime.[114]Thus, in a process that can be modeled by variants of the prisoner's dilemma, during periods of stability no citizen will find it rational to move to replace the sovereign, even if all the citizens know they would be better off if they were all to act collectively.[citation needed]
A game-theoretic explanation fordemocratic peaceis that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy.[115]
However, game theory predicts that two countries may still go to war even if their leaders are cognizant of the costs of fighting. War may result from asymmetric information; two countries may have incentives to mis-represent the amount of military resources they have on hand, rendering them unable to settle disputes agreeably without resorting to fighting. Moreover, war may arise because of commitment problems: if two countries wish to settle a dispute via peaceful means, but each wishes to go back on the terms of that settlement, they may have no choice but to resort to warfare. Finally, war may result from issue indivisibilities.[116]
Game theory could also help predict a nation's responses when there is a new rule or law to be applied to that nation. One example is Peter John Wood's (2013) research looking into what nations could do to help reduce climate change. Wood thought this could be accomplished by making treaties with other nations to reducegreenhouse gas emissions. However, he concluded that this idea could not work because it would create a prisoner's dilemma for the nations.[117]
Game theory has been used extensively to model decision-making scenarios relevant to defence applications.[118]Most studies that have applied game theory in defence settings are concerned with Command and Control Warfare, and can be further classified into studies dealing with (i) Resource Allocation Warfare, (ii) Information Warfare, (iii) Weapons Control Warfare, and (iv) Adversary Monitoring Warfare.[118]Many of the problems studied concern sensing and tracking, for example a surface ship trying to track a hostile submarine and the submarine trying to evade being tracked, and the interdependent decision making that takes place with regard to bearing, speed, and the sensor technology activated by both vessels.
One such tool,[119]for example, automates the transformation of public vulnerability data into models, allowing defenders to synthesize optimal defence strategies through Stackelberg equilibrium analysis. This approach enhances cyber resilience by enabling defenders to anticipate and counteract attackers' best responses, making game theory increasingly relevant in adversarial cybersecurity environments.
Ho et al. provide a broad summary of game theory applications in defence, highlighting its advantages and limitations across both physical and cyber domains.
Unlike those in economics, the payoffs for games inbiologyare often interpreted as corresponding tofitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best-known equilibrium in biology is known as theevolutionarily stable strategy(ESS), first introduced in (Maynard Smith & Price 1973). Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.
In biology, game theory has been used as a model to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1sex ratios. (Fisher 1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.
Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence ofanimal communication.[120]The analysis ofsignaling gamesandother communication gameshas provided insight into the evolution of communication among animals. For example, themobbing behaviorof many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion (seePaul Ormerod'sButterfly Economics).
Biologists have used thegame of chickento analyze fighting behavior and territoriality.[121]
According to Maynard Smith, in the preface toEvolution and the Theory of Games, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.[122]
One such phenomenon is known asbiological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, tovervet monkeysthat warn group members of a predator's approach, even when it endangers that individual's chance of survival.[123]All of these actions increase the overall fitness of a group, but occur at a cost to the individual.
Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary rationale behind this selection with the equation c < b × r, where the cost c to the altruist must be less than the benefit b to the recipient multiplied by the coefficient of relatedness r. The more closely related two organisms are, the more likely altruism becomes, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on through survival of its offspring, can forgo the option of having offspring itself because the same number of alleles are passed on. For example, helping a sibling (in diploid animals) has a coefficient of 1⁄2, because (on average) an individual shares half of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring.[123]The coefficient values depend heavily on the scope of the playing field; for example, if the choice of whom to favor includes all genetic living things, not just all relatives, and the discrepancy between all humans is assumed to account for only approximately 1% of the diversity in the playing field, then a coefficient that was 1⁄2 in the smaller field becomes 0.995. Similarly, if information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) is considered to persist through time, the playing field becomes larger still, and the discrepancies smaller.
Game theory has come to play an increasingly important role inlogicand incomputer science. Several logical theories have a basis ingame semantics. In addition, computer scientists have used games to modelinteractive computations. Also, game theory provides a theoretical basis to the field ofmulti-agent systems.[124]
Separately, game theory has played a role inonline algorithms; in particular, thek-server problem, which has in the past been referred to asgames with moving costsandrequest-answer games.[125]Yao's principleis a game-theoretic technique for provinglower boundson thecomputational complexityofrandomized algorithms, especially online algorithms.
The emergence of the Internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets.Algorithmic game theory[85]and within italgorithmic mechanism design[84]combine computationalalgorithm designand analysis ofcomplex systemswith economic theory.[126][127][128]
Game theory has multiple applications in the field of artificial intelligence and machine learning. It is often used in developing autonomous systems that can make complex decisions in uncertain environments.[129]Other areas where game theory is applied in the AI/ML context include multi-agent system formation, reinforcement learning,[130]and mechanism design.[131]By using game theory to model the behavior of other agents and anticipate their actions, AI/ML systems can make better decisions and operate more effectively.[132]
Game theory has been put to several uses inphilosophy. Responding to two papers byW.V.O. Quine(1960,1967),Lewis (1969)used game theory to develop a philosophical account ofconvention. In so doing, he provided the first analysis ofcommon knowledgeand employed it in analyzing play incoordination games. In addition, he first suggested that one can understandmeaningin terms ofsignaling games. This later suggestion has been pursued by several philosophers since Lewis.[133][134]FollowingLewis (1969)game-theoretic account of conventions, Edna Ullmann-Margalit (1977) andBicchieri(2006) have developed theories ofsocial normsthat define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.[135][136]
Game theory has also challenged philosophers to think in terms of interactiveepistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from the interactions of agents. Philosophers who have worked in this area include Bicchieri (1989, 1993),[137][138]Skyrms(1990),[139]andStalnaker(1999).[140]
The synthesis of game theory with ethics was championed by R. B. Braithwaite.[141]The hope was that rigorous mathematical analysis of game theory might help formalize the more imprecise philosophical discussions. However, this expectation materialized only to a limited extent.[142]
In ethics, some authors (most notably David Gauthier, Gregory Kavka, and Jean Hampton) have attempted to pursue Thomas Hobbes' project of deriving morality from self-interest. Since games like the prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see Gauthier (1986) and Kavka (1986)).[d]
Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games including the prisoner's dilemma,stag hunt, and theNash bargaining gameas providing an explanation for the emergence of attitudes about morality (see, e.g., Skyrms (1996,2004) and Sober and Wilson (1998)).
Since the decision to take a vaccine for a particular disease is often made by individuals, who may consider a range of factors and parameters in making this decision (such as the incidence and prevalence of the disease, perceived and real risks associated with contracting the disease, mortality rate, perceived and real risks associated with vaccination, and financial cost of vaccination), game theory has been used to model and predict vaccination uptake in a society.[143][144]
William Poundstonedescribed the game in his 1993 book Prisoner's Dilemma:[145]
Two members of a criminal gang, A and B, are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communication with their partner. The principal charge would lead to a sentence of ten years in prison; however, the police do not have the evidence for a conviction. They plan to sentence both to two years in prison on a lesser charge but offer each prisoner a Faustian bargain: If one of them confesses to the crime of the principal charge, betraying the other, they will be pardoned and free to leave while the other must serve the entirety of the sentence instead of just two years for the lesser charge.
The dominant strategy (and therefore the best response to any possible opponent strategy) is to betray the other, which aligns with the sure-thing principle.[146]However, both prisoners staying silent would yield a greater reward for both of them than mutual betrayal.
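As a rough illustration of this dominance argument (not part of the original description), the sentences above can be tabulated and the best response checked directly in Python; the outcome for mutual betrayal is not specified in the text, so the value of 8 years each used below is only an assumption:

# Years in prison for (A, B); lower is better. The mutual-betrayal payoff
# of 8 years each is assumed purely for illustration.
years = {
    ("silent", "silent"): (2, 2),
    ("silent", "betray"): (10, 0),
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (8, 8),   # assumed value, not given in the text
}

def best_response(opponent_action):
    # Pick the action that minimizes prisoner A's sentence given B's action.
    return min(["silent", "betray"], key=lambda a: years[(a, opponent_action)][0])

print(best_response("silent"), best_response("betray"))   # betray betray
# Betrayal is the best response to either choice, i.e. the dominant strategy,
# yet mutual silence (2, 2) leaves both better off than mutual betrayal (8, 8).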
The "battle of the sexes" is a term used to describe the perceived conflict between men and women in various areas of life, such as relationships, careers, and social roles. This conflict is often portrayed in popular culture, such as movies and television shows, as a humorous or dramatic competition between the genders. This conflict can be depicted in a game theory framework. This is an example of non-cooperative games.
An example of the "battle of the sexes" can be seen in the portrayal of relationships in popular media, where men and women are often depicted as being fundamentally different and in conflict with each other. For instance, in some romantic comedies, the male and female protagonists are shown as having opposing views on love and relationships, and they have to overcome these differences in order to be together.[147]
In this game, there are two pure strategy Nash equilibria: one where both the players choose the same strategy and the other where the players choose different options. If the game is played in mixed strategies, where each player chooses their strategy randomly, then there is an infinite number of Nash equilibria. However, in the context of the "battle of the sexes" game, the assumption is usually made that the game is played in pure strategies.[148]
The ultimatum game is a game that has become a popular instrument ofeconomic experiments. An early description is by Nobel laureateJohn Harsanyiin 1961.[149]
One player, the proposer, is endowed with a sum of money. The proposer is tasked with splitting it with another player, the responder (who knows what the total sum is). Once the proposer communicates his decision, the responder may accept it or reject it. If the responder accepts, the money is split per the proposal; if the responder rejects, both players receive nothing. Both players know in advance the consequences of the responder accepting or rejecting the offer. The game demonstrates how social acceptance, fairness, and generosity influence the players' decisions.[150]
The ultimatum game has a variant, the dictator game. The two are mostly identical, except that in the dictator game the responder has no power to reject the proposer's offer.
The trust game is an experiment designed to measure trust in economic decisions. It is also called "the investment game" and is designed to investigate trust and demonstrate its importance, rather than the "rationality" of self-interest. The game was designed by Joyce Berg, John Dickhaut and Kevin McCabe in 1995.[151]
In the game, one player (the investor) is given a sum of money and must decide how much of it to give to another player (the trustee). The amount given is then tripled by the experimenter. The trustee then decides how much of the tripled amount to return to the investor. If the trustee were completely self-interested, they would return nothing; however, this is not what the experiments find. The outcomes suggest that people are willing to place trust, by risking some amount of money, in the belief that there will be reciprocity.[152]
The Cournot competition model involves players independently and simultaneously choosing quantities of a homogeneous product to produce, where marginal cost can differ between firms and each firm's payoff is its profit. The production costs are public information, and each firm aims to find its profit-maximizing quantity based on what it believes the other firm will produce. In this game firms want to produce at the monopoly quantity, but there is a high incentive to deviate and produce more, which decreases the market-clearing price.[21]For example, firms may be tempted to deviate from the monopoly quantity if there is a low monopoly quantity and high price, with the aim of increasing production to maximize profit.[21]However this option does not provide the highest payoff, as a firm's ability to maximize profits depends on its market share and the elasticity of the market demand.[153]The Cournot equilibrium is reached when each firm operates on its reaction function with no incentive to deviate, as it has the best response to the other firm's output.[21]Within the game, firms reach the Nash equilibrium when the Cournot equilibrium is achieved.
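As a minimal numerical sketch (the demand curve and cost figures below are assumptions, not taken from the text), consider two firms facing a linear inverse demand P = a − b(q1 + q2) with constant marginal costs; iterating each firm's best-response function converges to the Cournot–Nash quantities:

# Hypothetical linear Cournot duopoly: P = a - b*(q1 + q2), constant marginal costs.
a, b = 100.0, 1.0          # assumed demand parameters
c1, c2 = 10.0, 10.0        # assumed marginal costs

def best_response(own_cost, rival_quantity):
    # Profit-maximizing quantity from the first-order condition of (P - c) * q.
    return max(0.0, (a - own_cost - b * rival_quantity) / (2 * b))

q1 = q2 = 0.0
for _ in range(100):       # best-response dynamics settle on the equilibrium
    q1, q2 = best_response(c1, q2), best_response(c2, q1)

print(round(q1, 2), round(q2, 2))   # 30.0 30.0, i.e. (a - c)/(3b) for each firm

At this point neither firm can raise its profit by unilaterally changing output, which is exactly the reaction-function condition described above.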
The Bertrand competition model assumes homogeneous products and a constant marginal cost, and players choose prices.[21]The equilibrium of price competition is where the price is equal to marginal cost, assuming complete information about the competitors' costs. Therefore, the firms have an incentive to deviate from the equilibrium because a homogeneous product with a lower price will gain all of the market share, known as a cost advantage.[154]
|
https://en.wikipedia.org/wiki/Game_theory
|
Inevolutionary computation, ahuman-based genetic algorithm(HBGA) is agenetic algorithmthat allows humans to contribute solution suggestions to the evolutionary process. For this purpose, a HBGA has human interfaces for initialization, mutation, and recombinant crossover. As well, it may have interfaces for selective evaluation. In short, a HBGA outsources the operations of a typical genetic algorithm to humans.
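As a rough sketch of this outsourcing idea (the interfaces below are hypothetical placeholders, not taken from any particular HBGA implementation), the usual genetic-algorithm loop can delegate each operator to a prompt answered by people:

import random

# Hypothetical human-in-the-loop operators: in a real HBGA these would be web
# forms or discussion interfaces answered by many contributors, not input().
def human_initialize():
    return input("Propose an initial solution: ")

def human_mutate(solution):
    return input(f"Suggest a variation of {solution!r}: ")

def human_crossover(a, b):
    return input(f"Combine {a!r} and {b!r} into one solution: ")

def human_evaluate(solution):
    return float(input(f"Rate {solution!r} from 0 to 10: "))

population = [human_initialize() for _ in range(4)]
for generation in range(3):
    ranked = sorted(population, key=human_evaluate, reverse=True)   # selective evaluation
    parents = ranked[:2]
    child = human_crossover(parents[0], parents[1])                 # recombinant crossover
    variant = human_mutate(random.choice(parents))                  # mutation
    population = parents + [child, variant]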
Among evolutionary genetic systems, HBGA is the computer-based analogue of genetic engineering (Allan, 2005).
This table compares systems on lines of human agency:
One obvious pattern in the table is the division between organic (top) and computer systems (bottom).
Another is the vertical symmetry between autonomous systems (top and bottom) and human-interactive systems (middle).
Looking to the right, theselectoris the agent that decides fitness in the system.
It determines which variations will reproduce and contribute to the next generation.
In natural populations, and in genetic algorithms, these decisions are automatic; whereas in typical HBGA systems, they are made by people.
Theinnovatoris the agent of genetic change.
The innovator mutates and recombines the genetic material, to produce the variations on which the selector operates.
In most organic and computer-based systems (top and bottom), innovation is automatic, operating without human intervention.
In HBGA, the innovators are people.
HBGA is roughly similar to genetic engineering.
In both systems, the innovators and selectors are people.
The main difference lies in the genetic material they work with: electronic data vs. polynucleotide sequences.
The HBGA methodology was derived in 1999-2000 from analysis of the Free Knowledge Exchange project that was launched in the summer of 1998, in Russia (Kosorukoff, 1999). Human innovation and evaluation were used in support of collaborative problem solving. Users were also free to choose the next genetic operation to perform. Currently, several other projects implement the same model, the most popular beingYahoo! Answers, launched in December 2005.
Recent research suggests that human-based innovation operators are advantageous not only where it is hard to design an efficient computational mutation and/or crossover (e.g. when evolving solutions in natural language), but also in the case where good computational innovation operators are readily available, e.g. when evolving an abstract picture or colors (Cheng and Kosorukoff, 2004). In the latter case, human and computational innovation can complement each other, producing cooperative results and improving general user experience by ensuring that spontaneous creativity of users will not be lost.
Furthermore, human-based genetic algorithms prove to be a successful measure to counteract fatigue effects introduced byinteractive genetic algorithms.[1]
|
https://en.wikipedia.org/wiki/Human-based_genetic_algorithm
|
Hybrid intelligent systemdenotes a software system which employs, in parallel, a combination of methods and techniques from artificial intelligence subfields, such as:
From the cognitive science perspective, every natural intelligent system is hybrid because it performs mental operations on both the symbolic and subsymbolic levels. For the past few years, there has been an increasing discussion of the importance of A.I. Systems Integration. This is based on the notion that simple and specific AI systems (such as systems for computer vision, speech synthesis, etc., or software that employs some of the models mentioned above) have already been created, and that the time has come to integrate them into broad AI systems. Proponents of this approach are researchers such as Marvin Minsky, Ron Sun, Aaron Sloman, Angelo Dalli and Michael A. Arbib.
An example hybrid is ahierarchical control systemin which the lowest, reactive layers are sub-symbolic. The higher layers, having relaxed time constraints, are capable of reasoning from an abstract world model and performingplanning.
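A minimal sketch of such a layered hybrid, assuming a toy robot domain (all names and rules below are illustrative, not drawn from any particular system): a fast reactive layer maps sensor readings directly to actions, and a slower symbolic layer plans over an abstract world model whenever the reactive layer defers.

from typing import List, Optional

def reactive_layer(distance_to_obstacle: float) -> Optional[str]:
    # Sub-symbolic, hard real-time rule: stop immediately if an obstacle is close.
    if distance_to_obstacle < 0.5:
        return "stop"
    return None   # no urgent reaction; defer to the deliberative layer

def planner(goal: str, world_model: dict) -> List[str]:
    # Symbolic layer: reasons over an abstract world model with relaxed time constraints.
    if world_model.get("door_open"):
        return ["go_to_door", "pass_door", f"go_to_{goal}"]
    return ["go_to_door", "open_door", "pass_door", f"go_to_{goal}"]

def hybrid_controller(sensors: dict, goal: str, world_model: dict) -> str:
    urgent = reactive_layer(sensors["distance"])
    if urgent is not None:
        return urgent                      # the reactive layer always has priority
    return planner(goal, world_model)[0]   # otherwise execute the next planned step

print(hybrid_controller({"distance": 2.0}, "kitchen", {"door_open": False}))   # go_to_door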
Intelligent systems usually rely on hybrid reasoning processes, which includeinduction,deduction,abductionand reasoning byanalogy.
|
https://en.wikipedia.org/wiki/Hybrid_intelligent_system
|
The Knowledge Query and Manipulation Language, or KQML, is a language and protocol for communication among software agents and knowledge-based systems.[1]It was developed in the early 1990s as part of the DARPA Knowledge Sharing Effort, which was aimed at developing techniques for building large-scale knowledge bases which are shareable and reusable. While originally conceived of as an interface to knowledge-based systems, it was soon repurposed as an agent communication language.[2][3]
Work on KQML was led byTim Fininof theUniversity of Maryland, Baltimore Countyand Jay Weber of EITech and involved contributions from many researchers.
The KQML message format and protocol can be used to interact with an intelligent system, either by anapplication program, or by another intelligent system. KQML's "performatives" are operations that agents perform on each other's knowledge and goal stores. Higher-level interactions such ascontract netsand negotiation are built using these. KQML's "communication facilitators" coordinate the interactions of otheragentsto supportknowledge sharing.
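For illustration (the agent names, ontology and query below are hypothetical), a KQML message is conventionally written as a Lisp-like expression whose first symbol is the performative, followed by keyword parameters such as :sender, :receiver, :content, :language and :ontology; the sketch below simply assembles such a message as a string:

def kqml_message(performative: str, **params: str) -> str:
    # KQML messages are Lisp-like s-expressions: (performative :key value ...).
    fields = " ".join(f":{key.replace('_', '-')} {value}" for key, value in params.items())
    return f"({performative} {fields})"

# One agent asking another a single question (all names are illustrative).
msg = kqml_message(
    "ask-one",
    sender="buyer-agent",
    receiver="seller-agent",
    reply_with="q1",
    language="KIF",
    ontology="commerce",
    content='"(PRICE widget ?price)"',
)
print(msg)   # (ask-one :sender buyer-agent :receiver seller-agent :reply-with q1 ...)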
Experimental prototype systems support concurrent engineering, intelligent design, intelligent planning, and scheduling.
KQML is superseded byFIPA-ACL.
|
https://en.wikipedia.org/wiki/Knowledge_Query_and_Manipulation_Language
|
Microbial intelligence(known asbacterial intelligence) is theintelligenceshown bymicroorganisms. This includes complex adaptive behavior shown bysingle cells, andaltruisticorcooperative behaviorin populations of like or unlike cells. It is often mediated by chemical signalling that induces physiological or behavioral changes in cells and influences colony structures.[1]
Complex cells, likeprotozoaoralgae, show remarkable abilities to organize themselves in changing circumstances.[2]Shell-building by amoebae reveals complex discrimination and manipulative skills that are ordinarily thought to occur only in multicellular organisms.
Even bacteria can display more complex behavior as a population. These behaviors occur in single-species populations or mixed-species populations. Examples are colonies or swarms of myxobacteria, quorum sensing, and biofilms.[1][3]
It has been suggested that a bacterial colony loosely mimics a biological neural network. The bacteria can take inputs in the form of chemical signals, process them and then produce output chemicals to signal other bacteria in the colony.
Bacteria communication and self-organization in the context ofnetwork theoryhas been investigated byEshel Ben-Jacobresearch group atTel Aviv Universitywhich developed afractalmodel of bacterial colony and identified linguistic and social patterns in colony lifecycle.[4]
Bacterial colony optimizationis analgorithmused inevolutionary computing. The algorithm is based on a lifecycle model that simulates some typical behaviors ofE. colibacteria during their whole lifecycle, including chemotaxis, communication, elimination, reproduction, and migration.
Logical circuits can be built with slime moulds.[17]Distributed systems experiments have used them to approximate motorway graphs.[18]The slime mouldPhysarum polycephalumis able to solve theTraveling Salesman Problem, a combinatorial test with exponentially increasing complexity, inlinear time.[19]
Microbial community intelligence is found insoil ecosystemsin the form of interacting adaptive behaviors and metabolisms.[20]According to Ferreira et al., "Soil microbiota has its own unique capacity to recover from change and to adapt to the present state[...] [This] capacity to recover from change and to adapt to the present state by altruistic, cooperative and co-occurring behavior is considered a key attribute of microbial community intelligence."[21]
Many bacteria that exhibit complex behaviors or coordination are heavily present in soil in the form of biofilms.[1]Micropredators that inhabit soil, including social predatory bacteria, have significant implications for its ecology. Soil biodiversity, managed in part by these micropredators, is of significant importance for carbon cycling and ecosystem functioning.[22]
The complicated interaction of microbes in the soil has been proposed as a potentialcarbon sink.Bioaugmentationhas been suggested as a method to increase the 'intelligence' of microbial communities, that is, adding the genomes ofautotrophic,carbon-fixingornitrogen-fixingbacteria to theirmetagenome.[20]
Bacterial transformation is a form of microbial intelligence that involves complex adaptive cooperative behavior. About 80 species of bacteria have so far been identified that are likely capable of transformation, including about equal numbers of Gram-positive and Gram-negative bacteria.[23]
V. choleraehas the ability to communicate strongly at the cellular level for the purpose of bacterial transformation, and this form of microbial intelligence involves cooperative quorum-sensing.[24][25]Two different stimuli that are encountered in the small intestine, the absence of oxygen and the presence of host-producedbile salts, stimulateV. choleraequorum sensing and thus its pathogenicity.[26]Cooperative quorum sensing, involving microbial intelligence, facilitates naturalgenetic transformation, a process in which extracellular DNA is taken up by (competent)V. choleraecells.[27]V. choleraeis a bacterial pathogen that causescholerawith severe contagious diarrhea that affects millions of people globally.
S. pneumoniaeuses a cooperative complex quorum sensing system, a form of microbial intelligence, for regulating the release ofbacteriocinsas well as for differentiating into thecompetent statenecessary for naturalgenetic transformation.[28]The competent state is induced by a peptidepheromone.[29]The induction of competence results in the release ofDNAfrom a sub-fraction ofS. pneumoniaecells in the population, probably by cell lysis. Subsequently the majority of theS. pneumoniaecells that have been induced to competence act as recipients and take up the DNA that is released by the donors.[29]Natural transformation inS. pneumoniaeis an adaptive form of microbial intelligence for promotinggenetic recombinationthat appears to be similar tosexin higher organisms.[29]S. pneumoniaeis responsible for the death of more than a million people yearly.[30]
|
https://en.wikipedia.org/wiki/Microbial_intelligence
|
Incomputer sciencemulti-agent planninginvolves coordinating the resources and activities of multipleagents.
NASAsays, "multiagent planning is concerned withplanningby (and for) multiple agents. It can involve agents planning for a common goal, an agent coordinating the plans (plan merging) or planning of others, or agents refining their own plans while negotiating over tasks or resources. The topic also involves how agents can do this in real time while executing plans (distributed continual planning). Multiagent scheduling differs from multiagent planning the same way planning and scheduling differ: in scheduling often the tasks that need to be performed are already decided, and in practice, scheduling tends to focus on algorithms for specific problem domains".[1]
|
https://en.wikipedia.org/wiki/Multi-agent_planning
|
Multi-agent reinforcement learning (MARL)is a sub-field ofreinforcement learning. It focuses on studying the behavior of multiple learning agents that coexist in a shared environment.[1]Each agent is motivated by its own rewards, and does actions to advance its own interests; in some environments these interests are opposed to the interests of other agents, resulting in complexgroup dynamics.
Multi-agent reinforcement learning is closely related togame theoryand especiallyrepeated games, as well asmulti-agent systems. Its study combines the pursuit of finding ideal algorithms that maximize rewards with a more sociological set of concepts. While research in single-agent reinforcement learning is concerned with finding the algorithm that gets the biggest number of points for one agent, research in multi-agent reinforcement learning evaluates and quantifies social metrics, such as cooperation,[2]reciprocity,[3]equity,[4]social influence,[5]language[6]and discrimination.[7]
Similarly to single-agent reinforcement learning, multi-agent reinforcement learning is modeled as some form of a Markov decision process (MDP). Fix a set of agents I = {1, ..., N}. We then define:
In settings withperfect information, such as the games ofchessandGo, the MDP would be fully observable. In settings with imperfect information, especially in real-world applications likeself-driving cars, each agent would access an observation that only has part of the information about the current state. In the partially observable setting, the core model is the partially observablestochastic gamein the general case, and thedecentralized POMDPin the cooperative case.
When multiple agents are acting in a shared environment their interests might be aligned or misaligned. MARL allows exploring all the different alignments and how they affect the agents' behavior:
When two agents are playing azero-sum game, they are in pure competition with each other. Many traditional games such aschessandGofall under this category, as do two-player variants of video games likeStarCraft. Because each agent can only win at the expense of the other agent, many complexities are stripped away. There is no prospect of communication or social dilemmas, as neither agent is incentivized to take actions that benefit its opponent.
TheDeep Blue[8]andAlphaGoprojects demonstrate how to optimize the performance of agents in pure competition settings.
One complexity that is not stripped away in pure competition settings isautocurricula. As the agents' policy is improved usingself-play, multiple layers of learning may occur.
MARL is used to explore how separate agents with identical interests can communicate and work together. Pure cooperation settings are explored in recreationalcooperative gamessuch asOvercooked,[9]as well as real-world scenarios inrobotics.[10]
In pure cooperation settings all the agents get identical rewards, which means that social dilemmas do not occur.
In pure cooperation settings, oftentimes there are an arbitrary number of coordination strategies, and agents converge to specific "conventions" when coordinating with each other. The notion of conventions has been studied in language[11]and also alluded to in more general multi-agent collaborative tasks.[12][13][14][15]
Most real-world scenarios involving multiple agents have elements of both cooperation and competition. For example, when multipleself-driving carsare planning their respective paths, each of them has interests that are diverging but not exclusive: Each car is minimizing the amount of time it's taking to reach its destination, but all cars have the shared interest of avoiding atraffic collision.[17]
Zero-sum settings with three or more agents often exhibit similar properties to mixed-sum settings, since each pair of agents might have a non-zero utility sum between them.
Mixed-sum settings can be explored using classicmatrix gamessuch asprisoner's dilemma, more complexsequential social dilemmas, and recreational games such asAmong Us,[18]Diplomacy[19]andStarCraft II.[20][21]
Mixed-sum settings can give rise to communication and social dilemmas.
As ingame theory, much of the research in MARL revolves aroundsocial dilemmas, such asprisoner's dilemma,[22]chickenandstag hunt.[23]
While game theory research might focus onNash equilibriaand what an ideal policy for an agent would be, MARL research focuses on how the agents would learn these ideal policies using a trial-and-error process. Thereinforcement learningalgorithms that are used to train the agents are maximizing the agent's own reward; the conflict between the needs of the agents and the needs of the group is a subject of active research.[24]
Various techniques have been explored in order to induce cooperation in agents: Modifying the environment rules,[25]adding intrinsic rewards,[4]and more.
Social dilemmas like prisoner's dilemma, chicken and stag hunt are "matrix games". Each agent takes only one action from a choice of two possible actions, and a simple 2x2 matrix is used to describe the reward that each agent will get, given the actions that each agent took.
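As an illustration of such a matrix game (the payoff numbers are a common textbook convention chosen here for the example, not values from the text), a stag hunt can be written as a 2x2 payoff table and its pure-strategy Nash equilibria found by checking best responses:

from itertools import product

# Stag hunt payoffs as (row player, column player); illustrative numbers.
actions = ["stag", "hare"]
payoff = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

def is_nash(a_row, a_col):
    # Neither player can gain by unilaterally switching to another action.
    row_ok = all(payoff[(a_row, a_col)][0] >= payoff[(alt, a_col)][0] for alt in actions)
    col_ok = all(payoff[(a_row, a_col)][1] >= payoff[(a_row, alt)][1] for alt in actions)
    return row_ok and col_ok

print([cell for cell in product(actions, actions) if is_nash(*cell)])
# [('stag', 'stag'), ('hare', 'hare')] -- cooperation and defection both self-enforcing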
In humans and other living creatures, social dilemmas tend to be more complex. Agents take multiple actions over time, and the distinction between cooperating and defecting is not as clear cut as in matrix games. The concept of asequential social dilemma (SSD)was introduced in 2017[26]as an attempt to model that complexity. There is ongoing research into defining different kinds of SSDs and showing cooperative behavior in the agents that act in them.[27]
An autocurriculum[28](plural: autocurricula) is a reinforcement learning concept that's salient in multi-agent experiments. As agents improve their performance, they change their environment; this change in the environment affects themselves and the other agents. The feedback loop results in several distinct phases of learning, each depending on the previous one. The stacked layers of learning are called an autocurriculum. Autocurricula are especially apparent in adversarial settings,[29]where each group of agents is racing to counter the current strategy of the opposing group.
TheHide and Seek gameis an accessible example of an autocurriculum occurring in an adversarial setting. In this experiment, a team of seekers is competing against a team of hiders. Whenever one of the teams learns a new strategy, the opposing team adapts its strategy to give the best possible counter. When the hiders learn to use boxes to build a shelter, the seekers respond by learning to use a ramp to break into that shelter. The hiders respond by locking the ramps, making them unavailable for the seekers to use. The seekers then respond by "box surfing", exploiting aglitchin the game to penetrate the shelter. Each "level" of learning is an emergent phenomenon, with the previous level as its premise. This results in a stack of behaviors, each dependent on its predecessor.
Autocurricula in reinforcement learning experiments are compared to the stages of theevolution of life on Earthand the development ofhuman culture. A major stage in evolution happened 2-3 billion years ago, whenphotosynthesizing life formsstarted to produce massive amounts ofoxygen, changing the balance of gases in the atmosphere.[30]In the next stages of evolution, oxygen-breathing life forms evolved, eventually leading up to landmammalsand human beings. These later stages could only happen after the photosynthesis stage made oxygen widely available. Similarly, human culture could not have gone through theIndustrial Revolutionin the 18th century without the resources and insights gained by theagricultural revolutionat around 10,000 BC.[31]
Multi-agent reinforcement learning has been applied to a variety of use cases in science and industry:
Multi-agent reinforcement learning has been used in research intoAI alignment. The relationship between the different agents in a MARL setting can be compared to the relationship between a human and an AI agent. Research efforts in the intersection of these two fields attempt to simulate possible conflicts between a human's intentions and an AI agent's actions, and then explore which variables could be changed to prevent these conflicts.[45][46]
There are some inherent difficulties about multi-agentdeep reinforcement learning.[47]The environment is not stationary anymore, thus theMarkov propertyis violated: transitions and rewards do not only depend on the current state of an agent.
|
https://en.wikipedia.org/wiki/Multi-agent_reinforcement_learning
|
Pattern-oriented modeling(POM) is an approach to bottom-upcomplex systemsanalysisthat was developed tomodelcomplex ecological andagent-basedsystems. A goal of POM is to make ecological modeling more rigorous and comprehensive.[1]
A traditional ecosystem model attempts to approximate the real system as closely as possible. POM proponents posit that an ecosystem is so information-rich that an ecosystem model will inevitably either leave out relevant information or become over-parameterized and lose predictive power.[2]Through a focus on only the relevant patterns in the real system, POM offers a meaningful alternative to the traditional approach.
An attempt to mimic thescientific method, POM requires the researcher to begin with a pattern found in the real system, posit hypotheses to explain the pattern, and then develop predictions that can be tested. A model used to determine the original pattern may not be used to test the researcher's predictions. Through this focus on the pattern, the model can be constructed to include only information relevant to the question at hand.[3]
POM is also characterized by an effort to identify the appropriatetemporalandspatial scaleat which to study a pattern, and to avoid the assumption that a single process might explain a pattern at multiple temporal or spatial scales. It does, however, offer the opportunity to look explicitly at how processes at multiple scales might be driving a particular pattern.[2]
A look at the trade-offs between model complexity and payoff can be considered in the framework of theMedawar zone. The model is considered too simple if it addresses a single problem (e.g., the explanation behind a single pattern), whereas it will be considered too complex if it incorporates all the available biological data. The Medawar zone, where the payoff in what is learned is greatest, is at an intermediate level of model complexity.
Pattern-oriented modeling has been used to test a priori hypotheses on how herdsmen decide which farmers to contract with when grazing their cattle. Herdsman behavior followed the pattern predicted by a 'friend' rather than a 'cost' priority hypothesis.[2]
|
https://en.wikipedia.org/wiki/Pattern-oriented_modeling
|
PlatBox Project, formally known asBoxed Economy Project, is amulti-agentbasedcomputer simulationsoftware developmentproject founded by Iba Laboratory atKeio University, Japan. The main work of PlatBox Project is to develop PlatBox Simulator and Component Builder, which are claimed to be the first multi-agent computer simulation software that do not require end-users to have any computer programming skill in order to create and execute multi-agent computer simulation models. Currently, the project is organized by Takashi Iba, assistant professor from Keio University, and Nozomu Aoyama. PlatBox Simulator and Component Builder are currently offered only in Japanese; however, the English version is expected to be out anytime soon.
PlatBox Simulator is a multi-agent based simulationplatformdeveloped by PlatBox Project.
ComponentBuilder is a multi-agent based simulation modeling tool developed by PlatBox Project.
|
https://en.wikipedia.org/wiki/PlatBox_Project
|
Reinforcement learning(RL) is an interdisciplinary area ofmachine learningandoptimal controlconcerned with how anintelligent agentshouldtake actionsin a dynamic environment in order tomaximize a rewardsignal. Reinforcement learning is one of thethree basic machine learning paradigms, alongsidesupervised learningandunsupervised learning.
Reinforcement learning differs from supervised learning in not needing labelled input-output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge) with the goal of maximizing the cumulative reward (the feedback of which might be incomplete or delayed).[1]The search for this balance is known as theexploration–exploitation dilemma.
The environment is typically stated in the form of aMarkov decision process(MDP), as many reinforcement learning algorithms usedynamic programmingtechniques.[2]The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the Markov decision process, and they target large MDPs where exact methods become infeasible.[3]
Due to its generality, reinforcement learning is studied in many disciplines, such asgame theory,control theory,operations research,information theory,simulation-based optimization,multi-agent systems,swarm intelligence, andstatistics. In the operations research and control literature, RL is calledapproximate dynamic programming, orneuro-dynamic programming.The problems of interest in RL have also been studied in thetheory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation (particularly in the absence of a mathematical model of the environment).
Basic reinforcement learning is modeled as aMarkov decision process:
The purpose of reinforcement learning is for the agent to learn an optimal (or near-optimal) policy that maximizes the reward function or other user-provided reinforcement signal that accumulates from immediate rewards. This is similar toprocessesthat appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals learn to adopt behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning.[4][5]
A basic reinforcement learning agent interacts with its environment in discrete time steps. At each time step t, the agent receives the current state S_t and reward R_t. It then chooses an action A_t from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state S_{t+1} and the reward R_{t+1} associated with the transition (S_t, A_t, S_{t+1}) is determined. The goal of a reinforcement learning agent is to learn a policy:
π : S × A → [0, 1], π(s, a) = Pr(A_t = a ∣ S_t = s)
that maximizes the expected cumulative reward.
Formulating the problem as a Markov decision process assumes the agent directly observes the current environmental state; in this case, the problem is said to havefull observability. If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to havepartial observability, and formally the problem must be formulated as apartially observable Markov decision process. In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and the state transition attempts to reduce the value by 4, the transition will not be allowed.
When the agent's performance is compared to that of an agent that acts optimally, the difference in performance yields the notion ofregret. In order to act near optimally, the agent must reason about long-term consequences of its actions (i.e., maximize future rewards), although the immediate reward associated with this might be negative.
Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, includingenergy storage,[6]robot control,[7]photovoltaic generators,[8]backgammon,checkers,[9]Go(AlphaGo), andautonomous driving systems.[10]
Two elements make reinforcement learning powerful: the use of samples to optimize performance, and the use offunction approximationto deal with large environments. Thanks to these two key components, RL can be used in large environments in the following situations:
The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems tomachine learningproblems.
The exploration vs. exploitation trade-off has been most thoroughly studied through themulti-armed banditproblem and for finite state space Markov decision processes in Burnetas and Katehakis (1997).[12]
Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite Markov decision processes is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical.
One such method is ε-greedy, where 0 < ε < 1 is a parameter controlling the amount of exploration vs. exploitation. With probability 1 − ε, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability ε, exploration is chosen, and the action is chosen uniformly at random. ε is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics.[13]
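A minimal sketch of ε-greedy selection over a table of estimated action values (the table and the value of ε below are placeholders):

import random

def epsilon_greedy(q_values: dict, epsilon: float = 0.1) -> str:
    # With probability epsilon, explore: pick any action uniformly at random.
    if random.random() < epsilon:
        return random.choice(list(q_values))
    # Otherwise exploit: pick an action with the highest estimated value,
    # breaking ties uniformly at random.
    best = max(q_values.values())
    return random.choice([a for a, v in q_values.items() if v == best])

print(epsilon_greedy({"left": 0.2, "right": 0.7, "stay": 0.7}))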
Even if the issue of exploration is disregarded and even if the state was observable (assumed hereafter), the problem remains to use past experience to find out which actions lead to higher cumulative rewards.
The agent's action selection is modeled as a map calledpolicy:
The policy map gives the probability of taking action a when in state s.[14]: 61 There are also deterministic policies π for which π(s) denotes the action that should be played at state s.
The state-value function V_π(s) is defined as the expected discounted return starting with state s, i.e. S_0 = s, and successively following policy π. Hence, roughly speaking, the value function estimates "how good" it is to be in a given state:[14]: 60
V_π(s) = E[G ∣ S_0 = s],
where the random variable G denotes the discounted return, defined as the sum of future discounted rewards:
G = Σ_{t=0}^{∞} γ^t R_{t+1},
where R_{t+1} is the reward for transitioning from state S_t to S_{t+1}, and 0 ≤ γ < 1 is the discount rate; because γ is less than 1, rewards in the distant future are weighted less than rewards in the immediate future.
The algorithm must find a policy with maximum expected discounted return. From the theory of Markov decision processes it is known that, without loss of generality, the search can be restricted to the set of so-calledstationarypolicies. A policy isstationaryif the action-distribution returned by it depends only on the last state visited (from the observation agent's history). The search can be further restricted todeterministicstationary policies. Adeterministic stationarypolicy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality.
Thebrute forceapproach entails two steps:
One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the discounted return of each policy.
These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this arevalue function estimationanddirect policy search.
Value function approaches attempt to find a policy that maximizes the discounted return by maintaining a set of estimates of expected discounted returns E[G] for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one).
These methods rely on the theory of Markov decision processes, where optimality is defined in a sense stronger than the one above: A policy is optimal if it achieves the best-expected discounted return fromanyinitial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found among stationary policies.
To define optimality in a formal manner, define the state-value of a policy π by
V^π(s) = E[G ∣ s, π],
where G stands for the discounted return associated with following π from the initial state s. Defining V*(s) as the maximum possible state-value of V^π(s), where π is allowed to change,
V*(s) = max_π V^π(s).
A policy that achieves these optimal state-values in each state is called optimal. Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since V*(s) = max_π E[G ∣ s, π], where s is a state randomly sampled from the distribution μ of initial states (so μ(s) = Pr(S_0 = s)).
Although state-values suffice to define optimality, it is useful to define action-values. Given a state s, an action a and a policy π, the action-value of the pair (s, a) under π is defined by
Q^π(s, a) = E[G ∣ s, a, π],
where G now stands for the random discounted return associated with first taking action a in state s and following π thereafter.
The theory of Markov decision processes states that if π* is an optimal policy, we act optimally (take the optimal action) by choosing the action from Q^{π*}(s, ·) with the highest action-value at each state s. The action-value function of such an optimal policy (Q^{π*}) is called the optimal action-value function and is commonly denoted by Q*. In summary, the knowledge of the optimal action-value function alone suffices to know how to act optimally.
Assuming full knowledge of the Markov decision process, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions Q_k (k = 0, 1, 2, …) that converge to Q*. Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest (finite) Markov decision processes. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces.
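A minimal value-iteration sketch on a tiny, made-up MDP (the transition table, rewards and discount factor below are placeholders), computing Q* by repeatedly applying the Bellman optimality backup:

# Toy MDP: transitions[state][action] = list of (probability, next_state, reward).
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)], "go": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)], "go": [(1.0, "s0", 0.0)]},
}
gamma = 0.9

# Start from Q_0 = 0 and iterate the Bellman optimality backup.
Q = {s: {a: 0.0 for a in acts} for s, acts in transitions.items()}
for _ in range(200):
    Q = {
        s: {
            a: sum(p * (r + gamma * max(Q[s2].values())) for p, s2, r in outcomes)
            for a, outcomes in acts.items()
        }
        for s, acts in transitions.items()
    }

greedy_policy = {s: max(q, key=q.get) for s, q in Q.items()}
print(greedy_policy)   # {'s0': 'go', 's1': 'stay'} for these made-up numbers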
Monte Carlo methods[15]are used to solve reinforcement learning problems by averaging sample returns. Unlike methods that require full knowledge of the environment's dynamics, Monte Carlo methods rely solely on actual orsimulatedexperience—sequences of states, actions, and rewards obtained from interaction with an environment. This makes them applicable in situations where the complete dynamics are unknown. Learning from actual experience does not require prior knowledge of the environment and can still lead to optimal behavior. When using simulated experience, only a model capable of generating sample transitions is required, rather than a full specification oftransition probabilities, which is necessary fordynamic programmingmethods.
Monte Carlo methods apply to episodic tasks, where experience is divided into episodes that eventually terminate. Policy and value function updates occur only after the completion of an episode, making these methods incremental on an episode-by-episode basis, though not on a step-by-step (online) basis. The term "Monte Carlo" generally refers to any method involvingrandom sampling; however, in this context, it specifically refers to methods that compute averages fromcompletereturns, rather thanpartialreturns.
These methods function similarly to thebandit algorithms, in which returns are averaged for each state-action pair. The key difference is that actions taken in one state affect the returns of subsequent states within the same episode, making the problemnon-stationary. To address this non-stationarity, Monte Carlo methods use the framework of general policy iteration (GPI). While dynamic programming computesvalue functionsusing full knowledge of theMarkov decision process(MDP), Monte Carlo methods learn these functions through sample returns. The value functions and policies interact similarly to dynamic programming to achieveoptimality, first addressing the prediction problem and then extending to policy improvement and control, all based on sampled experience.[14]
The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of generalized policy iteration algorithms. Many actor-critic methods belong to this category.
The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods that are based on the recursive Bellman equation.[16][17] The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method,[18] may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue.
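For concreteness, a minimal sketch of an incremental (online) TD(0) update for state values is shown below; the table V, the step size alpha and the discount gamma are assumed names, and the transition (s, r, s_next) would come from real or simulated experience.

```python
# Incremental (online) TD(0) update: after each observed transition the
# estimate is adjusted and the transition can be discarded.
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    td_error = r + gamma * V[s_next] - V[s]   # one-step temporal-difference error
    V[s] += alpha * td_error
    return V

V = {0: 0.0, 1: 0.0}
td0_update(V, s=0, r=1.0, s_next=1)           # example usage on a single transition
```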
Another problem specific to TD comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called λ{\displaystyle \lambda } parameter (0≤λ≤1){\displaystyle (0\leq \lambda \leq 1)} that can continuously interpolate between Monte Carlo methods that do not rely on the Bellman equations and the basic TD methods that rely entirely on the Bellman equations. This can be effective in palliating this issue.
In order to address the fifth issue, function approximation methods are used. Linear function approximation starts with a mapping ϕ{\displaystyle \phi } that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair (s,a){\displaystyle (s,a)} are obtained by linearly combining the components of ϕ(s,a){\displaystyle \phi (s,a)} with some weights θ{\displaystyle \theta }:

{\displaystyle Q(s,a)=\sum _{i}\theta _{i}\phi _{i}(s,a).}
The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored.
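A minimal sketch of linear action-value approximation is shown below; the particular feature map phi and its features are illustrative assumptions rather than a prescribed choice.

```python
import numpy as np

# Linear action-value approximation: Q(s, a) is the dot product of a weight
# vector theta with a hand-crafted feature vector phi(s, a).
def phi(s, a):
    return np.array([1.0, s, a, s * a])   # bias term plus simple features (assumed)

theta = np.zeros(4)                        # weights adjusted by the learning algorithm

def q_hat(s, a, theta):
    return theta @ phi(s, a)               # linear combination of features and weights
```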
Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants.[19] This includes deep Q-learning methods, in which a neural network is used to represent Q, with various applications in stochastic search problems.[20]
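A minimal sketch of the tabular Q-learning update is given below, assuming Q is stored as a dictionary of per-state action-value dictionaries; the step size and discount are illustrative.

```python
# Tabular Q-learning update (off-policy, value-iteration flavour).
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    best_next = max(Q[s_next].values())                 # greedy bootstrap target
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q

Q = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0, 1: 0.0}}
q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)         # example usage
```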
The problem with using action-values is that they may need highly precise estimates of the competing action values that can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency.
An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods.
Gradient-based methods (policy gradient methods) start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector θ{\displaystyle \theta }, let πθ{\displaystyle \pi _{\theta }} denote the policy associated to θ{\displaystyle \theta }. Defining the performance function by ρ(θ)=ρπθ{\displaystyle \rho (\theta )=\rho ^{\pi _{\theta }}}, under mild conditions this function will be differentiable as a function of the parameter vector θ{\displaystyle \theta }. If the gradient of ρ{\displaystyle \rho } were known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams's REINFORCE method[21] (which is known as the likelihood ratio method in the simulation-based optimization literature).[22]
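The following is a minimal sketch of a REINFORCE-style gradient estimate for a softmax policy in a toy, bandit-like setting; the reward model, step size and seeds are assumptions for illustration only.

```python
import numpy as np

# REINFORCE-style stochastic gradient ascent on rho(theta) for a softmax policy
# over three actions; the noisy "return" model is purely illustrative.
rng = np.random.default_rng(0)
theta = np.zeros(3)          # one preference per action (the policy parameters)
alpha = 0.05

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(1000):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)                     # sample an action from pi_theta
    G = [0.0, 0.5, 1.0][a] + rng.normal(0, 0.1)    # noisy return for that action
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0                          # gradient of log pi_theta(a)
    theta += alpha * G * grad_log_pi               # noisy gradient ascent step
```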
A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search or methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum.
Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, actor–critic methods have been proposed and performed well on various problems.[23]
Policy search methods have been used in the robotics context.[24] Many policy search methods may get stuck in local optima (as they are based on local search).
Finally, all of the above methods can be combined with algorithms that first learn a model of the Markov decision process, the probability of each next state given an action taken from an existing state. For instance, the Dyna algorithm learns a model from experience, and uses that to provide more modelled transitions for a value function, in addition to the real transitions.[25] Such methods can sometimes be extended to use of non-parametric models, such as when the transitions are simply stored and "replayed" to the learning algorithm.[26]
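A minimal Dyna-style sketch is shown below, assuming a deterministic tabular model and a binary action set; the function and parameter names (dyna_q_step, n_planning) are illustrative rather than part of the original algorithm's specification.

```python
import random

# Dyna-style step: learn from a real transition, store it in a deterministic
# tabular model, then replay a few modelled transitions as extra updates.
ACTIONS = (0, 1)

def q_update(Q, s, a, r, s_next, alpha, gamma):
    target = r + gamma * max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))

def dyna_q_step(Q, model, s, a, r, s_next, alpha=0.1, gamma=0.9, n_planning=5):
    q_update(Q, s, a, r, s_next, alpha, gamma)   # learn from the real transition
    model[(s, a)] = (r, s_next)                  # model learning
    for _ in range(n_planning):                  # planning from modelled transitions
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        q_update(Q, ps, pa, pr, ps_next, alpha, gamma)
    return Q, model
```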
Model-based methods can be more computationally intensive than model-free approaches, and their utility can be limited by the extent to which the Markov decision process can be learnt.[27]
There are other ways to use models than to update a value function.[28] For instance, in model predictive control the model is used to update the behavior directly.
Both the asymptotic and finite-sample behaviors of most algorithms are well understood. Algorithms with provably good online performance (addressing the exploration issue) are known.
Efficient exploration of Markov decision processes is given in Burnetas and Katehakis (1997).[12]Finite-time performance bounds have also appeared for many algorithms, but these bounds are expected to be rather loose and thus more work is needed to better understand the relative advantages and limitations.
For incremental algorithms, asymptotic convergence issues have been settled. Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation).
Research topics include:
The following table lists the key algorithms for learning a policy depending on several criteria:
Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment.[50]
This approach extends reinforcement learning by using a deep neural network, without explicitly designing the state space.[51] The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning.[52]
Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. In this research area some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations.[53][54][55]While some methods have been proposed to overcome these susceptibilities, in the most recent studies it has been shown that these proposed solutions are far from providing an accurate representation of current vulnerabilities of deep reinforcement learning policies.[56]
By introducing fuzzy inference in reinforcement learning,[57] approximating the state-action value function with fuzzy rules in continuous space becomes possible. The IF–THEN form of fuzzy rules makes this approach suitable for expressing the results in a form close to natural language. Extending FRL with Fuzzy Rule Interpolation[58] allows the use of reduced-size sparse fuzzy rule-bases to emphasize cardinal rules (most important state-action values).
In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal.[59]One popular IRL paradigm is named maximum entropy inverse reinforcement learning (MaxEnt IRL).[60]MaxEnt IRL estimates the parameters of a linear model of the reward function by maximizing the entropy of the probability distribution of observed trajectories subject to constraints related to matching expected feature counts. Recently it has been shown that MaxEnt IRL is a particular case of a more general framework named random utility inverse reinforcement learning (RU-IRL).[61]RU-IRL is based onrandom utility theoryand Markov decision processes. While prior IRL approaches assume that the apparent random behavior of an observed agent is due to it following a random policy, RU-IRL assumes that the observed agent follows a deterministic policy but randomness in observed behavior is due to the fact that an observer only has partial access to the features the observed agent uses in decision making. The utility function is modeled as a random variable to account for the ignorance of the observer regarding the features the observed agent actually considers in its utility function.
Multi-objective reinforcement learning (MORL) is a form of reinforcement learning concerned with conflicting alternatives. It is distinct from multi-objective optimization in that it is concerned with agents acting in environments.[62][63]
Safe reinforcement learning (SRL) can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes.[64] An alternative approach is risk-averse reinforcement learning, where instead of the expected return, a risk-measure of the return is optimized, such as the conditional value at risk (CVaR).[65] In addition to mitigating risk, the CVaR objective increases robustness to model uncertainties.[66][67] However, CVaR optimization in risk-averse RL requires special care, to prevent gradient bias[68] and blindness to success.[69]
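As a small illustration of the risk-averse objective, the following sketch computes the CVaR of a sample of returns as the mean of the worst alpha-fraction of outcomes (one common lower-tail convention); the sample values are fabricated purely for illustration.

```python
import numpy as np

# CVaR of a sample of returns: the mean of the worst alpha-fraction of outcomes.
def cvar(returns, alpha=0.1):
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

# A risk-averse agent prefers policies with a higher CVaR, not just a higher mean.
print(cvar([10, 9, 8, -50, 9, 10, 11, 8, 9, 10], alpha=0.2))  # dominated by the -50 outcome
```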
Self-reinforcement learning (or self-learning) is a learning paradigm that does not use the concept of immediate reward Ra(s,s′){\displaystyle R_{a}(s,s')} after the transition from s{\displaystyle s} to s′{\displaystyle s'} with action a{\displaystyle a}. It does not use external reinforcement; it uses only the agent's internal self-reinforcement, provided by a mechanism of feelings and emotions. In the learning process, emotions are backpropagated by a mechanism of secondary reinforcement. The learning equation does not include the immediate reward; it includes only the state evaluation.
The self-reinforcement algorithm updates a memory matrix W=||w(a,s)||{\displaystyle W=||w(a,s)||} such that in each iteration it executes a routine of acting in the current situation, receiving the consequence situation, computing the emotion (state evaluation) of being in that consequence situation, and adding that evaluation to the corresponding memory entry w(a,s){\displaystyle w(a,s)}.
Initial conditions of the memory are received as input from the genetic environment. It is a system with only one input (situation), and only one output (action, or behavior).
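A hedged sketch of a crossbar-style update in this spirit is given below; the toy environment, the state-evaluation ("emotion") function and all names are assumptions for illustration rather than the original CAA specification.

```python
import numpy as np

# Crossbar-style self-reinforcement sketch: act in situation s, observe the
# consequence situation, evaluate it ("emotion"), and add that evaluation to
# the memory entry w(a, s). No external reward signal is used.
n_actions, n_states = 2, 3
W = np.zeros((n_actions, n_states))          # memory matrix w(a, s)

def state_evaluation(s_next):                # "emotion" of the consequence state (assumed)
    return 1.0 if s_next == 2 else -0.1

def toy_env(s, a):                           # illustrative deterministic environment
    return (s + a + 1) % 3

def caa_step(W, s):
    a = int(np.argmax(W[:, s]))              # choose an action in situation s
    s_next = toy_env(s, a)                   # receive the consequence situation
    W[a, s] += state_evaluation(s_next)      # update uses state evaluation, not reward
    return s_next

s = 0
for _ in range(20):
    s = caa_step(W, s)
```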
Self-reinforcement (self-learning) was introduced in 1982 along with a neural network capable of self-reinforcement learning, named Crossbar Adaptive Array (CAA).[70][71]The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence states. The system is driven by the interaction between cognition and emotion.[72]
In recent years, reinforcement learning has become a significant concept in Natural Language Processing (NLP), where tasks are often sequential decision-making rather than static classification. Reinforcement learning is a framework in which an agent takes actions in an environment to maximize the accumulation of rewards. This framework is well suited to many NLP tasks, including dialogue generation, text summarization, and machine translation, where the quality of the output depends on optimizing long-term or human-centered goals rather than predicting a single correct label.
Early applications of RL in NLP emerged in dialogue systems, where conversation was framed as a series of actions optimized for fluency and coherence. These early attempts, including policy gradient and sequence-level training techniques, laid a foundation for the broader application of reinforcement learning to other areas of NLP.
A major breakthrough happened with the introduction of Reinforcement Learning from Human Feedback (RLHF), a method in which human feedback is used to train a reward model that guides the RL agent. Unlike traditional rule-based or supervised systems, RLHF allows models to align their behavior with human judgments on complex and subjective tasks. This technique was initially used in the development of InstructGPT, an effective language model trained to follow human instructions, and later in ChatGPT, which incorporates RLHF to improve output responses and ensure safety.
More recently, researchers have explored the use of offline RL in NLP to improve dialogue systems without the need for live human interaction. These methods optimize for user engagement, coherence, and diversity based on past conversation logs and pre-trained reward models.
Efficient comparison of RL algorithms is essential for research, deployment and monitoring of RL systems. To compare different algorithms on a given environment, an agent can be trained for each algorithm. Since the performance is sensitive to implementation details, all algorithms should be implemented as closely as possible to each other.[73] After the training is finished, the agents can be run on a sample of test episodes, and their scores (returns) can be compared. Since episodes are typically assumed to be i.i.d., standard statistical tools can be used for hypothesis testing, such as the T-test and the permutation test.[74] This requires accumulating all the rewards within an episode into a single number—the episodic return. However, this causes a loss of information, as different time-steps are averaged together, possibly with different levels of noise. Whenever the noise level varies across the episode, the statistical power can be improved significantly by weighting the rewards according to their estimated noise.[75]
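For example, two agents' episodic returns on independent test episodes could be compared with a two-sample t-test, as in the sketch below; the return samples are fabricated purely for illustration.

```python
import numpy as np
from scipy import stats

# Comparing two trained agents by their episodic returns on independent test episodes.
returns_a = np.array([12.1, 9.8, 11.4, 10.9, 12.5, 10.2, 11.1, 9.9])
returns_b = np.array([10.4, 9.1, 10.8, 9.7, 10.1, 9.5, 10.6, 9.8])

t_stat, p_value = stats.ttest_ind(returns_a, returns_b, equal_var=False)
print(t_stat, p_value)   # a small p-value suggests a real difference in mean return
```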
Despite significant advancements, reinforcement learning (RL) continues to face several challenges and limitations that hinder its widespread application in real-world scenarios.
RL algorithms often require a large number of interactions with the environment to learn effective policies, leading to high computational costs and long training times. For instance, OpenAI's Dota-playing bot utilized thousands of years of simulated gameplay to achieve human-level performance. Techniques like experience replay and curriculum learning have been proposed to mitigate sample inefficiency, but these techniques add more complexity and are not always sufficient for real-world applications.
Training RL models, particularly deep neural network-based models, can be unstable and prone to divergence. A small change in the policy or environment can lead to extreme fluctuations in performance, making it difficult to achieve consistent results. This instability is further exacerbated in the case of continuous or high-dimensional action spaces, where the learning problem becomes more complex and less predictable.
RL agents trained in specific environments often struggle to generalize their learned policies to new, unseen scenarios. This is a major obstacle preventing the application of RL to dynamic real-world environments where adaptability is crucial. The challenge is to develop algorithms that can transfer knowledge across tasks and environments without extensive retraining.
Designing appropriate reward functions is critical in RL because poorly designed reward functions can lead to unintended behaviors. In addition, RL systems trained on biased data may perpetuate existing biases and lead to discriminatory or unfair outcomes. Both of these issues require careful consideration of reward structures and data sources to ensure fairness and desired behaviors.
|
https://en.wikipedia.org/wiki/Reinforcement_learning
|
In computer science, the scientific community metaphor is a metaphor used to aid in understanding scientific communities. The first publications on the scientific community metaphor in 1981 and 1982[1] involved the development of a programming language named Ether that invoked procedural plans to process goals and assertions concurrently by dynamically creating new rules during program execution. Ether also addressed issues of conflict and contradiction with multiple sources of knowledge and multiple viewpoints.
The scientific community metaphor builds on the philosophy, history and sociology of science. It was originally developed building on work in the philosophy of science by Karl Popper and Imre Lakatos. In particular, it initially made use of Lakatos' work on proofs and refutations. Subsequently, development has been influenced by the work of Geof Bowker, Michel Callon, Paul Feyerabend, Elihu M. Gerson, Bruno Latour, John Law, Karl Popper, Susan Leigh Star, Anselm Strauss, and Lucy Suchman.
In particular, Latour's Science in Action had great influence. In the book, Janus figures make paradoxical statements about scientific development. An important challenge for the scientific community metaphor is to reconcile these paradoxical statements.
Scientific research depends critically on monotonicity, concurrency, commutativity, and pluralism to propose, modify, support, and oppose scientific methods, practices, and theories.
Quoting from Carl Hewitt,[1] scientific community metaphor systems have characteristics of monotonicity, concurrency, commutativity, pluralism, skepticism and provenance.
The above characteristics are limited in real scientific communities. Publications are sometimes lost or difficult to retrieve. Concurrency is limited by resources including personnel and funding. Sometimes it is easier to rederive a result than to look it up. Scientists only have so much time and energy to read and try to understand the literature. Scientific fads sometimes sweep up almost everyone in a field. The order in which information is received can influence how it is processed. Sponsors can try to control scientific activities. In Ether the semantics of the kinds of activity described in this paragraph are governed by theactor model.
Scientific research includes generating theories and processes for modifying, supporting, and opposing these theories. Karl Popper called the process "conjectures and refutations", which, although expressing a core insight, has been shown to be too restrictive a characterization by the work of Michel Callon, Paul Feyerabend, Elihu M. Gerson, Mark Johnson, Thomas Kuhn, George Lakoff, Imre Lakatos, Bruno Latour, John Law, Susan Leigh Star, Anselm Strauss, Lucy Suchman, Ludwig Wittgenstein, etc. Three basic kinds of participation in Ether are proposing, supporting, and opposing. Scientific communities are structured to support competition as well as cooperation.
These activities affect the adherence to approaches, theories, methods, etc. in scientific communities. Current adherence does not imply adherence for all future time. Later developments will modify and extend current understandings. Adherence is a local rather than a global phenomenon. No one speaks for the scientific community as a whole.
Opposing ideas may coexist in communities for centuries. On rare occasions a community reaches a breakthrough that clearly decides an issue previously muddled.
Ether used viewpoints to relativize information in publications. However, a great deal of information is shared across viewpoints, so Ether made use of inheritance so that information in a viewpoint could be readily used in other viewpoints. Sometimes this inheritance is not exact, as when the laws of physics in Newtonian mechanics are derived from those of special relativity. In such cases Ether used translation instead of inheritance. Bruno Latour has analyzed translation in scientific communities in the context of actor network theory. Imre Lakatos studied very sophisticated kinds of translations of mathematical (e.g., the Euler formula for polyhedra) and scientific theories.
Viewpoints were used to implement natural deduction (Fitch [1952]) in Ether. In order to prove a goal of the form (P implies Q) in a viewpoint V, it is sufficient to create a new viewpoint V' that inherits from V, assert P in V', and then prove Q in V'. An idea like this was originally introduced into programming languages by Rulifson, Derksen, and Waldinger [1973], except that since Ether is concurrent rather than sequential, it does not rely on being in a single viewpoint that can be sequentially pushed and popped to move to other viewpoints.
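A hedged toy sketch (in Python, not Ether) of this viewpoint-based natural deduction idea might look as follows; the Viewpoint class, its rule format and the sequential derivation check are illustrative assumptions and do not capture Ether's concurrent semantics.

```python
# Toy model of viewpoint-based natural deduction: to establish "P implies Q" in
# viewpoint V, create a child viewpoint that inherits from V, assert P there,
# and check that Q becomes derivable.
class Viewpoint:
    def __init__(self, parent=None):
        self.parent = parent
        self.assertions = set()
        self.rules = []                      # (premise, conclusion) pairs

    def all_rules(self):
        return self.rules + (self.parent.all_rules() if self.parent else [])

    def holds(self, fact):
        if fact in self.assertions:
            return True
        if any(c == fact and self.holds(p) for p, c in self.all_rules()):
            return True
        return self.parent.holds(fact) if self.parent else False

V = Viewpoint()
V.rules.append(("P", "Q"))                   # background knowledge in V

V2 = Viewpoint(parent=V)                     # new viewpoint inheriting from V
V2.assertions.add("P")                       # assert P in the child viewpoint
print(V2.holds("Q"))                         # True, so V supports "P implies Q"
```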
Ultimately resolving issues among these viewpoints are matters fornegotiation(as studied in the sociology and philosophy of science by Geof Bowker,Michel Callon,Paul Feyerabend, Elihu M. Gerson,Bruno Latour,John Law,Karl Popper, Susan Leigh Star, Anselm Strauss, Lucy Suchman, etc.).
Alan Turing was one of the first to attempt to more precisely characterize individual intelligence through the notion of his famous Turing Test. This paradigm was developed and deepened in the field of Artificial Intelligence. Allen Newell and Herbert A. Simon did pioneering work in analyzing the protocols of individual human problem solving behavior on puzzles. More recently Marvin Minsky has developed the idea that the mind of an individual human is composed of a society of agents in Society of Mind (see the analysis by Push Singh).
The above research on individual human problem solving iscomplementaryto the scientific community metaphor.
Some developments in hardware and software technology for theInternetare being applied in light of the scientific community metaphor.Hewitt 2006
Legal concerns (e.g.,HIPAA,Sarbanes-Oxley, "The Books and Records Rules" in SEC Rule 17a-3/4 and "Design Criteria Standard for Electronic Records Management Software Applications" in DOD 5015.2 in theU.S.) are leading organizations to store information monotonically forever. It has just now become less costly in many cases to store information onmagnetic diskthan on tape. With increasing storage capacity, sites can monotonically record what they read from the Internet as well as monotonically recording their own operations.
Search enginescurrently provide rudimentary access to all this information. Future systems will provideinteractive question answering broadly conceivedthat will make all this information much more useful.
Massive concurrency (i.e., Web services and multi-core computer architectures) lies in the future, posing enormous challenges and opportunities for the scientific community metaphor. In particular, the scientific community metaphor is being used in client cloud computing.[2]
|
https://en.wikipedia.org/wiki/Scientific_community_metaphor
|
Modular self-reconfiguring robotic systems or self-reconfigurable modular robots are autonomous kinematic machines with variable morphology. Beyond conventional actuation, sensing and control typically found in fixed-morphology robots, self-reconfiguring robots are also able to deliberately change their own shape by rearranging the connectivity of their parts, in order to adapt to new circumstances, perform new tasks, or recover from damage.
For example, a robot made of such components could assume aworm-like shape to move through a narrow pipe, reassemble into something withspider-like legs to cross uneven terrain, then form a third arbitrary object (like a ball or wheel that can spin itself) to move quickly over a fairly flat terrain; it can also be used for making "fixed" objects, such as walls, shelters, or buildings.
In some cases this involves each module having 2 or more connectors for connecting several together. They can contain electronics, sensors, computer processors, memory and power supplies; they can also contain actuators that are used for manipulating their location in the environment and in relation with each other. A feature found in some cases is the ability of the modules to automatically connect and disconnect themselves to and from each other, and to form into many objects or perform many tasks moving or manipulating the environment.
By saying "self-reconfiguring" or "self-reconfigurable" it means that the mechanism or device is capable of utilizing its own system of control such as with actuators orstochasticmeans to change its overall structural shape. Having the quality of being "modular" in "self-reconfiguring modular robotics" is to say that the same module or set of modules can be added to or removed from the system, as opposed to being generically "modularized" in the broader sense. The underlying intent is to have an indefinite number of identical modules, or a finite and relatively small set of identical modules, in a mesh or matrix structure of self-reconfigurable modules.
Self-reconfiguration is different from the concept ofself-replication, which is not a quality that a self-reconfigurable module or collection of modules needs to possess. A matrix of modules does not need to be able to increase the quantity of modules in its matrix to be considered self-reconfigurable. It is sufficient for self-reconfigurable modules to be produced at a conventional factory, where dedicated machines stamp or mold components that are thenassembledinto a module, and added to an existing matrix in order to supplement it to increase the quantity or to replace worn out modules.
A matrix made up of many modules can separate to form multiple matrices with fewer modules, or they can combine, or recombine, to form a larger matrix. Some advantages of separating into multiple matrices include the ability to tackle multiple and simpler tasks at locations that are remote from each other simultaneously, transferring through barriers with openings that are too small for a single larger matrix to fit through but not too small for smaller matrix fragments or individual modules, and energy saving purposes by only utilizing enough modules to accomplish a given task. Some advantages of combining multiple matrices into a single matrix is ability to form larger structures such as an elongated bridge, more complex structures such as a robot with many arms or an arm with more degrees of freedom, and increasing strength. Increasing strength, in this sense, can be in the form of increasing the rigidity of a fixed or static structure, increasing the net or collective amount of force for raising, lowering, pushing, or pulling another object, or another part of the matrix, or any combination of these features.
There are two basic methods of segment articulation that self-reconfigurable mechanisms can utilize to reshape their structures: chain reconfiguration and lattice reconfiguration.
Modular robots are usually composed of multiple building blocks of a relatively small repertoire, with uniform docking interfaces that allow transfer of mechanical forces and moments, electrical power and communication throughout the robot.
The modular building blocks usually consist of some primary structural actuated unit, and potentially additional specialized units such as grippers, feet, wheels, cameras, payload and energy storage and generation.
Modular self-reconfiguring robotic systems can be generally classified into several architectural groups by the geometric arrangement of their unit (lattice vs. chain). Several systems exhibit hybrid properties, and modular robots have also been classified into the two categories of Mobile Configuration Change (MCC) and Whole Body Locomotion (WBL).[1]
Modular robotic systems can also be classified according to the way by which units are reconfigured (moved) into place.
Modular robotic systems are also generally classified depending on the design of the modules.
Other modular robotic systems exist which are not self-reconfigurable, and thus do not formally belong to this family of robots though they may have similar appearance. For example, self-assembling systems may be composed of multiple modules but cannot dynamically control their target shape. Similarly, tensegrity robotics may be composed of multiple interchangeable modules but cannot self-reconfigure. Self-reconfigurable robotic systems feature reconfigurability, in contrast to their fixed-morphology counterparts, and it can be defined as the extent or degree to which a self-reconfigurable robot or robotic system can transform and evolve into another meaningful configuration with a certain degree of autonomy or human intervention.[3] Reconfigurable systems can also be classified according to the mechanism of reconfigurability.
There are two key motivations for designing modular self-reconfiguring robotic systems.
Both these advantages have not yet been fully realized. A modular robot is likely to be inferior in performance to any single custom robot tailored for a specific task. However, the advantage of modular robotics is only apparent when considering multiple tasks that would normally require a set of different robots.
The added degrees of freedom make modular robots more versatile in their potential capabilities, but also incur a performance tradeoff and increased mechanical and computational complexities.
The quest for self-reconfiguring robotic structures is to some extent inspired by envisioned applications such as long-term space missions, that require long-term self-sustaining robotic ecology that can handle unforeseen situations and may require self repair. A second source of inspiration are biological systems that are self-constructed out of a relatively small repertoire of lower-level building blocks (cells or amino acids, depending on scale of interest). This architecture underlies biological systems' ability to physically adapt, grow, heal, and even self replicate – capabilities that would be desirable in many engineered systems.
While the system has the promise of being capable of doing a wide variety of things, finding the "killer application" has been somewhat elusive. Here are several examples:
One application that highlights the advantages of self-reconfigurable systems is long-term space missions.[4]These require long-term self-sustaining robotic ecology that can handle unforeseen situations and may require self repair. Self-reconfigurable systems have the ability to handle tasks that are not known a priori, especially compared to fixed configuration systems. In addition, space missions are highly volume- and mass-constrained. Sending a robot system that can reconfigure to achieve many tasks may be more effective than sending many robots that each can do one task.
Another example of an application has been coined "telepario" by CMU professors Todd Mowry and Seth Goldstein. What the researchers propose to make are moving, physical, three-dimensional replicas of people or objects, so lifelike that human senses would accept them as real. This would eliminate the need for cumbersome virtual reality gear and overcome the viewing angle limitations of modern 3D approaches. The replicas would mimic the shape and appearance of a person or object being imaged in real time, and as the originals moved, so would their replicas. One aspect of this application is that the main development thrust is geometric representation rather than applying forces to the environment as in a typical robotic manipulation task. This project is widely known as claytronics[5] or Programmable matter (noting that programmable matter is a much more general term, encompassing functional programmable materials, as well).
A third long-term vision for these systems has been called "bucket of stuff", which would be a container filled with modular robots that can accept user commands and adopt an appropriate form in order to complete household chores.[6][7]
The roots of the concept of modular self-reconfigurable robots can be traced back to the "quick change" end effector and automatic tool changers in computer numerical controlled machining centers in the 1970s. Here, special modules each with a common connection mechanism could be automatically swapped out on the end of a robotic arm. However, taking the basic concept of the common connection mechanism and applying it to the whole robot was introduced by Toshio Fukuda with the CEBOT (short for cellular robot) in the late 1980s.
The early 1990s saw further development fromGregory S. Chirikjian, Mark Yim, Joseph Michael, and Satoshi Murata. Chirikjian, Michael, and Murata developed lattice reconfiguration systems and Yim developed a chain based system. While these researchers started with a mechanical engineering emphasis, designing and building modules then developing code to program them, the work of Daniela Rus and Wei-min Shen developed hardware but had a greater impact on the programming aspects. They started a trend towards provable or verifiable distributed algorithms for the control of large numbers of modules.
One of the more interesting hardware platforms recently has been the MTRAN II and III systems developed by Satoshi Murata et al. This system is a hybrid chain and lattice system. It has the advantage of being able to achieve tasks more easily like chain systems, yet reconfigure like a lattice system.
More recently new efforts in stochastic self-assembly have been pursued byHod Lipsonand Eric Klavins. A large effort atCarnegie Mellon Universityheaded by Seth Goldstein and Todd Mowry has started looking at issues in developing millions of modules.
Many tasks have been shown to be achievable, especially with chain reconfiguration modules. This demonstrates the versatility of these systems; however, the other two advantages, robustness and low cost, have not been demonstrated. In general, the prototype systems developed in the labs have been fragile and expensive, as would be expected during any initial development.
There is a growing number of research groups actively involved in modular robotics research. To date, about 30 systems have been designed and constructed, some of which are shown below.
A chain self-reconfiguration system. Each module is about 50 mm on a side, and has 1 rotational DOF. It is part of the PolyBot modular robot family that has demonstrated many modes of locomotion including walking: biped, 14 legged, slinky-like, snake-like: concertina in a gopher hole, inchworm gaits, rectilinear undulation and sidewinding gaits, rolling like a tread at up to 1.4 m/s, riding a tricycle, climbing: stairs, poles pipes, ramps etc. More information can be found at the polybot webpage at PARC.[15]
A hybrid type self-reconfigurable system. Each module is two cubes in size (65 mm side), and has 2 rotational DOF and 6 flat surfaces for connection. It is the third M-TRAN prototype. Compared with the former (M-TRAN II), the speed and reliability of connection are largely improved. As a chain type system, locomotion by a CPG (central pattern generator) controller in various shapes has been demonstrated by M-TRAN II. As a lattice type system, it can change its configuration, e.g., between a 4-legged walker and a caterpillar-like robot. See the M-TRAN webpage at AIST.[16]
AMOEBA-I, a three-module reconfigurable mobile robot, was developed in the Shenyang Institute of Automation (SIA), Chinese Academy of Sciences (CAS) by Liu J G et al.[1][2] AMOEBA-I has nine kinds of non-isomorphic configurations and high mobility under unstructured environments. Four generations of its platform have been developed and a series of studies have been carried out on its reconfiguration mechanism, non-isomorphic configurations, tipover stability, and reconfiguration planning. Experiments have demonstrated that such a structure permits good mobility and high flexibility on uneven terrain. Being hyper-redundant, modularized and reconfigurable, AMOEBA-I has many possible applications such as urban search and rescue (USAR) and space exploration.
Stochastic-3D (2005)
High spatial resolution for arbitrary three-dimensional shape formation with modular robots can be accomplished using a lattice system with large quantities of very small, prospectively microscopic modules. At small scales, and with large quantities of modules, deterministic control over reconfiguration of individual modules will become infeasible, while stochastic mechanisms will naturally prevail. The microscopic size of modules will make the use of electromagnetic actuation and interconnection prohibitive, as well as the use of on-board power storage.
Three large scale prototypes were built in attempt to demonstrate dynamically programmable three-dimensional stochastic reconfiguration in a neutral-buoyancy environment. The first prototype used electromagnets for module reconfiguration and interconnection. The modules were 100 mm cubes and weighed 0.81 kg. The second prototype used stochastic fluidic reconfiguration and interconnection mechanism. Its 130 mm cubic modules weighed 1.78 kg each and made reconfiguration experiments excessively slow. The current third implementation inherits the fluidic reconfiguration principle. The lattice grid size is 80 mm, and the reconfiguration experiments are under way.[17]
Molecubes (2005)
This hybrid self-reconfiguring system was built by the Cornell Computational Synthesis Lab to physically demonstrate artificial kinematic self-reproduction. Each module is a 0.65 kg cube with 100 mm long edges and one rotational degree of freedom. The axis of rotation is aligned with the cube's longest diagonal. Physical self-reproduction of both a three- and a four-module robot was demonstrated.[18] It was also shown that, disregarding the gravity constraints, an infinite number of self-reproducing chain meta-structures can be built from Molecubes. More information can be found at the Creative Machines Lab self-replication page.
The Programmable Parts (2005)
The programmable parts are stirred randomly on an air-hockey table by randomly actuated air jets. When they collide and stick, they can communicate and decide whether to stay stuck, or if and when to detach. Local interaction rules can be devised and optimized to guide the robots to make any desired global shape. More information can be found at theprogrammable parts web page.
SuperBot(2006)
The SuperBot modules fall into the hybrid architecture. The modules have three degrees of freedom each. The design is based on two previous systems: Conro (by the same research group) and MTRAN (by Murata et al.). Each module can connect to another module through one of its six dock connectors. They can communicate and share power through their dock connectors. Several locomotion gaits have been developed for different arrangements of modules. For high-level communication the modules use hormone-based control, a distributed, scalable protocol that does not require the modules to have unique IDs.
Miche (2006)
The Miche system is a modular lattice system capable of arbitrary shape formation. Each module is an autonomous robot module capable of connecting to and communicating with its immediate neighbors. When assembled into a structure, the modules form a system that can be virtually sculpted using a computer interface and a distributed process. The group of modules collectively decides which modules are part of the final shape and which are not, using algorithms that minimize information transmission and storage. Finally, the modules not in the structure let go and fall off under the control of an external force, in this case gravity.
More details atMiche(Rus et al.).
The Distributed Flight Array(2009)
The Distributed Flight Array is a modular robot consisting of hexagonal-shaped single-rotor units that can take on just about any shape or form. Although each unit is capable of generating enough thrust to lift itself off the ground, on its own it is incapable of flight, much like a helicopter cannot fly without its tail rotor. However, when joined, these units evolve into a sophisticated multi-rotor system capable of coordinated flight and much more. More information can be found at DFA.[19]
Roombots (2009)
Roombots[20]have a hybrid architecture. Each module has three degrees of freedom, two of them using the diametrical axis within a regular cube, and a third (center) axis of rotation connecting the two spherical parts. All three axes are continuously rotatory. The outer Roombots DOF is using the same axis-orientation as Molecubes, the third, central Roombots axis enables the module to rotate its two outer DOF against each other. This novel feature enables a single Roombots module to locomote on flat terrain, but also to climb a wall, or to cross a concave, perpendicular edge. Convex edges require the assembly of at least two modules into a Roombots "Metamodule". Each module has ten available connector slots, currently two of them are equipped with an active connection mechanism based on mechanical latches.
Roombots are designed for two tasks: to eventually shape objects of daily life, e.g. furniture, and to locomote, e.g. as a quadruped or a tripod robot made from multiple modules.
More information can be found at Roombots webpage.[21]
Sambot (2010)
Being inspired by social insects, multicellular organisms and morphogenetic robots, the aim of the Sambot[22] is to develop swarm robotics and conduct research on swarm intelligence, self-assembly and co-evolution of the body and brain for autonomous morphogenesis. Differing from swarm robots, self-reconfigurable robots and morphogenetic robots, the research focuses on self-assembling swarm modular robots that interact and dock with others as autonomous mobile modules to achieve swarm intelligence, and it further discusses autonomous construction in space stations, exploratory tools and artificial complex structures. Each Sambot can run as an autonomous individual on wheels and, in addition, using a combination of its sensors and docking mechanism, the robot can interact and dock with the environment and with other robots. By virtue of this motion and connection capability, Sambot swarms can aggregate into a symbiotic or whole organism and generate locomotion like bionic articular robots. In this way, self-assembling, self-organizing, self-reconfiguring, and self-repairing functions and research become available from both the design and the application point of view. Inside the modular robot, whose size is 80 (W) x 80 (L) x 102 (H) mm, MCUs (ARM and AVR), communication (ZigBee), sensors, power, IMU, and positioning modules are embedded.
More information can be found at "Self-assembly Swarm Modular Robots".[23]
It is mathematically proven that physical strings or chains of simple shapes can be folded into any continuous area or volumetric shape. Moteins employ such shape-universal folding strategies, with as few as one (for 2D shapes) or two (for 3D shapes) degrees of freedom and simple actuators with as few as two (for 2D shapes) or three (for 3D shapes) states per unit.[24]
Symbrion(Symbiotic Evolutionary Robot Organisms) was a project funded by the European Commission between 2008 and 2013 to develop a framework in which a homogeneous swarm of miniature interdependent robots can co-assemble into a larger robotic organism to gain problem-solving momentum. One of the key aspects of Symbrion is inspired by the biological world: an artificial genome that allows storing and evolution of suboptimal configurations in order to increase the speed of adaptation. A large part of the developments within Symbrion is open-source and open-hardware.[25]
Space Engine is an autonomous kinematic platform with variable morphology, capable of creating or manipulating physical space (living space, work space, recreation space). It generates its own multi-directional kinetic force to manipulate objects and perform tasks.

Each module has at least three locks that can automatically attach to or detach from its immediate modules to form rigid structures. Modules propel themselves in linear motion, forward or backward along the X, Y or Z spatial planes, creating their own momentum through controlled pressure variations between one or more of their immediate modules.

Magnetic forces are used to attract and/or repel immediate modules. While a propelling module uses its electromagnets to pull or push itself forward along the roadway created by the static modules, the static modules pull or push the propelling module forward. Increasing the number of modules used for displacement also increases the total momentum or push/pull force. The number of electromagnets on each module can vary according to the requirements of the design.

Modules on the exterior of the matrix cannot displace independently on their own, because they lack one or more reaction faces from immediate modules. They are moved by attaching to modules in the interior of the matrix, which can form a complete roadway for displacement.
Since the early demonstrations of early modular self-reconfiguring systems, the size, robustness and performance has been continuously improving. In parallel, planning and control algorithms have been progressing to handle thousands of units. There are, however, several key steps that are necessary for these systems to realize their promise ofadaptability, robustness and low cost. These steps can be broken down into challenges in the hardware design, in planning and control algorithms and in application. These challenges are often intertwined.
The extent to which the promise of self-reconfiguring robotic systems can be realized depends critically on the numbers of modules in the system. To date, only systems with up to about 50 units have been demonstrated, with this number stagnating over almost a decade. There are a number of fundamental limiting factors that govern this number:
Though algorithms have been developed for handling thousands of units in ideal conditions, challenges to scalability remain both in low-level control and high-level planning to overcome realistic constraints:
Though the advantages of modular self-reconfiguring robotic systems are largely recognized, it has been difficult to identify specific application domains where benefits can be demonstrated in the short term. Some suggested applications are
Several robotic fields have identifiedGrand Challengesthat act as a catalyst for development and serve as a short-term goal in absence of immediatekiller apps. The Grand Challenge is not in itself a research agenda or milestone, but a means to stimulate and evaluate coordinated progress across multiple technical frontiers. Several Grand Challenges have been proposed for the modular self-reconfiguring robotics field:
A unique potential solution that can be exploited is the use of inductors as transducers. This could be useful for dealing with docking and bonding problems. At the same time it could also be beneficial for its capabilities of docking detection (alignment and finding distance), power transmission, and (data signal) communication. A proof-of-concept has been demonstrated in a video. The rather limited exploration down this avenue is probably a consequence of the historical lack of need in any applications for such an approach.
Self-Reconfiguring and Modular Technology is a group for discussion of the perception and understanding of the developing field of robotics.
Modular Robotics Google Groupis an open public forum dedicated to announcements of events in the field of Modular Robotics. This medium is used to disseminate calls to workshops, special issues and other academic activities of interest to modular robotics researchers. The founders of this Google group intend it to facilitate the exchange of information and ideas within the community of modular robotics researchers around the world and thus promote acceleration of advancements in modular robotics. Anybody who is interested in objectives and progress of Modular Robotics can join this Google group and learn about the new developments in this field.
|
https://en.wikipedia.org/wiki/Self-reconfiguring_modular_robot
|
Social simulation is a research field that applies computational methods to study issues in the social sciences. The issues explored include problems in computational law, psychology,[1] organizational behavior,[2] sociology, political science, economics, anthropology, geography, engineering,[2] archaeology and linguistics (Takahashi, Sallach & Rouchier 2007).
Social simulation aims to cross the gap between the descriptive approach used in the social sciences and the formal approach used in the natural sciences, by moving the focus on the processes/mechanisms/behaviors that build the social reality.
In social simulation, computers support human reasoning activities by executing these mechanisms. This field explores the simulation of societies as complex non-linear systems, which are difficult to study with classical mathematical equation-based models. Robert Axelrod regards social simulation as a third way of doing science, differing from both the deductive and inductive approach; generating data that can be analysed inductively, but coming from a rigorously specified set of rules rather than from direct measurement of the real world. Thus, simulating a phenomenon is akin to generating it—constructing artificial societies. These ambitious aims have encountered several criticisms.
The social simulation approach to the social sciences is promoted and coordinated by four regional associations, the European Social Simulation Association (ESSA) for Europe, the Asian Social Simulation Association (ASSA) for Asia, the Computational Social Science Society of the Americas (CSSS) in North America, and the Pan-Asian Association for Agent-based Approach in Social Systems Sciences (PAAA) in Pacific Asia.
The history of the agent-based model can be traced back to the Von Neumann machine, a theoretical machine capable of reproducing itself. The device John von Neumann proposed would follow precisely detailed instructions to fashion a copy of itself. The concept was then improved by von Neumann's friend Stanislaw Ulam, also a mathematician; Ulam suggested that the machine be built on paper, as a collection of cells on a grid. The idea intrigued von Neumann, who drew it up—creating the first of the devices later termed cellular automata.
Another improvement was brought by the mathematician John Conway. He constructed the well-known Game of Life. Unlike von Neumann's machine, Conway's Game of Life operated by simple rules in a virtual world in the form of a 2-dimensional checkerboard.
The birth of the agent-based model as a model for social systems was primarily brought about by a computer scientist,Craig Reynolds. He tried to model the reality of lively biological agents, known as theartificial life, a term coined byChristopher Langton.
Joshua M. Epstein and Robert Axtell developed the first large-scale agent model, the Sugarscape, to simulate and explore the role of social phenomena such as seasonal migrations, pollution, sexual reproduction, combat, transmission of disease, and even culture.
Kathleen M. Carley published "Computational Organizational Science and Organizational Engineering", defining the movement of simulation into organizations, established a journal for social simulation applied to organizations and complex socio-technical systems, Computational and Mathematical Organization Theory, and was the founding president of the North American Association of Computational Social and Organizational Systems, which morphed into the current CSSSA.
Nigel Gilbert published with Klaus G. Troitzsch the first textbook on social simulation, "Simulation for the Social Scientist" (1999), and established its most relevant journal: the Journal of Artificial Societies and Social Simulation.
More recently, Ron Sun developed methods for basing agent-based simulation on models of human cognition, known as cognitive social simulation (see Sun 2006).
Here are some sample topics that have been explored with social simulation:
Social simulation can refer to a general class of strategies for understanding social dynamics using computers to simulate social systems. Social simulation allows for a more systematic way of viewing the possibilities of outcomes.
There are four major types of social simulation:
A social simulation may fall within the rubric of computational sociology, which is a recently developed branch of sociology that uses computation to analyze social phenomena. The basic premise of computational sociology is to take advantage of computer simulations (Polhill & Edmonds 2007) in the construction of social theories. It involves the understanding of social agents, the interaction among these agents, and the effect of these interactions on the social aggregate. Although the subject matter and methodologies in social science differ from those in natural science or computer science, several of the approaches used in contemporary social simulation originated from fields such as physics and artificial intelligence.
System Level Simulation (SLS) is the oldest level of social simulation. System level simulation looks at the situation as a whole. This theoretical outlook on social situations uses a wide range of information to determine what should happen to society and its members if certain variables are present. Therefore, with specific variables presented, society and its members should have a certain response to the new situation. Navigating through this theoretical simulation will allow researchers to develop educated ideas of what will happen under some specific variables.
For example, ifNASAwere to conduct a system level simulation it would benefit the organization by providing a cost-effective research method to navigate through the simulation. This allows the researcher to steer through the virtual possibilities of the given simulation and developsafetyprocedures, and to produce proven facts about how a certain situation will play out. (National Research 2006)
System level modeling (SLM) aims to specifically predict (unlike system level simulation's generalization in prediction) and convey any number of actions, behaviors, or other theoretical possibilities of nearly any person, object, construct et cetera within a system using a large set of mathematical equations and computer programming in the form of models.
A model is a representation of a specific thing ranging from objects and people to structures and products created through mathematical equations and are designed, using computers, in such a way that they are able to stand-in as the aforementioned things in a study. Models can be either simplistic or complex, depending on the need for either; however, models are intended to be simpler than what they are representing while remaining realistically similar in order to be used accurately. They are built using a collection of data that is translated into computing languages that allow them to represent the system in question. These models, much like simulations, are used to help us better understand specific roles and actions of different things so as to predict behavior and the like.
Agent-based social simulation (ABSS) consists of modeling different societies after artificial agents (varying in scale) and placing them in a computer-simulated society to observe the behaviors of the agents. From this data it is possible to learn about the reactions of the artificial agents and translate them into the results of non-artificial agents and simulations. Three main fields in ABSS are agent-based computing, social science, and computer simulation.
Agent-based computing is the design of the model and agents, while the computer simulation is the part of the simulation of the agents in the model and the outcomes. The social science is a mixture of sciences and social part of the model. It is where the social phenomena is developed and theorized. The main purpose of ABSS is to provide models and tools for agent-based simulation of social phenomena. With ABSS we can explore different outcomes for phenomena where we might not be able to view the outcome in real life. It can provide us valuable information on society and the outcomes of social events or phenomena.
Agent-based modeling(ABM) is a system in which a collection of agents independently interact on networks. Each individual agent is responsible for different behaviors that result in collective behaviors. These behaviors as a whole help to define the workings of the network. ABM focuses on human social interactions and how people work together and communicate with one another without having one, single "group mind". This essentially means that it tends to focus on the consequences of interactions between people (the agents) in a population. Researchers are better able to understand this type of modeling by modeling these dynamics on a smaller, more localized level. Essentially, ABM helps to better understand interactions between people (agents) who, in turn, influence one another (in response to these influences). Simple individual rules or actions can result in coherentgroup behavior. Changes in these individual acts can affect the collective group in any given population.
Agent-based modeling is an experimental tool for theoretical research. It enables one to deal with more complex individual behaviors, such as adaptation. Overall, through this type of modeling, the creator, or researcher, aims to model behavior of agents and the communication between them in order to better understand how these individual interactions impact an entire population. In essence, ABM is a way of modeling and understanding different global patterns.
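As a concrete, if deliberately tiny, illustration of these ideas, the following sketch implements a toy agent-based model in Python: agents on a ring repeatedly copy a randomly chosen neighbour's binary "opinion", and a group-level pattern emerges from purely local interactions. It is a generic illustration under assumed parameters, not any specific published model.

```python
import random

def run_opinion_model(n_agents=50, steps=200, seed=0):
    """Minimal agent-based model: agents on a ring copy a random neighbour's
    binary 'opinion'. Purely illustrative; not any specific published model."""
    rng = random.Random(seed)
    opinions = [rng.choice([0, 1]) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)                       # pick an agent at random
        neighbour = (i + rng.choice([-1, 1])) % n_agents  # its left or right neighbour
        opinions[i] = opinions[neighbour]                 # local interaction rule
    return opinions

if __name__ == "__main__":
    final = run_opinion_model()
    print("share holding opinion 1:", sum(final) / len(final))
```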
There are several current research projects that relate directly to modeling and agent-based simulation; they are listed below with a brief overview.
Agent-based modeling is most useful in providing a bridge between micro and macro levels, which is a large part of what sociology studies. Agent-based models are most appropriate for studying processes that lack central coordination, including the emergence of institutions that, once established, impose order from the top down. The models focus on how simple and predictable local interactions generate familiar but highly detailed global patterns, such as the emergence of norms and participation in collective action. Surveying recent applications, Michael W. Macy and Robert Willer identified two main problems addressed by agent-based modeling: the self-organization of social structure and the emergence of social order (Macy & Willer 2002). Below is a brief description of each problem as Macy and Willer describe it:
These examples simply show the complexity of our environment and that agent-based models are designed to explore the minimal conditions, the simplest set of assumptions about human behavior, required for a given social phenomenon to emerge at a higher level of organization.
Since its creation, computerized social simulation has been the target of some criticism regarding its practicality and accuracy. Social simulation's simplification of complex phenomena into models we can better understand is sometimes seen as a drawback, since using fairly simple models to simulate real life with computers is not always the best way to predict behavior.
Most of the criticism seems to be aimed at agent-based models and simulation and how they work:
Researchers working in social simulation might respond that the competing theories from thesocial sciencesare far simpler than those achieved through simulation and therefore suffer the aforementioned drawbacks much more strongly. Theories in some social science tend to be linear models that are not dynamic, and are generally inferred from small laboratoryexperiments(laboratory tests are most common in psychology but rare in sociology, political science, economics and geography). The behavior of populations of agents under these models is rarely tested or verified against empirical observation.
|
https://en.wikipedia.org/wiki/Social_simulation
|
Swarm intelligence(SI) is thecollective behaviorofdecentralized,self-organizedsystems, natural or artificial. The concept is employed in work onartificial intelligence. The expression was introduced byGerardo Beniand Jing Wang in 1989, in the context of cellular robotic systems.[1][2]
Swarm intelligence systems consist typically of a population of simpleagentsorboidsinteracting locally with one another and with their environment.[3]The inspiration often comes from nature, especially biological systems.[4]The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to theemergenceof "intelligent" global behavior, unknown to the individual agents.[5]Examples of swarm intelligence in natural systems includeant colonies,bee colonies, birdflocking, hawkshunting, animalherding,bacterial growth, fishschoolingandmicrobial intelligence.
The application of swarm principles torobotsis calledswarm roboticswhileswarm intelligencerefers to the more general set of algorithms.Swarm predictionhas been used in the context of forecasting problems. Similar approaches to those proposed for swarm robotics are considered forgenetically modified organismsin synthetic collective intelligence.[6]
Boids is anartificial lifeprogram, developed byCraig Reynoldsin 1986, which simulatesflocking. It was published in 1987 in the proceedings of theACMSIGGRAPHconference.[7]The name "boid" corresponds to a shortened version of "bird-oid object", which refers to a bird-like object.[8]
As with most artificial life simulations, Boids is an example ofemergentbehavior; that is, the complexity of Boids arises from the interaction of individual agents (the boids, in this case) adhering to a set of simple rules. The rules applied in the simplest Boids world are as follows:
More complex rules can be added, such as obstacle avoidance and goal seeking.
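Reynolds's classic formulation uses three local steering rules: separation, alignment, and cohesion. The sketch below is a rough, illustrative implementation of those rules; the neighbourhood radius, weights, and time step are arbitrary assumptions, not values from Reynolds's original program.

```python
import numpy as np

def boids_step(pos, vel, radius=1.0, w_sep=0.05, w_align=0.05, w_coh=0.01, dt=0.1):
    """One illustrative update of Reynolds-style boids rules (separation,
    alignment, cohesion). Weights and radius are arbitrary assumptions."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        diff = pos - pos[i]
        dist = np.linalg.norm(diff, axis=1)
        mask = (dist > 0) & (dist < radius)              # local neighbourhood
        if not mask.any():
            continue
        separation = -diff[mask].mean(axis=0)            # steer away from neighbours
        alignment = vel[mask].mean(axis=0) - vel[i]      # match neighbours' heading
        cohesion = diff[mask].mean(axis=0)               # move toward local centre
        new_vel[i] += w_sep * separation + w_align * alignment + w_coh * cohesion
    return pos + dt * new_vel, new_vel

# Example: 30 boids with random positions and small random velocities
pos = np.random.rand(30, 2) * 10
vel = np.random.randn(30, 2) * 0.1
for _ in range(100):
    pos, vel = boids_step(pos, vel)
```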
Self-propelled particles (SPP), also referred to as theVicsek model, was introduced in 1995 byVicseket al.[9]as a special case of theboidsmodel introduced in 1986 byReynolds.[7]A swarm is modelled in SPP by a collection of particles that move with a constant speed but respond to a random perturbation by adopting at each time increment the average direction of motion of the other particles in their local neighbourhood.[10]SPP models predict that swarming animals share certain properties at the group level, regardless of the type of animals in the swarm.[11]Swarming systems give rise toemergent behaviourswhich occur at many different scales, some of which are turning out to be both universal and robust. It has become a challenge in theoretical physics to find minimal statistical models that capture these behaviours.[12][13][14]
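The update described above can be written down almost directly: each particle adopts the mean heading of its neighbours within a fixed radius, adds a random perturbation, and moves one step at constant speed. The sketch below is illustrative only; the box size, interaction radius, speed, and noise width are arbitrary assumptions.

```python
import numpy as np

def vicsek_step(pos, theta, box=10.0, radius=1.0, speed=0.3, eta=0.2):
    """One update of a basic Vicsek-style SPP model: each particle adopts the
    mean heading of its neighbours plus uniform noise of width eta, then moves
    at constant speed. Parameter values here are illustrative only."""
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= box * np.round(d / box)                     # periodic boundary conditions
        mask = np.linalg.norm(d, axis=1) < radius        # neighbourhood (includes i)
        # circular mean of neighbour headings, plus noise
        mean_dir = np.arctan2(np.sin(theta[mask]).mean(), np.cos(theta[mask]).mean())
        new_theta[i] = mean_dir + eta * (np.random.rand() - 0.5)
    new_pos = pos + speed * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return new_pos % box, new_theta

# Example: 50 particles with random positions and headings
pos = np.random.rand(50, 2) * 10
theta = np.random.uniform(-np.pi, np.pi, 50)
for _ in range(100):
    pos, theta = vicsek_step(pos, theta)
```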
Evolutionary algorithms(EA),particle swarm optimization(PSO),differential evolution(DE),ant colony optimization(ACO) and their variants dominate the field of nature-inspiredmetaheuristics.[15]This list includes algorithms published up to circa the year 2000. A large number of more recent metaphor-inspired metaheuristics have started toattract criticism in the research communityfor hiding their lack of novelty behind an elaborate metaphor.[16][17][18]For algorithms published since that time, seeList of metaphor-based metaheuristics.
Metaheuristics lack confidence in a solution.[19]When appropriate parameters are chosen, and when a sufficient convergence stage is reached, they often find a solution that is optimal or near-optimal; nevertheless, if the optimal solution is not known in advance, the quality of a given solution is not known either.[19]In spite of this obvious drawback it has been shown that these types of algorithms work well in practice, and they have been extensively researched and developed.[20][21][22][23][24]On the other hand, it is possible to avoid this drawback by calculating solution quality for a special case where such a calculation is possible; after such a run it is known that every solution at least as good as the special case's solution carries at least the confidence that the special case had. One such instance is the ant-inspired Monte Carlo algorithm for the Minimum Feedback Arc Set problem, where this has been achieved probabilistically by hybridizing a Monte Carlo algorithm with the Ant Colony Optimization technique.[25]
Ant colony optimization (ACO), introduced by Dorigo in his doctoral dissertation, is a class of optimization algorithms modeled on the actions of an ant colony. ACO is a probabilistic technique useful in problems that deal with finding better paths through graphs. Artificial 'ants'—simulation agents—locate optimal solutions by moving through a parameter space representing all possible solutions. Natural ants lay down pheromones directing each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions.[26]
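A hedged, minimal illustration of the pheromone mechanism described above is the two-branch "bridge" setting often used to explain ACO: ants choose a branch with probability proportional to its pheromone level, deposit pheromone in inverse proportion to branch length, and pheromone evaporates each iteration. All parameter values below are arbitrary assumptions, and the sketch is not Dorigo's algorithm itself.

```python
import random

def double_bridge_aco(n_ants=100, n_iters=50, evaporation=0.1, lengths=(1.0, 2.0)):
    """Toy ant-colony dynamics on a two-branch 'bridge': ants pick a branch with
    probability proportional to its pheromone, deposit pheromone inversely
    proportional to branch length, and pheromone evaporates each iteration.
    All parameter values are illustrative assumptions."""
    pheromone = [1.0, 1.0]
    for _ in range(n_iters):
        deposits = [0.0, 0.0]
        for _ in range(n_ants):
            total = pheromone[0] + pheromone[1]
            branch = 0 if random.random() < pheromone[0] / total else 1
            deposits[branch] += 1.0 / lengths[branch]   # shorter branch => more pheromone
        pheromone = [(1 - evaporation) * p + d for p, d in zip(pheromone, deposits)]
    return pheromone                                    # concentrates on the shorter branch

print(double_bridge_aco())
```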
Particle swarm optimization (PSO) is aglobal optimizationalgorithm for dealing with problems in which a best solution can be represented as a point or surface in an n-dimensional space. Hypotheses are plotted in this space and seeded with an initialvelocity, as well as a communication channel between the particles.[27][28]Particles then move through the solution space, and are evaluated according to somefitnesscriterion after each timestep. Over time, particles are accelerated towards those particles within their communication grouping which have better fitness values. The main advantage of such an approach over other global minimization strategies such assimulated annealingis that the large number of members that make up the particle swarm make the technique impressively resilient to the problem oflocal minima.
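The velocity-and-position update described above can be sketched in a few lines. The following is a minimal, illustrative PSO for minimizing a function; the inertia and acceleration coefficients are common textbook defaults rather than tuned values, and the sphere function used in the example is an assumption chosen only for demonstration.

```python
import numpy as np

def pso(fitness, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization (minimization). Uses the standard
    velocity/position update; coefficients are common defaults, not tuned."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions (candidate solutions)
    v = np.zeros((n_particles, dim))                  # velocities
    pbest = x.copy()                                  # each particle's best position
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()          # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function f(p) = sum(p_i^2)
best_x, best_val = pso(lambda p: float((p ** 2).sum()))
print(best_x, best_val)
```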
Karaboga introduced the artificial bee colony (ABC) metaheuristic in 2005 as an approach to optimizing numerical problems. Inspired by honey bee foraging behavior, Karaboga's model has three components: the employed bee, the onlooker bee, and the scout bee. In practice, the artificial scout bee exposes all food source positions (solutions), good or bad. The employed bee searches for the shortest route to each position in order to extract the food amount (quality) of the source. If the food is depleted from a source, the employed bee becomes a scout and randomly searches for other food sources. Each abandoned source creates negative feedback, meaning the answers found there were poor solutions. The onlooker bees wait for employed bees either to abandon a source or to report that the source has a large quantity of food and is worth sending additional resources to. The more onlooker bees are recruited, the more positive the feedback, meaning that the answer is likely a good solution.
Artificial Swarm Intelligence (ASI) is a method of amplifying the collective intelligence of networked human groups using control algorithms modeled after natural swarms. Sometimes referred to as Human Swarming or Swarm AI, the technology connects groups of human participants into real-time systems that deliberate and converge on solutions as dynamic swarms when simultaneously presented with a question.[29][30][31]ASI has been used for a wide range of applications, from enabling business teams to generate highly accurate financial forecasts[32]to enabling sports fans to outperform Vegas betting markets.[33]ASI has also been used to enable groups of doctors to generate diagnoses with significantly higher accuracy than traditional methods.[34][35]ASI has been used by the Food and Agriculture Organization (FAO) of the United Nations to help forecast famines in hotspots around the world.[36][better source needed]
Swarm Intelligence-based techniques can be used in a number of applications. The U.S. military is investigating swarm techniques for controlling unmanned vehicles. TheEuropean Space Agencyis thinking about an orbital swarm for self-assembly and interferometry.NASAis investigating the use of swarm technology for planetary mapping. A 1992 paper byM. Anthony LewisandGeorge A. Bekeydiscusses the possibility of using swarm intelligence to control nanobots within the body for the purpose of killing cancer tumors.[37]Conversely al-Rifaie and Aber have usedstochastic diffusion searchto help locate tumours.[38][39]Swarm intelligence (SI) is increasingly applied in Internet of Things (IoT)[40][41]systems, and by association to Intent-Based Networking (IBN),[42]due to its ability to handle complex, distributed tasks through decentralized, self-organizing algorithms. Swarm intelligence has also been applied fordata mining[43]andcluster analysis.[44]Ant-based models are further subject of modern management theory.[45]
The use of swarm intelligence in telecommunication networks has also been researched, in the form of ant-based routing. This was pioneered separately by Dorigo et al. and Hewlett-Packard in the mid-1990s, with a number of variants existing. Basically, this uses a probabilistic routing table that rewards/reinforces the route successfully traversed by each "ant" (a small control packet), which floods the network. Reinforcement of the route in the forward direction, in the reverse direction, and in both simultaneously has been researched: backwards reinforcement requires a symmetric network and couples the two directions together; forwards reinforcement rewards a route before the outcome is known (but then one would pay for the cinema before one knows how good the film is). As the system behaves stochastically and therefore lacks repeatability, there are large hurdles to commercial deployment. Mobile media and new technologies have the potential to change the threshold for collective action due to swarm intelligence (Rheingold: 2002, p. 175).
The location of transmission infrastructure for wireless communication networks is an important engineering problem involving competing objectives. A minimal selection of locations (or sites) are required subject to providing adequate area coverage for users. A very different, ant-inspired swarm intelligence algorithm, stochastic diffusion search (SDS), has been successfully used to provide a general model for this problem, related to circle packing and set covering. It has been shown that the SDS can be applied to identify suitable solutions even for large problem instances.[46]
Airlines have also used ant-based routing in assigning aircraft arrivals to airport gates. AtSouthwest Airlinesa software program uses swarm theory, or swarm intelligence—the idea that a colony of ants works better than one alone. Each pilot acts like an ant searching for the best airport gate. "The pilot learns from his experience what's the best for him, and it turns out that that's the best solution for the airline,"Douglas A. Lawsonexplains. As a result, the "colony" of pilots always go to gates they can arrive at and depart from quickly. The program can even alert a pilot of plane back-ups before they happen. "We can anticipate that it's going to happen, so we'll have a gate available," Lawson says.[47]
Artists are using swarm technology as a means of creating complex interactive systems orsimulating crowds.[citation needed]
The Lord of the Ringsfilm trilogymade use of similar technology, known asMassive (software), during battle scenes. Swarm technology is particularly attractive because it is cheap, robust, and simple.
Stanley and Stella in: Breaking the Icewas the first movie to make use of swarm technology for rendering, realistically depicting the movements of groups of fish and birds using the Boids system.[citation needed]
Tim Burton'sBatman Returnsalso made use of swarm technology for showing the movements of a group of bats.[48]
Airlines have used swarm theory to simulate passengers boarding a plane. Southwest Airlines researcher Douglas A. Lawson used an ant-based computer simulation employing only six interaction rules to evaluate boarding times using various boarding methods.(Miller, 2010, xii-xviii).[49]
Networks of distributed users can be organized into "human swarms" through the implementation of real-time closed-loop control systems.[50][51]Developed byLouis Rosenbergin 2015, human swarming, also called artificial swarm intelligence, allows the collective intelligence of interconnected groups of people online to be harnessed.[52][53]The collective intelligence of the group often exceeds the abilities of any one member of the group.[54]
Stanford University School of Medicinepublished in 2018 a study showing that groups of human doctors, when connected together by real-time swarming algorithms, could diagnose medical conditions with substantially higher accuracy than individual doctors or groups of doctors working together using traditional crowd-sourcing methods. In one such study, swarms of human radiologists connected together were tasked with diagnosing chest x-rays and demonstrated a 33% reduction in diagnostic errors as compared to the traditional human methods, and a 22% improvement over traditional machine-learning.[34][55][56][35]
TheUniversity of California San Francisco (UCSF) School of Medicinereleased apreprintin 2021 about the diagnosis ofMRI imagesby small groups of collaborating doctors. The study showed a 23% increase in diagnostic accuracy when using Artificial Swarm Intelligence (ASI) technology compared to majority voting.[57][58]
Swarm grammars are swarms ofstochastic grammarsthat can be evolved to describe complex properties such as found in art and architecture.[59]These grammars interact as agents behaving according to rules of swarm intelligence. Such behavior can also suggestdeep learningalgorithms, in particular when mapping of such swarms to neural circuits is considered.[60]
In a series of works, al-Rifaie et al.[61]have successfully used two swarm intelligence algorithms—one mimicking the behaviour of one species of ants (Leptothorax acervorum) foraging (stochastic diffusion search, SDS) and the other algorithm mimicking the behaviour of birds flocking (particle swarm optimization, PSO)—to describe a novel integration strategy exploiting the local search properties of the PSO with global SDS behaviour. The resultinghybrid algorithmis used to sketch novel drawings of an input image, exploiting an artistic tension between the local behaviour of the 'birds flocking'—as they seek to follow the input sketch—and the global behaviour of the "ants foraging"—as they seek to encourage the flock to explore novel regions of the canvas. The "creativity" of this hybrid swarm system has been analysed under the philosophical light of the "rhizome" in the context ofDeleuze's "Orchid and Wasp" metaphor.[62]
A more recent work of al-Rifaie et al., "Swarmic Sketches and Attention Mechanism",[63]introduces a novel approach deploying the mechanism of 'attention' by adapting SDS to selectively attend to detailed areas of a digital canvas. Once the attention of the swarm is drawn to a certain line within the canvas, the capability of PSO is used to produce a 'swarmic sketch' of the attended line. The swarms move throughout the digital canvas in an attempt to satisfy their dynamic roles—attention to areas with more details—associated with them via their fitness function. Having associated the rendering process with the concepts of attention, the performance of the participating swarms creates a unique, non-identical sketch each time the 'artist' swarms embark on interpreting the input line drawings. In other works, while PSO is responsible for the sketching process, SDS controls the attention of the swarm.
In a similar work, "Swarmic Paintings and Colour Attention",[64]non-photorealistic images are produced using SDS algorithm which, in the context of this work, is responsible for colour attention.
The "computational creativity" of the above-mentioned systems are discussed in[61][65][66]through the two prerequisites of creativity (i.e. freedom and constraints) within the swarm intelligence's two infamous phases of exploration and exploitation.
Michael Theodore and Nikolaus Correll used a swarm-intelligent art installation to explore what it takes for engineered systems to appear lifelike.[67]
|
https://en.wikipedia.org/wiki/Swarm_intelligence
|
Swarm robotics is the study of how to design independent systems of robots without centralized control. The emerging swarming behavior of robotic swarms is created through the interactions between individual robots and the environment.[1]This idea emerged from the field of artificial swarm intelligence, as well as from studies of insects, ants, and other systems in nature where swarm behavior occurs.[2]
Relatively simple individual rules can produce a large set of complex swarm behaviors. A key component is the communication between the members of the group, which builds a system of constant feedback. The swarm behavior involves constant change of individuals in cooperation with others, as well as the behavior of the whole group.
The design of swarm robotics systems is guided by swarm intelligence principles, which promote fault tolerance, scalability, and flexibility.[1]Unlike distributed robotic systems in general, swarm robotics emphasizes a large number of robots. While various formulations of swarm intelligence principles exist, one widely recognized set includes:
Miniaturization is also a key factor in swarm robotics: the combined effect of thousands of small robots can maximize the swarm-intelligent approach and achieve meaningful behavior at the swarm level through a greater number of interactions at the individual level.[5]
Compared with individual robots, a swarm can commonly decompose a given mission into subtasks;[6]a swarm is also more robust to partial failure and more flexible with regard to different missions.[7]
The phrase "swarm robotics" was reported to make its first appearance in 1991 according to Google Scholar, but research regarding swarm robotics began to grow in early 2000s. The initial goal of studying swarm robotics was to test whether the concept ofstigmergycould be used as a method for robots to indirectly communication and coordinate with each other.[5]
One of the first international projects regarding swarm robotics was the SWARM-BOTS project funded by the European Commission between 2001 and 2005, in which a swarm of up to 20 robots, capable of independently and physically connecting to each other to form a cooperating system, was used to study swarm behaviors such as collective transport, area coverage, and searching for objects. The result was a demonstration of self-organized teams of robots that cooperate to solve a complex task, with the robots in the swarm taking different roles over time. This work was then expanded upon through the Swarmanoid project (2006–2010), which extended the ideas and algorithms developed in Swarm-bots to heterogeneous robot swarms composed of three types of robots—flying, climbing, and ground-based—that collaborated to carry out a search and retrieval task.[5]
There are many potential applications for swarm robotics.[8]They include tasks that demandminiaturization(nanorobotics,microbotics), like distributed sensing tasks inmicromachineryor the human body. A promising use of swarm robotics is insearch and rescuemissions.[9]Swarms of robots of different sizes could be sent to places that rescue-workers cannot reach safely, to explore the unknown environment and solve complex mazes via onboard sensors.[9]Swarm robotics can also be suited to tasks that demand cheap designs, for instanceminingor agricultural shepherding tasks.[10]
Drone swarms are used in target search,drone displays, and delivery. A drone display commonly uses multiple, lighted drones at night for an artistic display or advertising. A delivery drone swarm can carry multiple packages to a single destination at a time and overcome a single drone's payload and battery limitations.[11]A drone swarm may undertake differentflight formationsto reduce overall energy consumption due to drag forces.[12]
Drone swarming can also introduce additional control issues connected to human factors and the swarm operator. Examples of this include high cognitive demand and complexity when interacting with multiple drones due to changing attention between different individual drones.[13][14]Communication between operator and swarm is also a central aspect.[15]
More controversially, swarms ofmilitary robotscan form an autonomous army. U.S. Naval forces have tested a swarm of autonomous boats that can steer and take offensive actions by themselves. The boats are unmanned and can be fitted with any kind of kit to deter and destroy enemy vessels.[16]
During theSyrian Civil War, Russian forces in the region reported attacks on their main air force base in the country by swarms of fixed-wing drones loaded with explosives.[17]
Another large set of applications may be solved using swarms ofmicro air vehicles, which are also broadly investigated nowadays. In comparison with the pioneering studies of swarms of flying robots using precisemotion capturesystems in laboratory conditions,[18]current systems such asShooting Starcan control teams of hundreds of micro aerial vehicles in outdoor environment[19]usingGNSSsystems (such as GPS) or even stabilize them using onboardlocalizationsystems[20]where GPS is unavailable.[21][22]Swarms of micro aerial vehicles have been already tested in tasks of autonomous surveillance,[23]plume tracking,[24]and reconnaissance in a compact phalanx.[25]Numerous works on cooperative swarms of unmanned ground and aerial vehicles have been conducted with target applications of cooperative environment monitoring,[26]simultaneous localization and mapping,[27]convoy protection,[28]and moving target localization and tracking.[29]
In 2023, University of Washington and Microsoft researchers demonstrated acoustic swarms of tiny robots that create shape-changing smart speakers.[30]These can be used for manipulating acoustic scenes to focus on or mute sounds from a specific region in a room.[31]Here, tiny robots cooperate with each other using sound signals, without any cameras, to navigate cooperatively with centimeter-level accuracy. These swarm devices spread out across a surface to create a distributed and reconfigurable wireless microphone array. They also navigate back to the charging station where they can be automatically recharged.[32]
Most efforts have focused on relatively small groups of machines. However, aKilobotswarm consisting of 1,024 individual robots was demonstrated by Harvard in 2014, the largest to date.[33]
Another example of miniaturization is the LIBOT Robotic System,[34]a low-cost robot built for outdoor swarm robotics. The robots also have provisions for indoor use via Wi-Fi, since GPS reception is poor inside buildings.
Another such attempt is the micro robot (Colias),[35]built in the Computer Intelligence Lab at theUniversity of Lincoln, UK. This micro robot is built on a 4 cm circular chassis and is a low-cost and open platform for use in a variety of swarm robotics applications.
Additionally, progress has been made in applying autonomous swarms to manufacturing, known as swarm 3D printing. This is particularly useful for producing large structures and components, where traditional 3D printing cannot be used due to hardware size constraints. Miniaturization and mass mobilization allow the manufacturing system to achieve scale invariance, so it is not limited in effective build volume. While still in an early stage of development, swarm 3D printing is currently being commercialized by startup companies.[36]
|
https://en.wikipedia.org/wiki/Swarm_robotics
|
Crosslinguistic influence(CLI) refers to the different ways in which one language can affect another within an individual speaker. It typically involves two languages that can affect one another in a bilingual speaker.[1]An example of CLI is the influence of Korean on a Korean native speaker who is learning Japanese or French. Less typically, it could also refer to an interaction between differentdialectsin the mind of a monolingual speaker. CLI can be observed acrosssubsystems of languagesincluding pragmatics, semantics, syntax, morphology, phonology, phonetics, and orthography.[2]Discussed further in this article are particular subcategories of CLI—transfer, attrition, the complementarity principle, and additional theories.
The question of how languages influence one another within a bilingual individual can be addressed both with respect to mature bilinguals and with respect to bilingual language acquisition. With respect to bilingual language acquisition in children, there are several hypotheses that examine the internal representation of bilinguals' languages. Volterra and Taeschner proposed theSingle System Hypothesis,[3]which states that children start out with one single system that develops into two systems. This hypothesis proposed that bilingual children go through three stages of acquisition.
In response to theSingle System Hypothesis, a different hypothesis developed regarding the idea of two separate language systems from the very beginning. It was based on evidence of monolinguals and bilinguals reaching the same milestones at approximately the same stage of development.[4][5]For example, bilingual and monolingual children go through identical patterns of grammar development. This hypothesis, called theSeparate Development Hypothesis, held the notion that the bilinguals acquiring two languages would internalize and acquire the two languages separately. Evidence for this hypothesis comes from lack of transfer and lack of acceleration.[6]Transfer is a grammatical property of one language used in another language. Acceleration is the acquisition of a feature in language A facilitating the acquisition of a feature in language B.[7]In a study of Dutch-English bilingual children, there were no instances of transfer across elements of morphology and syntactic development, indicating that the two languages developed separately from each other.[8]In addition, in a study of French-English bilingual children, there were no instances of acceleration becausefinitenessappeared much earlier in French than it did in English, suggesting that there was no facilitation of the acquisition of finiteness in English by acquisition in French.[6]Under this hypothesis, bilingual acquisition would be equivalent to monolingual children acquiring the particular languages.[8]
In response to both the previous hypotheses mentioned, theInterdependent Development Hypothesisemerged with the idea that there is some sort of interaction between the two language systems in acquisition. It proposed that there is no single language system, but the language systems are not completely separate either. This hypothesis is also known as theCrosslinguistic Hypothesis, developed by Hulk and Müller. TheCrosslinguistic Hypothesisstates that influence will occur in bilingual acquisition in areas of particular difficulty, even for monolingual native language acquisition. It re-examined the extent of the differentiation of the language systems due to the interaction in difficult areas of bilingual acquisition.[9][10]Evidence for this hypothesis comes from delay, acceleration, and transfer in particular areas of bilingual language acquisition. Delay is the acquisition of a property of language A later than normally expected because of the acquisition of language B.[6]CLI is seen when the child has a dominant language, such as Cantonese influencing English when Cantonese is the dominant language,[11]and it will only occur in certain domains. Below are the two proposals represented in theCrosslinguistic Hypothesiswhere CLI may occur.[12]
Since the development of theCrosslinguistic Hypothesis, much research has contributed to the understanding of CLI in areas of structural overlap, directionality, dominance, interfaces, the role of input, and the role of processing and production.[1]
In linguistics,language transferis defined by behaviorist psychologists as the subconscious use of behaviors from one language in another. In the Applied Linguistics field, it is also known as exhibiting knowledge of a native or dominant language (L1) in one that is being learned (L2).[15]Transfer occurs in various language-related settings, such as acquiring a new language and when two languages or two dialects come into contact. Transfer may depend on how similar the two languages are and the intensity of the conversational setting. Transfer is more likely to happen if the two languages are in the same language family.[15]It also occurs more at the beginning stages of L2 acquisition, when the grammar and lexicon are less developed. As the speaker's L2 proficiency increases, they will experience less transfer.[16]
Jacquelyn Schachter(1992) argues that transfer is not a process at all, but that it is improperly named. She described transfer as "an unnecessary carryover from the heyday of behaviorism." In her view, transfer is more of a constraint on the L2 learners' judgments about the constructions of the acquired L2 language. Schachter stated, "It is both a facilitating and a limiting condition on the hypothesis testing process, but it is not in and of itself a process."[17]
Language transfer can be positive or negative. Transfer between similar languages often yields correct production in the new language because the systems of both languages are similar. This correct production would be considered positive transfer.[15]An example involves aSpanish speaker(L1) who is acquiring Catalan (L2). Because the languages are so similar, the speaker could rely on their knowledge of Spanish when learning certain Catalan grammatical features and pronunciation. However, the two languages are distinct enough that the speaker's knowledge of Spanish could potentially interfere with learning Catalan properly.
Negative transfer (Interference)[18]occurs when there are little to no similarities between the L1 and L2. It is when errors and avoidance are more likely to occur in the L2. The types of errors that result from this type of transfer are underproduction, overproduction, miscomprehension, and production errors, such as substitution, calques, under/overdifferentiation and hypercorrection.[19]
Underproduction, as explained by Schachter (1974),[20]is a strategy used by L2 learners to avoid producing errors when using structures, sounds, or words they are not confident about in the L2. Avoidance is a complex phenomenon and experts do not agree on its causes or exactly what it is.[21][22]For example, Hebrew speakers acquiring English may understand how the passive voice, 'a cake is made', works, but may prefer the active voice, 'I make a cake', thus avoiding the passive construction. Kellerman (1992) distinguishes three types of avoidance: (1) learners of the L2 anticipate or know that there is a problem with their construction and have only a vague idea of the target construction; (2) the target is known by the L1 speaker, but it is too difficult to use in the given circumstances, such as conversational topics in which the L1 speaker may have a deficiency; or (3) the L1 speaker has the knowledge to correctly produce and use the L2 structure but is unwilling to use it because it goes against the norms of their behavior.[21]
Overproduction refers to an L2 learner producing certain structures within the L2 with a higher frequency than native speakers of that language. In a study by Schachter and Rutherford (1979), they found that Chinese and Japanese speakers who wrote English sentences overproduced certain types of cleft constructions:
and sentences that contained There are/There is, which suggests an influence of the topic-marking function of their L1 appearing in their L2 English sentences.
French learners have been shown to over-rely onpresentational structureswhen introducing new referents into discourse, in their L2 Italian[23]and English.[24]This phenomenon has been observed even in the case of a target language where the presentational structure does not involve a relative pronoun, as Mandarin Chinese.[25]
Substitution is when the L1 speaker takes a structure or word from their native language and replaces it within the L2.[19]Odlin (1989) gives the following sentence from a Swedish learner of English.
Here the Swedish wordborthas replaced its English equivalentaway.
ACalqueis a direct "loan translation" where words are translated from the L1 literally into the L2.[26]
Overdifferentiation occurs when distinctions in the L1 are carried over to the L2.
Underdifferentiation occurs when speakers are unable to make distinctions in the L2.
Hypercorrection is a process where the L1 speaker finds forms in the L2 they consider to be important to acquire, but these speakers do not properly understand the restrictions or exceptions to formal rules that are in the L2, which results in errors, such as the example below.[18]
Also related to the idea of languages interfering with one another is the concept of language attrition.Language attrition, simply put, is language loss. Attrition can occur in an L1 or an L2. According to theInterference Hypothesis(also known as theCrosslinguistic Influence Hypothesis), language transfer could contribute to language attrition.[28]If a speaker moved to a country where their L2 is the dominant language and the speaker ceased regular use of their L1, the speaker could experience attrition in their L1. However,second language attritioncould just as easily occur if the speaker moved back to a place where their L1 was the dominant language, and the speaker did not practice their L2 frequently. Attrition could also occur if a child speaker's L1 is not the dominant language in their society. A child whose L1 is Spanish, but whose socially dominant language is English, could experience attrition of their Spanish simply because they are restricted to using that language in certain domains.[12]Much research has been done on such speakers, who are calledheritage language learners. When discussing CLI, attrition is an important concept to keep in mind because it is a direct result of two or more languages coming into contact and the dominance of one over the other resulting in language loss in a speaker.
Grosjean (1997) explained the complementarity principle as the function of language use in certain domains of life leading to language dominance within that domain for a given speaker.[29]This dominance in certain domains of life (e.g. school, home, work, etc.) can lead to apparent crosslinguistic influence within a domain. One study found that CLI occurred within the speech of the studied bilinguals, but the intensity of influence was subject to the domains of speech being used.[30]Argyri and Sorace (2007) found, much like many other researchers, that language dominance plays a role in the directionality of CLI.[31]These researchers found that English-dominant bilinguals showed an influence of English on their Greek (concerning preverbal subjects specifically, but also the language in general), but not of their Greek on their English. On the other hand, the Greek-dominant bilinguals did not show evidence of Greek influence on their English.[31]
This supports the notion that bilinguals who do not receive sufficient exposure to both languages acquire a "weaker language" and a "dominant language", and, depending on similarities or differences between the languages, effects can be present or absent as in the Greek-English example above. The effect of CLI can be seen primarily as a unidirectional occurrence: the L2 is likely to be affected by the L1, and the dominant language is more likely to affect the weaker one than the reverse. This supports the idea of individuals' susceptibility to crosslinguistic influences and the role of dominance. Take, for example, bilinguals who use different languages in different domains of their life; if a Spanish-English bilingual primarily uses Spanish at home but English at school, you would expect to see English influences when they speak about school topics in Spanish, and similarly you would expect Spanish influences on English when they speak about the home in English, because in both instances the language being used is the weaker one for that domain. This is to say that not only do you see CLI from one language to another, but, depending on the domains of use and the degree of proficiency or dominance, CLI can be a significant influence on speech production.
Some researchers believe that CLI may be a result of "contact-modified input," or linguistic input modified or affected by some other source such as another language.[32]This is to say that the environment surrounding the learning of another language can influence what is actually being learned. Take for example the fact that most L2 learners are receiving input or teachings from similarly speaking bilinguals; Hauser-Grüdl, Guerra, Witzmann, Leray, and Müller (2010) believe that the language being taught has already been influenced by the other in the teachers' minds and, therefore, the input the learner is receiving will exhibit influence.[32]These L2 learners will replicate influences because their input of the L2 is not as pure as input from a monolingual; meaning, what appears as CLI in the individual isn't really CLI of their L1 on their L2, but the effects of acquiring input that has already been modified. This theory has led some people to believe that all input for L2 learning will be affected and resemble CLI; however, this is not a well-supported theory of CLI or its function in L2 acquisition.
Other researchers believe that CLI is more than production influences, claiming that this linguistic exchange can impact other factors of a learner's identity. Jarvis and Pavlenko (2008) described such affected areas as experiences, knowledge, cognition, development, attention and language use, to name a few, as being major centers for change because of CLI.[33]These ideas suggest that crosslinguistic influence of syntactic, morphological, or phonological changes may just be the surface of one language's influence on the other, and CLI is instead a different developmental use of one's brain.[33]
CLI has been heavily studied by scholars, but much more research is still needed because of the multitude of components that make up the phenomenon. Firstly, the typology of particular language pairings needs to be researched in order to differentiate CLI from the general effects of bilingualism and bilingual acquisition.
Also, research is needed in specific areas of overlap between particular language pairings and the domains that influence and discourage CLI.[1]For example, most of the research studies involve European language combinations, and there is a significant lack of information regarding language combinations involving non-European languages, indigenous languages, and other minority languages.
More generally, an area of research to be further developed are the effects of CLI inmultilingualacquisition of three or more languages. There is limited research on this occurrence.[34]
|
https://en.wikipedia.org/wiki/Crosslinguistic_influence
|
Informal language theory, acontext-free language(CFL), also called aChomskytype-2 language, is alanguagegenerated by acontext-free grammar(CFG).
Context-free languages have many applications inprogramming languages, in particular, most arithmetic expressions are generated by context-free grammars.
Different context-free grammars can generate the same context-free language. Intrinsic properties of the language can be distinguished from extrinsic properties of a particular grammar by comparing multiple grammars that describe the language.
The set of all context-free languages is identical to the set of languages accepted bypushdown automata, which makes these languages amenable to parsing. Further, for a given CFG, there is a direct way to produce a pushdown automaton for the grammar (and thereby the corresponding language), though going the other way (producing a grammar given an automaton) is not as direct.
An example context-free language is L = {a^n b^n : n ≥ 1}, the language of all non-empty even-length strings, the entire first halves of which are a's, and the entire second halves of which are b's. L is generated by the grammar S → aSb | ab.
This language is notregular.
It is accepted by the pushdown automaton M = ({q0, q1, qf}, {a, b}, {a, z}, δ, q0, z, {qf}) where δ is defined as follows:[note 1]
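As a hedged illustration, independent of the specific transitions of M above, a recognizer for L can be written in the same pushdown style: push a symbol for every a, pop one for every b, and accept only if the stack empties exactly at the end of a non-empty string.

```python
def accepts_anbn(s: str) -> bool:
    """PDA-style check for the language { a^n b^n : n >= 1 }: push for each
    'a', pop for each 'b', and accept only if the stack empties exactly at
    the end. Illustrative sketch, not tied to the particular automaton M."""
    stack = []
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:               # an 'a' after a 'b' is not allowed
                return False
            stack.append("a")
        elif ch == "b":
            seen_b = True
            if not stack:            # more b's than a's
                return False
            stack.pop()
        else:
            return False
    return seen_b and not stack      # non-empty and balanced

assert accepts_anbn("aaabbb") and not accepts_anbn("aabbb") and not accepts_anbn("")
```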
Unambiguous CFLs are a proper subset of all CFLs: there are inherently ambiguous CFLs. An example of an inherently ambiguous CFL is the union of {a^n b^m c^m d^n | n, m > 0} with {a^n b^n c^m d^m | n, m > 0}. This set is context-free, since the union of two context-free languages is always context-free. But there is no way to unambiguously parse strings in the (non-context-free) subset {a^n b^n c^n d^n | n > 0}, which is the intersection of these two languages.[1]
The language of all properly matched parentheses is generated by the grammar S → SS | (S) | ε.
The context-free nature of the language makes it simple to parse with a pushdown automaton.
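A minimal sketch of such a check uses a single counter in place of the pushdown stack: increment on '(', decrement on ')', fail if the counter ever goes negative, and accept only if it ends at zero. This is an illustrative recognizer, not a parser producing derivation trees.

```python
def balanced(s: str) -> bool:
    """Counter-based check for the language of properly matched parentheses
    generated by S -> SS | (S) | epsilon. A single counter suffices here,
    playing the role of the pushdown stack."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:            # a closing parenthesis with no match
                return False
        else:
            return False             # only parentheses belong to this language
    return depth == 0

assert balanced("(()())()") and not balanced("())(")
```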
Determining an instance of the membership problem (i.e., given a string w, deciding whether w ∈ L(G), where L(G) is the language generated by a given grammar G) is also known as recognition. Context-free recognition for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to Boolean matrix multiplication, thus inheriting its complexity upper bound of O(n^2.3728596).[2][note 2]Conversely, Lillian Lee has shown O(n^(3−ε)) Boolean matrix multiplication to be reducible to O(n^(3−3ε)) CFG parsing, thus establishing some kind of lower bound for the latter.[3]
Practical uses of context-free languages require also to produce a derivation tree that exhibits the structure that the grammar associates with the given string. The process of producing this tree is calledparsing. Known parsers have a time complexity that is cubic in the size of the string that is parsed.
Formally, the set of all context-free languages is identical to the set of languages accepted by pushdown automata (PDA). Parser algorithms for context-free languages include theCYK algorithmandEarley's Algorithm.
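As an illustration of CYK-style recognition, the following sketch tests membership for a grammar in Chomsky normal form. The grammar used in the example (with nonterminals S, X, A, B) is an assumed CNF encoding of {a^n b^n : n ≥ 1}, chosen only for demonstration; it is not the only possible encoding.

```python
def cyk(word, start, rules):
    """Minimal CYK membership test for a grammar in Chomsky normal form.
    `rules` maps a nonterminal to a list of right-hand sides, each either a
    single terminal ["a"] or a pair of nonterminals ["A", "B"]."""
    n = len(word)
    if n == 0:
        return False
    # table[i][l-1] = set of nonterminals deriving the substring word[i:i+l]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        for lhs, rhss in rules.items():
            if [ch] in rhss:
                table[i][0].add(lhs)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for lhs, rhss in rules.items():
                    for rhs in rhss:
                        if len(rhs) == 2 and rhs[0] in left and rhs[1] in right:
                            table[i][length - 1].add(lhs)
    return start in table[0][n - 1]

# Assumed CNF grammar for a^n b^n (n >= 1): S -> A X | A B, X -> S B, A -> a, B -> b
rules = {"S": [["A", "X"], ["A", "B"]], "X": [["S", "B"]], "A": [["a"]], "B": [["b"]]}
assert cyk("aabb", "S", rules) and not cyk("aab", "S", rules)
```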
A special subclass of context-free languages are thedeterministic context-free languageswhich are defined as the set of languages accepted by adeterministic pushdown automatonand can be parsed by aLR(k) parser.[4]
See alsoparsing expression grammaras an alternative approach to grammar and parser.
The class of context-free languages isclosedunder the following operations. That is, ifLandPare context-free languages, the following languages are context-free as well:
The context-free languages are not closed under intersection. This can be seen by taking the languages A = {a^n b^n c^m | m, n ≥ 0} and B = {a^m b^n c^n | m, n ≥ 0}, which are both context-free.[note 3]Their intersection is A ∩ B = {a^n b^n c^n | n ≥ 0}, which can be shown to be non-context-free by the pumping lemma for context-free languages. As a consequence, context-free languages cannot be closed under complementation, as for any languages A and B, their intersection can be expressed by union and complement: A ∩ B = complement(complement(A) ∪ complement(B)). In particular, the context-free languages cannot be closed under difference, since the complement can be expressed by difference: complement(L) = Σ* \ L.[12]
However, if L is a context-free language and D is a regular language then both their intersection L ∩ D and their difference L \ D are context-free languages.[13]
In formal language theory, questions about regular languages are usually decidable, but ones about context-free languages are often not. It is decidable whether such a language is finite, but not whether it contains every possible string, is regular, is unambiguous, or is equivalent to a language with a different grammar.
The following problems areundecidablefor arbitrarily givencontext-free grammarsA and B:
The following problems aredecidablefor arbitrary context-free languages:
According to Hopcroft, Motwani, and Ullman (2003),[25]many of the fundamental closure and (un)decidability properties of context-free languages were shown in the 1961 paper of Bar-Hillel, Perles, and Shamir.[26]
The set {a^n b^n c^n d^n | n > 0} is a context-sensitive language, but there does not exist a context-free grammar generating this language.[27]So there exist context-sensitive languages which are not context-free. To prove that a given language is not context-free, one may employ the pumping lemma for context-free languages[26]or a number of other methods, such as Ogden's lemma or Parikh's theorem.[28]
|
https://en.wikipedia.org/wiki/Context-free_language
|
Adomain-specific language(DSL) is acomputer languagespecialized to a particular applicationdomain. This is in contrast to ageneral-purpose language(GPL), which is broadly applicable across domains. There are a wide variety of DSLs, ranging from widely used languages for common domains, such asHTMLfor web pages, down to languages used by only one or a few pieces of software, such asMUSHsoft code. DSLs can be further subdivided by the kind of language, and include domain-specificmarkuplanguages, domain-specificmodelinglanguages(more generally,specification languages), and domain-specificprogramminglanguages. Special-purpose computer languages have always existed in the computer age, but the term "domain-specific language" has become more popular due to the rise ofdomain-specific modeling. Simpler DSLs, particularly ones used by a single application, are sometimes informally calledmini-languages.
The line between general-purpose languages and domain-specific languages is not always sharp, as a language may have specialized features for a particular domain but be applicable more broadly, or conversely may in principle be capable of broad application but in practice used primarily for a specific domain. For example,Perlwas originally developed as a text-processing and glue language, for the same domain asAWKandshell scripts, but was mostly used as a general-purpose programming language later on. By contrast,PostScriptis aTuring-completelanguage, and in principle can be used for any task, but in practice is narrowly used as apage description language.
The design and use of appropriate DSLs is a key part ofdomain engineering, by using a language suitable to the domain at hand – this may consist of using an existing DSL or GPL, or developing a new DSL.Language-oriented programmingconsiders the creation of special-purpose languages for expressing problems as standard part of the problem-solving process. Creating a domain-specific language (with software to support it), rather than reusing an existing language, can be worthwhile if the language allows a particular type of problem or solution to be expressed more clearly than an existing language would allow and the type of problem in question reappears sufficiently often. Pragmatically, a DSL may be specialized to a particular problem domain, a particular problem representation technique, a particular solution technique, or other aspects of a domain.
A domain-specific language is created specifically to solve problems in a particular domain and is not intended to be able to solve problems outside of it (although that may be technically possible). In contrast, general-purpose languages are created to solve problems in many domains. The domain can also be a business area. Some examples of business areas include:
A domain-specific language is somewhere between a tiny programming language and ascripting language, and is often used in a way analogous to aprogramming library. The boundaries between these concepts are quite blurry, much like the boundary between scripting languages and general-purpose languages.
Domain-specific languages are languages (or often, declared syntaxes or grammars) with very specific goals in design and implementation. A domain-specific language can be one of a visual diagramming language, such as those created by theGeneric Eclipse Modeling System, programmatic abstractions, such as theEclipse Modeling Framework, or textual languages. For instance, the command line utilitygrephas aregular expressionsyntax which matches patterns in lines of text. Thesedutility defines a syntax for matching and replacing regular expressions. Often, these tiny languages can be used together inside ashellto perform more complex programming tasks.
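As a hedged illustration of the same kind of pattern mini-language, the regular-expression DSL can also be used from within a host language rather than through grep or sed; below it is embedded in Python via the re module. The log line, field names, and pattern are invented for the example.

```python
import re

# The regular-expression pattern is itself a tiny domain-specific language,
# here embedded in Python rather than invoked through grep or sed.
log_line = "2024-05-01 12:34:56 ERROR disk almost full"
pattern = r"^(?P<date>\d{4}-\d{2}-\d{2}) .* (?P<level>ERROR|WARN)"

match = re.match(pattern, log_line)
if match:
    print(match.group("date"), match.group("level"))   # 2024-05-01 ERROR

# A sed-like substitution expressed with the same pattern DSL:
print(re.sub(r"ERROR", "E!", log_line))
```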
The line between domain-specific languages andscripting languagesis somewhat blurred, but domain-specific languages often lack low-level functions for filesystem access, interprocess control, and other functions that characterize full-featured programming languages, scripting or otherwise. Many domain-specific languages do not compile tobyte-codeor executable code, but to various kinds of media objects: GraphViz exports toPostScript,GIF,JPEG, etc., whereCsoundcompiles to audio files, and a ray-tracing domain-specific language likePOVcompiles to graphics files.
A data definition language like SQL presents an interesting case: it can be deemed a domain-specific language because it is specific to a particular domain (in SQL's case, accessing and managing relational databases), and it is often called from another application, but SQL has more keywords and functions than many scripting languages and is often thought of as a language in its own right, perhaps because of the prevalence of database manipulation in programming and the amount of mastery required to be an expert in the language.
Further blurring this line, many domain-specific languages have exposed APIs, and can be accessed from other programming languages without breaking the flow of execution or calling a separate process, and can thus operate as programming libraries.
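A minimal illustration of a DSL exposed through an API is SQL issued from a host language. The sketch below uses Python's built-in sqlite3 module; the table and data are invented for the example, and the point is only that the SQL statements carry the domain-specific logic while the host language supplies the plumbing.

```python
import sqlite3

# SQL is the domain-specific language here; Python merely hosts it and passes
# the statements to the database engine through the sqlite3 API.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [("SICP", 1985), ("TAOCP", 1968)])
for title, year in conn.execute("SELECT title, year FROM books WHERE year < 1980"):
    print(title, year)          # TAOCP 1968
conn.close()
```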
Some domain-specific languages expand over time to include full-featured programming tools, which further complicates the question of whether a language is domain-specific or not. A good example is thefunctional languageXSLT, specifically designed for transforming one XML graph into another, which has been extended since its inception to allow (particularly in its 2.0 version) for various forms of filesystem interaction, string and date manipulation, and data typing.
Inmodel-driven engineering, many examples of domain-specific languages may be found likeOCL, a language for decorating models with assertions orQVT, a domain-specific transformation language. However, languages likeUMLare typically general-purpose modeling languages.
To summarize, an analogy might be useful: a Very Little Language is like a knife, which can be used in thousands of different ways, from cutting food to cutting down trees.[clarification needed]A domain-specific language is like an electric drill: it is a powerful tool with a wide variety of uses, but a specific context, namely, putting holes in things. A General Purpose Language is a complete workbench, with a variety of tools intended for performing a variety of tasks. Domain-specific languages should be used by programmers who, looking at their current workbench, realize they need a better drill and find that a particular domain-specific language provides exactly that.[citation needed]
DSLs implemented via an independent interpreter or compiler are known asExternal Domain Specific Languages. Well known examples include TeX or AWK. A separate category known asEmbedded (or Internal) Domain Specific Languagesare typically implemented within a host language as a library and tend to be limited to the syntax of the host language, though this depends on host language capabilities.[1]
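A small, hedged sketch of the embedded (internal) style is shown below: a toy routing "language" realised purely as chained method calls in the host language, so its syntax is constrained to whatever the host allows. The class and method names are invented for illustration and do not correspond to any real framework.

```python
class Route:
    """A toy embedded DSL for describing HTTP routes inside the host language.
    Names are invented for illustration and match no real framework."""
    def __init__(self, path):
        self.path, self.methods, self.handler = path, [], None

    def on(self, *methods):
        self.methods = list(methods)
        return self                      # returning self enables chaining

    def to(self, handler):
        self.handler = handler
        return self

# Usage reads almost like a configuration language, but it is ordinary Python:
route = Route("/users/<id>").on("GET", "PUT").to(lambda req: {"ok": True})
print(route.path, route.methods)
```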
There are several usage patterns for domain-specific languages:[2][3]
Many domain-specific languages can be used in more than one way.[citation needed]DSL code embedded in a host language may have special syntax support, such as regexes in sed, AWK, Perl or JavaScript, or may be passed as strings.
Adopting a domain-specific language approach to software engineering involves both risks and opportunities. The well-designed domain-specific language manages to find the proper balance between these.
Domain-specific languages have important design goals that contrast with those of general-purpose languages:
In programming,idiomsare methods imposed by programmers to handle common development tasks, e.g.:
General purpose programming languages rarely support such idioms, but domain-specific languages can describe them, e.g.:
Examples of domain-specific programming languages includeHTML,Logofor pencil-like drawing,VerilogandVHDLhardware description languages,MATLABandGNU Octavefor matrix programming,Mathematica,MapleandMaximaforsymbolic mathematics,Specification and Description Languagefor reactive and distributed systems,spreadsheetformulas and macros,SQLforrelational databasequeries,YACCgrammars for creatingparsers,regular expressionsfor specifyinglexers, theGeneric Eclipse Modeling Systemfor creating diagramming languages,Csoundfor sound and music synthesis, and the input languages ofGraphVizandGrGen, software packages used forgraph layoutandgraph rewriting,Hashicorp Configuration Languageused forTerraformand otherHashicorptools,Puppetalso has its ownconfiguration language.
The GML scripting language used by GameMaker Studio is a domain-specific language aimed at novice programmers, designed to make programming easy to learn. The language blends elements of several languages, including Delphi, C++, and BASIC. After compilation, most of its functions call runtime functions written in a language specific to the target platform, so their final implementation is not visible to the user. The language primarily serves to make it easy for anyone to pick it up and develop a game; because the GameMaker runtime handles the main game loop and implements the called functions, a simple game requires only a few lines of code rather than thousands.
ColdFusion's associated scripting language is another example of a domain-specific language for data-driven websites.
This scripting language is used to weave together languages and services such as Java, .NET, C++, SMS, email, email servers, HTTP, FTP, Exchange, directory services, and file systems for use in websites.
The ColdFusion Markup Language (CFML) includes a set of tags that can be used in ColdFusion pages to interact with data sources, manipulate data, and display output. CFML tag syntax is similar to HTML element syntax.
FilterMeister is a programming environment, with a programming language that is based on C, for the specific purpose of creatingPhotoshop-compatible image processing filter plug-ins; FilterMeister runs as a Photoshop plug-in itself and it can load and execute scripts or compile and export them as independent plug-ins.
Although the FilterMeister language reproduces a significant portion of the C language and function library, it contains only those features that can be used within the context of Photoshop plug-ins, and adds a number of features useful only in this specific domain.
TheTemplatefeature ofMediaWikiis an embedded domain-specific language whose fundamental purpose is to support the creation ofpage templatesand thetransclusion(inclusion by reference) of MediaWiki pages into other MediaWiki pages.
There has been much interest in domain-specific languages to improve the productivity and quality of software engineering. Domain-specific languages could possibly provide a robust set of tools for efficient software engineering, and such tools are beginning to make their way into the development of critical software systems.
The Software Cost Reduction Toolkit[6]is an example of this. The toolkit is a suite of utilities including a specification editor to create arequirements specification, a dependency graph browser to display variable dependencies, aconsistency checkerto catch missing cases inwell-formed formulasin the specification, amodel checkerand atheorem proverto check program properties against the specification, and an invariant generator that automatically constructs invariants based on the requirements.
A newer development islanguage-oriented programming, an integrated software engineeringmethodologybased mainly on creating, optimizing, and using domain-specific languages.
Complementinglanguage-oriented programming, as well as all other forms of domain-specific languages, are the class of compiler writing tools calledmetacompilers. A metacompiler is not only useful for generatingparsersandcode generatorsfor domain-specific languages, but ametacompileritself compiles a domain-specificmetalanguagespecifically designed for the domain ofmetaprogramming.
Besides parsing domain-specific languages, metacompilers are useful for generating a wide range of software engineering and analysis tools. The meta-compiler methodology is often found inprogram transformation systems.
Metacompilers that played a significant role in both computer science and the computer industry includeMeta-II,[7]and its descendantTreeMeta.[8]
Unreal and Unreal Tournament unveiled a language called UnrealScript. This allowed for rapid development of modifications compared to the competitor Quake (using the Id Tech 2 engine). The Id Tech engine used standard C code, meaning C had to be learned and properly applied, while UnrealScript was optimized for ease of use and efficiency. Similarly, more recent games have introduced their own specific languages; a common example is Lua for scripting.[citation needed]
Various business rules engines have been developed for automating policy and business rules used in both government and private industry. ILOG, Oracle Policy Automation, DTRules, Drools and others provide support for DSLs aimed at various problem domains. DTRules goes so far as to define an interface for the use of multiple DSLs within a rule set.
The purpose of business rules engines is to define a representation of business logic in as human-readable a fashion as possible. This allows both subject-matter experts and developers to work with and understand the same representation of the business logic. Most rules engines provide both an approach to simplifying the control structures for business logic (for example, using declarative rules or decision tables) and alternatives to programming syntax in favor of DSLs.
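The flavor of a decision-table style can be suggested with a small, purely illustrative Python sketch; the rule set and order data below are hypothetical and do not correspond to any particular rules engine:

```python
# Hypothetical decision table: each row pairs a condition with an outcome.
# Real engines (Drools, DTRules, etc.) use their own rule languages; this only
# illustrates the declarative style in plain Python.
DISCOUNT_RULES = [
    (lambda order: order["customer"] == "gold" and order["total"] >= 100, 0.15),
    (lambda order: order["customer"] == "gold",                           0.10),
    (lambda order: order["total"] >= 100,                                 0.05),
]

def discount_for(order):
    """Return the first matching discount, or zero if no rule applies."""
    for condition, rate in DISCOUNT_RULES:
        if condition(order):
            return rate
    return 0.0

print(discount_for({"customer": "gold", "total": 250}))   # 0.15
print(discount_for({"customer": "basic", "total": 40}))   # 0.0
```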
Statistical modelers have developed domain-specific languages such asR(an implementation of theSlanguage),Bugs,Jags, andStan. These languages provide a syntax for describing a Bayesian model and generate a method for solving it using simulation.
Object handling and services can be generated from an Interface Description Language for a target language such as JavaScript for web applications, HTML for documentation, or C++ for high-performance code. This is done by cross-language frameworks such as Apache Thrift or Google Protocol Buffers.
Gherkin is a language designed to define test cases that check the behavior of software, without specifying how that behavior is implemented. It is meant to be read and used by non-technical users, using a natural-language syntax and a line-oriented design. The tests defined with Gherkin must then be implemented in a general programming language; the steps of a Gherkin program then act as a syntax for method invocation accessible to non-developers.
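The following Python sketch only illustrates that mapping from Gherkin steps to host-language functions; the scenario text and step patterns are invented for the example, and real tools such as Cucumber or behave provide their own step-binding mechanisms:

```python
import re

# An illustrative Gherkin scenario (normally kept in a .feature file).
SCENARIO = """
Given a stack with 2 items
When I push 3 more items
Then the stack holds 5 items
"""

stack = []

# Hypothetical step definitions: each pattern maps a natural-language step to a
# Python function, roughly the way behave- or Cucumber-style tools bind steps.
STEPS = [
    (r"Given a stack with (\d+) items",   lambda n: stack.extend(range(int(n)))),
    (r"When I push (\d+) more items",     lambda n: stack.extend(range(int(n)))),
    (r"Then the stack holds (\d+) items", lambda n: print("OK" if len(stack) == int(n) else "FAIL")),
]

for line in filter(None, map(str.strip, SCENARIO.splitlines())):
    for pattern, action in STEPS:
        m = re.fullmatch(pattern, line)
        if m:
            action(*m.groups())
            break
```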
Other prominent examples of domain-specific languages include:
Some of the advantages:[2][3]
Some of the disadvantages:
|
https://en.wikipedia.org/wiki/Domain-specific_language
|
End-user development(EUD) orend-user programming(EUP) refers to activities and tools that allowend-users– people who are not professional software developers – toprogram computers. People who are not professional developers can use EUD tools to create or modifysoftware artifacts(descriptions of automated behavior) and complex data objects without significant knowledge of aprogramming language. In 2005 it was estimated (using statistics from the U.S.Bureau of Labor Statistics) that by 2012 there would be more than 55 million end-user developers in the United States, compared with fewer than 3 million professional programmers.[1]Various EUD approaches exist, and it is an activeresearch topicwithin the field ofcomputer scienceandhuman-computer interaction. Examples includenatural language programming,[2][3]spreadsheets,[4]scripting languages(particularly in an office suite or art application),visual programming, trigger-action programming andprogramming by example.
The most popular EUD tool is the spreadsheet.[4][5] Due to their unrestricted nature, spreadsheets allow relatively unsophisticated computer users to write programs that represent complex data models, while shielding them from the need to learn lower-level programming languages.[6] Because of their common use in business, spreadsheet skills are among the most beneficial skills for a graduate employee to have and are therefore among the most commonly sought after.[7] In the United States alone, there are an estimated 13 million end-user developers programming with spreadsheets.[8]
Theprogramming by example(PbE) approach reduces the need for the user to learn the abstractions of a classic programming language. The user instead introduces some examples of the desired results or operations that should be performed on the data, and the PbE system infers some abstractions corresponding to a program that produces this output, which the user can refine. New data may then be introduced to the automatically created program, and the user can correct any mistakes made by the program in order to improve its definition.Low-code development platformsare also an approach to EUD.
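A minimal, hypothetical sketch of the programming-by-example idea in Python might search a small space of candidate transformations for one that reproduces every user-supplied example; real PbE systems search far richer program spaces than this:

```python
# Given a few input/output pairs, keep the first candidate transformation that
# reproduces every example. The examples and candidates here are invented.
EXAMPLES = [("jane doe", "J. Doe"), ("alan turing", "A. Turing")]

CANDIDATES = {
    "upper":   str.upper,
    "title":   str.title,
    "initial": lambda s: s.split()[0][0].upper() + ". " + s.split()[-1].title(),
}

def infer(examples):
    """Return the name and function of the first candidate matching all examples."""
    for name, fn in CANDIDATES.items():
        if all(fn(inp) == out for inp, out in examples):
            return name, fn
    return None

name, fn = infer(EXAMPLES)
print(name, "->", fn("grace hopper"))   # initial -> G. Hopper
```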
One evolution in this area has considered the use of mobile devices to support end-user development activities. In this case, approaches designed for desktop applications cannot simply be reused as-is, given the specific characteristics of mobile devices; desktop EUD environments also lack the advantage of enabling end users to create applications opportunistically while on the move.[9]
More recently, interest in how to exploit EUD to support development of Internet of Things applications has increased. In this area trigger-action programming seems a promising approach.[10]
Lessons learned from EUD solutions can significantly influence thesoftware life cyclesforcommercial software products, in-houseintranet/extranetdevelopments andenterprise applicationdeployments.
Roughly 40 vendors now offer solutions targeted at end users designed to reduce programming efforts. These solutions do not require traditional programming and may be based around relatively narrow functionality, e.g. contract management, customer relationship management, and issue and bug tracking. Often referred to as low-code development platforms, they use web-based interactions to guide a user through developing an application in as little as 40–80 hours.[11][circular reference]
Lieberman et al. propose the following definition:[12]
End-User Development can be defined as a set of methods, techniques, and tools that allow users of software systems, who are acting as non-professional software developers, at some point to create, modify or extend a software artifact.
Ko et al. propose the following definition:[13]
End-user programming is programming to achieve the result of a program primarily for personal, rather [than] public use.
Artifacts defined by end users may be objects describing some automated behavior or control sequence, such as database requests or grammar rules,[14] which can be described with programming paradigms such as programming by demonstration, programming with examples, visual programming, or macro generation.[15] They can also be parameters that choose between alternative predefined behaviors of an application.[16] Other artifacts of end-user development may also refer to the creation of user-generated content such as annotations, which may or may not be computationally interpretable (i.e. able to be processed by associated automated functions).[17]
Examples of end-user development include the creation and modification of:
According toSutcliffe,[24]EUD essentially outsources development effort to the end user. Because there is always some effort to learn an EUD tool, the users' motivation depends on their confidence that it will empower their work, save time on the job or raise productivity. In this model, the benefits to users are initially based on marketing, demonstrations and word-of-mouth. Once the technology is put into use, experience of actual benefits becomes the key motivator.
This study defines costs as the sum of:
The first and second costs are incurred once during acquisition, whereas the third and fourth are incurred every time an application is developed. Benefits (which may be perceived or actual) are seen as:
Many end-user development activities are collaborative in nature, including collaboration between professional developers and end-user developers and collaboration among end-user developers.
Mutual development[25] is a technique where professional developers and end-user developers work together in creating software solutions. In mutual development, the professional developers often “under design” the system and provide the tools to allow the “owners of problems”[26] to create the suitable solution at use time for their needs, objectives and situational contexts.[27] Then the communication between professional developers and end-user developers can often stimulate formalizing ad hoc modifications by the end users into software artifacts, transforming end-user developed solutions into commercial product features with impacts beyond local solutions.
In this collaboration, various approaches such as the Software Shaping Workshop[28] are proposed to bridge the communication gap between professional developers and end-user developers. These approaches often provide translucency according to the social translucence model,[29] enabling everyone in the collaboration to be aware of changes made by others and to be held accountable for their actions because of that awareness.
Besides programming collaboration platforms like GitHub, which are mostly used by expert developers due to their steep learning curve, collaborations among end-user developers often take place on wiki platforms where the software artifacts created are shared. End-user development is also often used for creating automation scripts or interactive tutorials for sharing “how-to” knowledge. Examples of such applications include CoScripter[30] and HILC.[31] In such applications, users can create scripts for tasks using pseudo-natural language or via programming by demonstration. Users can choose to upload a script to a wiki-style repository of scripts; on this wiki, users can browse available scripts and extend existing scripts to support additional parameters, handle additional conditions, or operate on additional objects.
Online and offline communities of end-user developers have also been formed, where end-user developers can collaboratively solve EUD problems of shared interest or for mutual benefit. In such communities, local experts spread expertise and advice. Community members also provide social support for each other to support the collaborative construction of software.[32]
Commentators have been concerned that end users do not understand how to test and secure their applications. Warren Harrison, a professor of computer science at Portland State University, wrote:[33]
It’s simply unfathomable that we could expect security... from the vast majority of software applications out there when they’re written with little, if any, knowledge of generally accepted good practices such as specifying before coding, systematic testing, and so on.... How many X for Complete Idiots (where "X" is your favorite programming language) books are out there? I was initially amused by this trend, but recently I’ve become uneasy thinking about where these dabblers are applying their newfound knowledge.
This viewpoint assumes that all end users are equally naive when it comes to understanding software, although Pliskin and Shoval argue this is not the case, that sophisticated end users are capable of end-user development.[34]However, compared with expert programmers, end-user programmers rarely have the time or interest in systematic and disciplined software engineering activities,[35]which makes ensuring the quality of the software artifact produced by end-user development particularly challenging.
In response to this, the study ofend-user software engineeringhas emerged. It is concerned with issues beyond end-user development, whereby end users become motivated to consider issues such as reusability, security and verifiability when developing their solutions.[36]
An alternative scenario is that end users or their consultants employdeclarativetools that support rigorous business and security rules at the expense of performance and scalability; tools created using EUD will typically have worse efficiency than those created with professional programming environments. Though separating functionality from efficiency is a validseparation of concerns, it can lead to a situation where end users will complete and document therequirements analysisandprototypingof the tool, without the involvement ofbusiness analysts. Thus, users will define the functions required before these experts have a chance to consider the limitations of a specificapplicationorsoftware framework. Senior management support for such end-user initiatives depends on their attitude to existing or potentialvendor lock-in.
|
https://en.wikipedia.org/wiki/End-user_programming
|
Incomputer science,automatic programming[1]is a type ofcomputer programmingin which some mechanism generates acomputer program, to allow humanprogrammersto write the code at a higher abstraction level.
There has been little agreement on the precise definition of automatic programming, mostly because its meaning has changed over time.David Parnas, tracing the history of "automatic programming" in published research, noted that in the 1940s it described automation of the manual process of punchingpaper tape. Later it referred to translation ofhigh-level programming languageslikeFortranandALGOL. In fact, one of the earliest programs identifiable as acompilerwas calledAutocode.Parnasconcluded that "automatic programming has always been aeuphemismfor programming in a higher-level language than was then available to the programmer."[2]
Program synthesisis one type of automatic programming where a procedure is created from scratch, based on mathematical requirements.
Mildred Koss, an earlyUNIVACprogrammer, explains: "Writing machine code involved several tedious steps—breaking down a process into discrete instructions, assigning specific memory locations to all the commands, and managing the I/O buffers. After following these steps to implement mathematical routines, a sub-routine library, and sorting programs, our task was to look at the larger programming process. We needed to understand how we might reuse tested code and have the machine help in programming. As we programmed, we examined the process and tried to think of ways to abstract these steps to incorporate them into higher-level language. This led to the development of interpreters, assemblers, compilers, and generators—programs designed to operate on or produce other programs, that is,automatic programming."[3]
Generative programmingand the related termmeta-programming[4]are concepts whereby programs can be written "to manufacture software components in an automated way"[5]just as automation has improved "production of traditional commodities such as garments, automobiles, chemicals, and electronics."[6][7]
The goal is to improveprogrammerproductivity.[8]It is often related to code-reuse topics such ascomponent-based software engineering.
Source-code generationis the process of generating source code based on a description of the problem[9]or anontologicalmodel such as a template and is accomplished with aprogramming toolsuch as atemplate processoror anintegrated development environment(IDE). These tools allow the generation ofsource codethrough any of various means.
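As a simple illustration, a template processor can be approximated with Python's standard-library string.Template; the record description and the generated class below are hypothetical:

```python
from string import Template

# A minimal template-processor sketch: source code is generated by filling a
# template with values taken from a simple description of the problem.
CLASS_TEMPLATE = Template(
    "class $name:\n"
    "    def __init__(self, $args):\n"
    "$assignments\n"
)

def generate_class(name, fields):
    """Generate the source of a simple record-like class from a field list."""
    args = ", ".join(fields)
    assignments = "\n".join(f"        self.{f} = {f}" for f in fields)
    return CLASS_TEMPLATE.substitute(name=name, args=args, assignments=assignments)

print(generate_class("Point", ["x", "y"]))
```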
Modern programming languages are well supported by tools likeJson4Swift(Swift) andJson2Kotlin(Kotlin).
Programs that could generateCOBOLcode include:
These application generators supported COBOL inserts and overrides.
Amacroprocessor, such as theC preprocessor, which replaces patterns in source code according to relatively simple rules, is a simple form of source-code generator.Source-to-sourcecode generation tools also exist.[11][12]
Large language modelssuch asChatGPTare capable of generating a program's source code from a description of the program given in a natural language.[13]
Manyrelational database systemsprovide a function that will export the content of the database asSQLdata definitionqueries, which may then be executed to re-import the tables and their data, or migrate them to another RDBMS.
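SQLite's standard-library binding in Python illustrates the idea: Connection.iterdump() yields the SQL statements needed to recreate the tables and their rows elsewhere; the table and data below are examples only:

```python
import sqlite3

# Export the content of an in-memory database as SQL statements that can be
# executed to recreate the schema and data in another database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("Ada",), ("Grace",)])

for statement in conn.iterdump():   # yields CREATE TABLE / INSERT ... statements
    print(statement)
```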
Alow-code development platform(LCDP) is software that provides an environmentprogrammersuse to createapplication softwarethroughgraphical user interfacesand configuration instead of traditionalcomputer programming.
|
https://en.wikipedia.org/wiki/Automatic_programming#Source-code_generation
|
Avery high-level programming language(VHLL) is aprogramming languagewith a very high level ofabstraction, used primarily as a professional programmer productivity tool.[citation needed]
VHLLs are usuallydomain-specific languages, limited to a very specific application, purpose, or type of task, and they are oftenscripting languages(especially extension languages), controlling a specific environment. For this reason, very high-level programming languages are often referred to as goal-oriented programming languages.[citation needed]
The term VHLL was used in the 1990s for what are today more often calledhigh-level programming languages(not "very") used for scripting, such asPerl,Python,PHP,Ruby, andVisual Basic.[1][2]
|
https://en.wikipedia.org/wiki/Very_high-level_programming_language
|
Thehistory of natural language processingdescribes the advances ofnatural language processing. There is some overlap with thehistory of machine translation, thehistory of speech recognition, and thehistory of artificial intelligence.
The history of machine translation dates back to the seventeenth century, when philosophers such asLeibnizandDescartesput forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development of an actual machine.
The first patents for "translating machines" were applied for in the mid-1930s. One proposal, by Georges Artsrouni, was simply an automatic bilingual dictionary using paper tape. The other proposal, by Peter Troyanskii, a Russian, was more detailed. Troyanskii's proposal included both the bilingual dictionary and a method for dealing with grammatical roles between languages, based on Esperanto.
In 1950,Alan Turingpublished his famous article "Computing Machinery and Intelligence" which proposed what is now called theTuring testas a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably — on the basis of the conversational content alone — between the program and a real human.
In 1957,Noam Chomsky’sSyntactic Structuresrevolutionized Linguistics with 'universal grammar', a rule-based system of syntactic structures.[1]
The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years, machine translation would be a solved problem.[2] However, real progress was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted until the late 1980s, when the first statistical machine translation systems were developed.
A notably successful NLP system developed in the 1960s was SHRDLU, a natural-language system working in restricted "blocks worlds" with restricted vocabularies.
In 1969Roger Schankintroduced theconceptual dependency theoryfor natural language understanding.[3]This model, partially influenced by the work ofSydney Lamb, was extensively used by Schank's students atYale University, such as Robert Wilensky, Wendy Lehnert, andJanet Kolodner.
In 1970, William A. Woods introduced theaugmented transition network(ATN) to represent natural language input.[4]Instead ofphrase structure rulesATNs used an equivalent set offinite-state automatathat were called recursively. ATNs and their more general format called "generalized ATNs" continued to be used for a number of years. During the 1970s many programmers began to write 'conceptual ontologies', which structured real-world information into computer-understandable data. Examples are MARGIE (Schank, 1975), SAM (Cullingford, 1978), PAM (Wilensky, 1978), TaleSpin (Meehan, 1976), QUALM (Lehnert, 1977), Politics (Carbonell, 1979), and Plot Units (Lehnert 1981). During this time, manychatterbotswere written includingPARRY,Racter, andJabberwacky.
In recent years, advancements in deep learning and large language models have significantly enhanced the capabilities of natural language processing, leading to widespread applications in areas such as healthcare, customer service, and content generation.[5]
Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction ofmachine learningalgorithms for language processing. This was due both to the steady increase in computational power resulting fromMoore's lawand the gradual lessening of the dominance ofChomskyantheories of linguistics (e.g.transformational grammar), whose theoretical underpinnings discouraged the sort ofcorpus linguisticsthat underlies the machine-learning approach to language processing.[6]Some of the earliest-used machine learning algorithms, such asdecision trees, produced systems of hard if-then rules similar to existing hand-written rules. Increasingly, however, research has focused onstatistical models, which make soft,probabilisticdecisions based on attachingreal-valuedweights to the features making up the input data. Thecache language modelsupon which manyspeech recognitionsystems now rely are examples of such statistical models. Such models are generally more robust when given unfamiliar input, especially input that contains errors (as is very common for real-world data), and produce more reliable results when integrated into a larger system comprising multiple subtasks.
The emergence of statistical approaches was aided by both increase in computing power and the availability of large datasets. At that time, large multilingual corpora were starting to emerge. Notably, some were produced by theParliament of Canadaand theEuropean Unionas a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government.
Many of the notable early successes occurred in the field of machine translation. In 1993, the IBM alignment models were used for statistical machine translation.[7] Compared to previous machine translation systems, which were symbolic systems manually coded by computational linguists, these systems were statistical, which allowed them to automatically learn from large textual corpora. However, these systems do not work well in situations where only small corpora are available, so data-efficient methods continue to be an area of research and development.
In 2001, a one-billion-word large text corpus, scraped from the Internet, referred to as "very very large" at the time, was used for worddisambiguation.[8]
To take advantage of large, unlabelled datasets, algorithms were developed forunsupervisedandself-supervised learning. Generally, this task is much more difficult thansupervised learning, and typically produces less accurate results for a given amount of input data. However, there is an enormous amount of non-annotated data available (including, among other things, the entire content of theWorld Wide Web), which can often make up for the inferior results.
In 1990, theElman network, using arecurrent neural network, encoded each word in a training set as a vector, called aword embedding, and the whole vocabulary as avector database, allowing it to perform such tasks as sequence-predictions that are beyond the power of a simplemultilayer perceptron. A shortcoming of the static embeddings was that they didn't differentiate between multiple meanings ofhomonyms.[9]
|
https://en.wikipedia.org/wiki/History_of_natural_language_processing
|
Wolfram Mathematicais a software system with built-in libraries for several areas of technical computing that allowsmachine learning,statistics,symbolic computation, data manipulation, network analysis, time series analysis,NLP,optimization, plottingfunctionsand various types of data, implementation ofalgorithms, creation ofuser interfaces, and interfacing with programs written in otherprogramming languages. It was conceived byStephen Wolfram, and is developed byWolfram Researchof Champaign, Illinois.[8][9]TheWolfram Languageis the programming language used inMathematica.[10]Mathematica 1.0 was released on June 23, 1988 inChampaign, IllinoisandSanta Clara, California.[11][12][13]Mathematica's Wolfram Language is fundamentally based on Lisp; for example, the Mathematica command Most is identically equal to the Lisp command butlast. There is a substantial literature on the development of computer algebra systems (CAS).
Mathematica is split into two parts: the kernel and thefront end. The kernel interprets expressions (Wolfram Language code) and returns result expressions, which can then be displayed by the front end.
The original front end, designed byTheodore Gray[14]in 1988, consists of anotebook interfaceand allows the creation and editing ofnotebook documentsthat can contain code, plaintext, images, and graphics.[15]
Code development is also supported in a range of standard integrated development environments (IDEs), including Eclipse,[16] IntelliJ IDEA,[17] Atom, Vim, Visual Studio Code and Git. The Mathematica kernel also includes a command-line front end.[18]
Other interfaces include JMath,[19]based onGNU Readlineand WolframScript[20]which runs self-contained Mathematica programs (with arguments) from the UNIX command line.
Capabilities forhigh-performance computingwere extended with the introduction ofpacked arraysin version 4 (1999)[21]andsparse matrices(version 5, 2003),[22]and by adopting theGNU Multiple Precision Arithmetic Libraryto evaluate high-precision arithmetic.
Version 5.2 (2005) added automaticmulti-threadingwhen computations are performed onmulti-corecomputers.[23]This release included CPU-specific optimized libraries.[24]In addition Mathematica is supported by third party specialist acceleration hardware such asClearSpeed.[25]
In 2002,gridMathematicawas introduced to allow user levelparallel programmingon heterogeneous clusters and multiprocessor systems[26]and in 2008 parallel computing technology was included in all Mathematica licenses including support for grid technology such asWindows HPC Server 2008,Microsoft Compute Cluster ServerandSun Grid.
Support forCUDAandOpenCLGPUhardware was added in 2010.[27]
As of Version 14, there are 6,602 built-in functions and symbols in the Wolfram Language.[28]Stephen Wolfram announced the launch of the Wolfram Function Repository in June 2019 as a way for the public Wolfram community to contribute functionality to the Wolfram Language.[29]At the time of Stephen Wolfram's release announcement for Mathematica 13, there were 2,259 functions contributed as Resource Functions.[30]In addition to the Wolfram Function Repository, there is a Wolfram Data Repository with computable data and the Wolfram Neural Net Repository for machine learning.[31]
Wolfram Mathematica is the basis of the Combinatorica package, which adds discrete mathematics functionality in combinatorics and graph theory to the program.[32]
Communication with other applications can be done using a protocol called Wolfram Symbolic Transfer Protocol (WSTP). It allows communication between the Wolfram Mathematica kernel and the front end and provides a general interface between the kernel and other applications.[33]
Wolfram Research freely distributes a developer kit for linking applications written in the programming language C to the Mathematica kernel through WSTP. Using J/Link,[34] a Java program can ask Mathematica to perform computations. Similar functionality is achieved with .NET/Link,[35] but with .NET programs instead of Java programs.
Other languages that connect to Mathematica includeHaskell,[36]AppleScript,[37]Racket,[38]Visual Basic,[39]Python,[40][41]andClojure.[42]
Mathematica supports the generation and execution ofModelicamodels forsystems modelingand connects withWolfram System Modeler.
Links are also available to many third-party software packages and APIs.[43]
Mathematica can also capture real-time data from a variety of sources[44]and can read and write to public blockchains (Bitcoin,Ethereum, and ARK).[45]
It supports import and export of over 220 data, image, video, sound,computer-aided design(CAD),geographic information systems(GIS),[46]document, and biomedical formats.
In 2019, support was added for compiling Wolfram Language code toLLVM.[47]
Version 12.3 of the Wolfram Language added support forArduino.[48]
Mathematica is also integrated withWolfram Alpha, an onlineanswer enginethat provides additional data, some of which is kept updated in real time, for users who use Mathematica with an internet connection. Some of the data sets include astronomical, chemical, geopolitical, language, biomedical, airplane, and weather data, in addition to mathematical data (such as knots and polyhedra).[49]
BYTEin 1989 listed Mathematica as among the "Distinction" winners of the BYTE Awards, stating that it "is another breakthrough Macintosh application ... it could enable you to absorb the algebra and calculus that seemed impossible to comprehend from a textbook".[50]Mathematica has been criticized for being closed source.[51]Wolfram Research claims keeping Mathematica closed source is central to its business model and the continuity of the software.[52][53]
|
https://en.wikipedia.org/wiki/Mathematica
|
Siri(/ˈsɪri/ⓘSEER-ee,backronym: Speech Interpretation and Recognition Interface) is a digital assistant purchased, developed, and popularized byApple Inc., which is included in theiOS,iPadOS,watchOS,macOS,Apple TV,audioOS, andvisionOSoperating systems.[1][2]It uses voice queries, gesture based control, focus-tracking and anatural-language user interfaceto answer questions, make recommendations, and perform actions by delegating requests to a set ofInternetservices. With continued use, it adapts to users' individual language usages, searches, and preferences, returning individualized results.
Siri is aspin-offfrom a project developed by theSRI InternationalArtificial Intelligence Center. Itsspeech recognitionengine was provided byNuance Communications, and it uses advancedmachine learningtechnologies to function. Its original American, British, and Australianvoice actorsrecorded their respective voices around 2005, unaware of the recordings' eventual usage. Siri was released as an app for iOS in February 2010. Two months later, Apple acquired it and integrated it into theiPhone 4sat its release on 4 October 2011, removing the separate app from the iOSApp Store. Siri has since been an integral part of Apple's products, having been adapted into other hardware devices including neweriPhonemodels,iPad,iPod Touch,Mac,AirPods,Apple TV,HomePod, andApple Vision Pro.
Siri supports a wide range of user commands, including performing phone actions, checking basic information, scheduling events and reminders, handling device settings, searching the Internet, navigating areas, finding information on entertainment, and being able to engage with iOS-integrated apps. With the release ofiOS 10, in 2016, Apple opened up limited third-party access to Siri, including third-party messaging apps, as well as payments,ride-sharing, andInternet callingapps. With the release ofiOS 11, Apple updated Siri's voice and added support for follow-up questions, language translation, and additional third-party actions.iOS 17andiPadOS 17enabled users to activate Siri by simply saying "Siri", while the previous command, "Hey Siri", is still supported. Siri was upgraded to usingApple IntelligenceoniOS 18,iPadOS 18, andmacOS Sequoia, replacing the logo.
Siri's original release on iPhone 4s in October 2011 received mixed reviews. It received praise for itsvoice recognitionand contextual knowledge of user information, including calendar appointments, but was criticized for requiring stiff user commands and having a lack of flexibility. It was also criticized for lacking information on certain nearby places and for its inability to understand certainEnglish accents. In 2016 and 2017, a number of media reports said that Siri lacked innovation, particularly against new competing voice assistants. The reports concerned Siri's limited set of features, "bad" voice recognition, and undeveloped service integrations as causing trouble for Apple in the field ofartificial intelligenceand cloud-based services; the basis for the complaints reportedly due to stifled development, as caused by Apple's prioritization of userprivacyand executive power struggles within the company.[3]Its launch was also overshadowed by the death ofSteve Jobs, which occurred one day after the launch.
Siri is aspin-outfrom theStanford Research Institute's Artificial Intelligence Center and is an offshoot of the USDefense Advanced Research Projects Agency's (DARPA)-fundedCALOproject.[4]SRI Internationalused the NABC Framework to define the value proposition for Siri.[5]It was co-founded by Dag Kittlaus,Tom Gruber, andAdam Cheyer.[4]Kittlaus named Siri after a co-worker inNorway;the nameis a short form of the nameSigrid, fromOld NorseSigríðr, composed of the elementssigr"victory" andfríðr"beautiful".[6]
Siri'sspeech recognitionengine was provided byNuance Communications, a speech technology company.[7]Neither Apple nor Nuance acknowledged this for years,[8][9]until Nuance CEO Paul Ricci confirmed it at a 2013 technology conference.[7]The speech recognition system uses sophisticatedmachine learningtechniques, includingconvolutional neural networksandlong short-term memory.[10]
The initial Siri prototype was implemented using the Active platform, a joint project between the Artificial Intelligence Center ofSRI Internationaland the Vrai Group atEcole Polytechnique Fédérale de Lausanne. The Active platform was the focus of a Ph.D. thesis led byDidier Guzzoni, who joined Siri as its chief scientist.[11]
Siri was acquired by Apple Inc. in April 2010 under the direction of Steve Jobs.[12]Apple's first notion of a digital personal assistant appeared in a 1987 concept video,Knowledge Navigator.[13][14]
Siri has been updated with enhanced capabilities made possible by Apple Intelligence. InmacOS Sequoia,iOS 18, andiPadOS 18, Siri features an updated user interface, improved natural language processing, and the option to interact via text by double tapping the home bar without enabling the feature in the Accessibility menu on iOS and iPadOS. According to Apple: it adds the ability for Siri to use the context of device activities to make conversations more natural; Siri can give users device support and will have larger app support via the Siri App Intents API; Siri will be able to deliver intelligence that's tailored to the user and their on-device information using personal context. For example, a user can say, "When is Mom's flight landing?" and Siri will find the flight details and try to cross-reference them with real-time flight tracking to give an arrival time.[15][16]For more day to day interactions with Apple devices, Siri will now summarize messages (on more apps than just Messages, such as Discord and Slack). According to users, this feature can be helpful but can also be inappropriate in certain situations.
The original American voice of Siri was recorded in July 2005 bySusan Bennett, who was unaware it would eventually be used for the voice assistant.[17][18]A report fromThe Vergein September 2013 about voice actors, their work, and machine learning developments, hinted that Allison Dufty was the voice behind Siri,[19][20]but this was disproven when Dufty wrote on her website that she was "absolutely, positivelynotthe voice of Siri."[18]Citing growing pressure, Bennett revealed her role as Siri in October, and her claim was confirmed by Ed Primeau, an Americanaudio forensicsexpert.[18]Apple has never acknowledged it.[18]
The original British male voice was provided byJon Briggs, a former technology journalist and for 12 years narrated for the hitBBCquiz showThe Weakest Link.[17]After discovering he was Siri's voice by watching television, he first spoke about the role in November 2011. He acknowledged that the voice work was done "five or six years ago", and that he didn't know how the recordings would be used.[21][22]
The original Australian voice was provided byKaren Jacobsen, avoice-overartist known in Australia as theGPSgirl.[17][23]
In an interview between all three voice actors andThe Guardian, Briggs said that "the original system was recorded for a US company called Scansoft, who were then bought by Nuance. Apple simply licensed it."[23]
ForiOS 11, Apple auditioned hundreds of candidates to find new female voices, then recorded several hours of speech, including different personalities and expressions, to build a newtext-to-speechvoice based ondeep learningtechnology.[24]In February 2022, Apple added Quinn, its first gender-neutral voice as a fifth user option, to the iOS 15.4 developer release.[25]
Siri was released as a stand-alone application for the iOS operating system in February 2010, and at the time, the developers were also intending to release Siri for Android and BlackBerry devices.[26] Two months later, Apple acquired Siri.[27][28][29] On October 4, 2011, Apple introduced the iPhone 4S with a beta version of Siri.[30][31] After the announcement, Apple removed the existing standalone Siri app from the App Store.[32] TechCrunch wrote that, though the Siri app supported the iPhone 4, its removal from the App Store might also have had a financial aspect for the company, in providing an incentive for customers to upgrade devices.[32] Third-party developer Steven Troughton-Smith, however, managed to port Siri to the iPhone 4, though without being able to communicate with Apple's servers.[33] A few days later, Troughton-Smith, working with an anonymous person nicknamed "Chpwn", managed to fully hack Siri, enabling its full functionality on iPhone 4 and iPod Touch devices.[34] Additionally, developers were also able to successfully create and distribute legal ports of Siri to any device capable of running iOS 5, though a proxy server was required for Apple server interaction.[35]
Over the years, Apple has expanded the line of officially supported products, including newer iPhone models,[36]as well as iPad support in June 2012,[37]iPod Touch support in September 2012,[38]Apple TV support, and the stand-aloneSiri Remote, in September 2015,[39]Mac and AirPods support in September 2016,[40][41]and HomePod support in February 2018.[42][43]
Apple offers a wide range of voice commands to interact with Siri, including, but not limited to:[44]
Siri also offers numerous pre-programmed responses to amusing questions. Such questions include "What is the meaning of life?" to which Siri may reply "All evidence to date suggests it's chocolate"; "Why am I here?", to which it may reply "I don't know. Frankly, I've wondered that myself"; and "Will you marry me?", to which it may respond with "MyEnd User Licensing Agreementdoes not covermarriage. My apologies."[48][49]
Initially limited to female voices, Apple announced in June 2013 that Siri would feature a gender option, adding a male voice counterpart.[50]
In September 2014, Apple added the ability for users to speak "Hey Siri" to enable the assistant without the requirement of physically handling the device.[51]
In September 2015, the "Hey Siri" feature was updated to include individualized voice recognition, a presumed effort to prevent non-owner activation.[52][53]
With the announcement ofiOS 10in June 2016, Apple opened up limited third-party developer access to Siri through a dedicatedapplication programming interface(API). The API restricts the usage of Siri to engaging with third-party messaging apps, payment apps, ride-sharing apps, and Internet calling apps.[54][55]
In iOS 11, Siri is able to handle follow-up questions, supports language translation, and opens up to more third-party actions, including task management.[56][57]Additionally, users are able to type to Siri,[58]and a new, privacy-minded "on-device learning" technique improves Siri's suggestions by privately analyzing personal usage of different iOS applications.[59]
iOS 17 and iPadOS 17 allows users to simply say "Siri" to initiate Siri, and the virtual assistant now supports back to back requests, allowing users to issue multiple requests and conversations without reactivating it.[60]In the public beta versions of iOS 17, iPadOS 17, andmacOS Sonoma, Apple added support for bilingual queries to Siri.[61]
iOS 18,iPadOS 18andMacOS 15 Sequoiabroughtartificial intelligence, integrated withChatGPT, to Siri.[62]Apple calls this "Apple Intelligence".[63]
Siri received mixed reviews during its beta release as an integrated part of theiPhone 4Sin October 2011.
MG Siegler of TechCrunch wrote that Siri was "great" and understood much more, but had "no API that any developer can use".[64] Writing for The New York Times, David Pogue also praised Siri's ability to understand context.[65] Jacqui Cheng of Ars Technica wrote that Apple's claims of what Siri could do were bold, and the early demos "even bolder", though the feature was still in beta.
While praising its ability to "decipher our casual language" and deliver "very specific and accurate result," sometimes even providing additional information, Cheng noted and criticized its restrictions, particularly when the language moved away from "stiffer commands" into more human interactions. One example included the phrase "Send a text to Jason, Clint, Sam, and Lee saying we're having dinner at Silver Cloud," which Siri interpreted as sending a message to Jason only, containing the text "Clint Sam and Lee saying we're having dinner at Silver Cloud." She also noted a lack of proper editability.[66]
Google's executive chairman and former chief,Eric Schmidt, conceded that Siri could pose a competitive threat to the company's core search business.[67]
Siri was criticized bypro-abortion rights organizations, including theAmerican Civil Liberties Union(ACLU) andNARAL Pro-Choice America, after users found that Siri could not provide information about the location of birth control or abortion providers nearby, sometimes directing users tocrisis pregnancy centersinstead.[68][69][70]
Natalie Kerris, a spokeswoman for Apple, told The New York Times that "These are not intentional omissions…".[71] In January 2016, Fast Company reported that, in then-recent months, Siri had begun to confuse the word "abortion" with "adoption", citing "health experts" who stated that the situation had "gotten worse." However, at the time of Fast Company's report, the situation had changed slightly, with Siri offering "a more comprehensive list of Planned Parenthood facilities", although "Adoption clinics continue to pop up, but near the bottom of the list."[72][73]
Siri has also not been well received by some English speakers with distinctive accents, includingScottish[74]and Americans fromBostonor theSouth.[75]
In March 2012, Frank M. Fazio filed a class action lawsuit against Apple on behalf of the people who bought the iPhone 4S and felt misled about the capabilities of Siri, alleging its failure to function as depicted in Apple's Siri commercials. Fazio filed the lawsuit in California and claimed that the iPhone 4S was merely a "more expensive iPhone 4" if Siri fails to function as advertised.[76][77]On July 22, 2013, U.S. District Judge Claudia Wilken in San Francisco dismissed the suit but said the plaintiffs could amend at a later time. The reason given for dismissal was that plaintiffs did not sufficiently document enough misrepresentations by Apple for the trial to proceed.[78]
In June 2016,The Verge's Sean O'Kane wrote about the then-upcoming major iOS 10 updates, with a headline stating "Siri's big upgrades won't matter if it can't understand its users":
What Apple didn't talk about was solving Siri's biggest, most basic flaws: it's still not very good at voice recognition, and when it gets it right, the results are often clunky. And these problems look even worse when you consider that Apple now has full-fledged competitors in this space:Amazon'sAlexa,Microsoft'sCortana, and Google'sAssistant.[79]
Also writing forThe Verge,Walt Mossberghad previously questioned Apple's efforts in cloud-based services, writing:[80]
...perhaps the biggest disappointment among Apple's cloud-based services is the one it needs most today, right now: Siri. Before Apple bought it, Siri was on the road to being a robust digital assistant that could do many things, and integrate with many services—even though it was being built by a startup with limited funds and people. After Apple bought Siri, the giant company seemed to treat it as a backwater, restricting it to doing only a few, slowly increasing number of tasks, like telling you the weather, sports scores, movie and restaurant listings, and controlling the device's functions. Its unhappy founders have left Apple to build a new AI service calledViv. And, on too many occasions, Siri either gets things wrong, doesn't know the answer, or can't verbalize it. Instead, it shows you a web search result, even when you're not in a position to read it.
In October 2016,Bloombergreported that Apple had plans to unify the teams behind its various cloud-based services, including a single campus and reorganized cloud computing resources aimed at improving the processing of Siri's queries,[81]although another report fromThe Verge, in June 2017, once again called Siri's voice recognition "bad."[82]
In June 2017,The Wall Street Journalpublished an extensive report on the lack of innovation with Siri following competitors' advancement in the field of voice assistants. Noting that Apple workers' anxiety levels "went up a notch" on the announcement of Amazon's Alexa, theJournalwrote: "Today, Apple is playing catch-up in a product category it invented, increasing worries about whether the technology giant has lost some of its innovation edge." The report gave the primary causes being Apple's prioritization of user privacy, including randomly-tagged six-month Siri searches, whereas Google and Amazon keep data until actively discarded by the user,[clarification needed]and executive power struggles within Apple. Apple did not comment on the report, whileEddy Cuesaid: "Apple often uses generic data rather than user data to train its systems and has the ability to improve Siri's performance for individual users with information kept on their iPhones."[3][83]
In July 2019, a then-anonymous whistleblower and former Apple contractor Thomas le Bonniec said that Siri regularly records some of its users' conversations even when it was not activated. The recordings are sent to Apple contractors grading Siri's responses on a variety of factors. Among other things, the contractors regularly hear private conversations between doctors and patients, business and drug deals, and couples having sex. Apple did not disclose this in its privacy documentation and did not provide a way for its users to opt-in or out.[84]
In August 2019, Apple apologized, halted the Siri grading program, and said that it plans to resume "later this fall when software updates are released to [its] users".[85]The company also announced "it would no longer listen to Siri recordings without your permission".[86]iOS 13.2, released in October 2019, introduced the ability to opt out of the grading program and to delete all the voice recordings that Apple has stored on its servers.[87]Users were given the choice of whether their audio data was received by Apple or not, with the ability to change their decision as often as they like. It was then made an opt-in program.
In May 2020, Thomas le Bonniec revealed himself as the whistleblower and sent a letter to European data protection regulators, calling on them to investigate Apple's "past and present" use of Siri recordings. He argued that, even though Apple has apologized, it has never faced the consequences for its years-long grading program.[88][89]
In December 2024, Apple agreed to a $95 million class-action settlement, compensating users of Siri-enabled devices from the past ten years. Additionally, Apple must confirm the deletion of Siri recordings made before 2019 (when the feature became opt-in) and issue new guidance on how data is collected and how users can participate in efforts to improve Siri.[90]
In May 2025, the claim deadline for Apple's $95 million Siri settlement was set for July 2, 2025, with a final hearing scheduled for August 1, 2025.[91]
Apple has introduced various accessibility features aimed at making its devices more inclusive for individuals with disabilities. The company provides users the opportunity to share feedback on accessibility features through email.[92]Some of the new functionalities include live speech, personal voice, Siri's atypical speech pattern recognition, and much more.[93]
Accessibility features:
Siri, like many AI systems, can perpetuate gender and racial biases through its design and functionality. According to an article from The Conversation, Siri "reinforces the role of women as secondary and submissive to men" because the default is a soft, female voice.[98] Although Apple now offers a larger variety of voices with different accents and languages, this original default perpetuates the idea of women servicing men. The article also explains how different settings of Siri's voice result in different responses, with the female voice programmed with more flirtatious statements than the male voice. Additionally, Siri may misinterpret certain accents or dialects, particularly those spoken by people from marginalized racial or ethnic backgrounds, making it less accessible to these groups. In an article from Scientific American, Claudia Lloreda explains that non-native English speakers have to "adapt our way of speaking to interact with speech-recognition technologies."[99] Furthermore, because it learns repeatedly from a large user base, Siri may unintentionally reproduce a Western perspective, limiting representation and furthering biases in everyday interactions. Despite these issues, Siri does provide several benefits, especially for people with disabilities that might otherwise limit their ability to use technology and access the internet.
The iOS version of Siri ships with a vulgar content filter; however, it is disabled by default and must be enabled by the user manually.[100]
In 2018,Ars Technicareported a new glitch that could be exploited by a user requesting the definition of "mother" be read out loud. Siri would issue a response and ask the user if they would like to hear the next definition; when the user replies with "yes," Siri would mention "mother" as being short for "motherfucker."[101]This resulted in multipleYouTubevideos featuring the responses and/or how to trigger them. Apple fixed the issue silently. The content is picked up from third-party sources such as theOxford English Dictionaryand not a supplied message from the corporation.[102]
Siri provided the voice of 'Puter in The Lego Batman Movie.[103]
|
https://en.wikipedia.org/wiki/Siri_(software)
|
WolframAlpha(/ˈwʊlf.rəm-/WUULf-rəm-) is ananswer enginedeveloped byWolfram Research.[1]It is offered as anonline servicethat answers factual queries by computing answers from externally sourced data.[2][3]
Launch preparations for WolframAlpha began on May 15, 2009, at 7:00 pmCDTwith a live broadcast onJustin.tv. The plan was to publicly launch the service a few hours later.[4][5]However, there were issues due to extreme load. The service officially launched on May 18, 2009, receiving mixed reviews.[6][7][8]
The engine is based on Wolfram's earlier productWolfram Mathematica, a technical computing platform.[4]The coding is written inWolfram Language, a general multi-paradigm[further explanation needed]programming language, and implemented inMathematica.[9]WolframAlpha gathers data from academic and commercial websites such as theCIA'sThe World Factbook, theUnited States Geological Survey, a Cornell University Library publication calledAll About Birds,Chambers Biographical Dictionary,Dow Jones, theCatalogue of Life,[1]CrunchBase,[10]Best Buy,[11]and theFAAto answer queries.[12]
On February 8, 2012, WolframAlpha Pro was released,[13]offering users additional features for a monthly subscription fee.[13][14]
Users submit queries and computation requests via a text field. WolframAlpha then computes answers and relevant visualizations from aknowledge baseofcurated,structured datathat come from other sites and books. It can respond to particularly phrasednatural languagefact-based questions. It displays its "Input interpretation" of such a question, using standardized phrases. It can also parse mathematical symbolism and respond with numerical and statistical results.[citation needed]
WolframAlpha was used to power some searches in the Microsoft Bing and DuckDuckGo search engines but is no longer used to provide search results.[15][16] For factual question answering, WolframAlpha was used by Apple's Siri in October 2011 and Amazon Alexa in December 2018 for math and science queries.[17][18] Users noticed that the Wolfram integration for Siri was changed in June 2013 to use Bing to query certain results on iOS 7.[19] Starting with iOS 17, it was reported that Wolfram for Siri no longer answers mathematical equations, instead defaulting to web search queries without explanation.[20][21] WolframAlpha data types[clarification needed], sets of curated information and formulas that assist in creating, categorizing, and filling in spreadsheet information, became available in July 2020 within Microsoft Excel.[22] The Microsoft-Wolfram partnership ended nearly two years later, in 2022, in favor of Microsoft Power Query data types.[23] WolframAlpha functionality in Microsoft Excel ended in June 2023.[24][25]
|
https://en.wikipedia.org/wiki/Wolfram_Alpha
|
A conversational user interface (CUI) is a user interface for computers that emulates a conversation with a real human.[1] Historically, computers have relied on text-based user interfaces and graphical user interfaces (GUIs) (such as the user pressing a "back" button) to translate the user's desired action into commands the computer understands. While these are effective mechanisms for completing computing actions, there is a learning curve for the user associated with GUIs.[2] CUIs instead give the user the opportunity to communicate with the computer in their natural language rather than through syntax-specific commands.[3]
To do this, conversational interfaces usenatural language processing(NLP) to allow computers to understand, analyze, and create meaning from human language.[4]Unlike word processors, NLP considers the structure of human language (i.e., words make phrases; phrases make sentences which convey the idea or intent theuseris trying to invoke). The ambiguous nature of human language makes it difficult for a machine to always correctly interpret the user's requests, which is why we have seen a shift towardnatural-language understanding(NLU).[5]
NLU allows for sentiment analysis and conversational searches, which allow a line of questioning to continue with the context carried throughout the conversation. NLU allows conversational interfaces to handle unstructured inputs that the human brain is able to understand, such as spelling mistakes or follow-up questions.[6] For example, through leveraging NLU, a user could first ask for the population of the United States. If the user then asks "Who is the president?", the search will carry forward the context of the United States and provide the appropriate response.
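As a rough illustration of the context-carrying behaviour described above, the toy Python sketch below remembers the last entity mentioned and uses it to resolve an underspecified follow-up question. It is not a real NLU system; the fact table, entity matching, and intent detection are all invented placeholders.

```python
# A toy illustration (not a real NLU system) of carrying context across turns:
# the entity from the first question is remembered and reused to resolve the
# underspecified follow-up. Fact table and matching rules are invented.
FACTS = {
    ("United States", "population"): "about 335 million",       # illustrative figure
    ("United States", "president"): "(the current incumbent)",  # placeholder value
}

class TinyDialogue:
    def __init__(self):
        self.context_entity = None  # the last entity the user mentioned

    def ask(self, utterance: str) -> str:
        text = utterance.lower()
        # Naive "entity recognition": look for a known entity in the utterance.
        for entity, _attribute in FACTS:
            if entity.lower() in text:
                self.context_entity = entity
        # Naive "intent detection": look for a known attribute keyword.
        attribute = None
        if "population" in text:
            attribute = "population"
        elif "president" in text:
            attribute = "president"
        if attribute and self.context_entity:
            answer = FACTS.get((self.context_entity, attribute), "unknown")
            return f"The {attribute} of {self.context_entity} is {answer}."
        return "Sorry, I don't understand."

d = TinyDialogue()
print(d.ask("What is the population of the United States?"))
print(d.ask("Who is the president?"))  # "United States" carries over from the first turn
```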
Conversational interfaces have emerged as a tool for businesses to efficiently provide consumers with relevant information in a cost-effective manner.[7] CUIs provide ease of access to relevant, contextual information for the end user without the complexity and learning curve typically associated with technology.
While there are a variety of interface brands, to date there are two main categories of conversational interfaces: voice assistants and chatbots.[8]
A voice user interface allows a user to complete an action by speaking a command. Introduced in October 2011, Apple's Siri was one of the first voice assistants to be widely adopted. Siri allowed iPhone users to get information and complete actions on their device simply by asking Siri. In later years, Siri was integrated with Apple's HomePod devices.
Further development has continued since Siri's introduction to include home-based devices such as Google Home or Amazon Echo (powered by Alexa) that allow users to "connect" their homes through a series of smart devices, expanding the range of tangible actions they can complete. Users can now turn off the lights, set reminders and call their friends, all with a verbal cue.
These conversational interfaces that use a voice assistant have become a popular way for businesses to interact with their customers, as the interface removes some friction from the customer journey. Customers no longer need to remember a long list of usernames and passwords for their various accounts; they simply link each account to Google or Amazon once, and they no longer need to wait on hold for an hour to ask a simple question.
A chatbot is a web- or mobile-based interface that allows the user to ask questions and retrieve information. This information can be generic in nature, such as the Google Assistant chat window that allows for internet searches, or it can relate to a specific brand or service, allowing the user to get information about the status of their various accounts. The backend systems work in the same manner as a voice assistant, with the front end using a visual interface to convey information. This visual interface can be beneficial for companies that need to do more complex business transactions with customers, as instructions, deep links and graphics can all be used to convey an answer. The complexity with which a chatbot answers questions depends on the development of the back end. Chatbots with hard-coded answers have a smaller base of information and a correspondingly narrower set of skills. Chatbots that leverage machine learning will continue to grow and develop larger content bases for more complex responses.[citation needed][9]
More frequently, companies are leveraging chatbots as a way to offload simple questions and transactions from human agents.[10] These chatbots provide the option to assist a user, but can then transfer the customer directly to a live agent within the same chat window if the conversation becomes too complex; this capability is often called human handover, and chatbot platforms such as BotPenguin offer it.[11] Chatbots have evolved and come a long way since their inception. Modern-day chatbots have personas which make them sound more human-like.
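A minimal sketch of the pattern described above, a chatbot with hard-coded answers that escalates to a human when nothing matches, might look like the following Python fragment. The rules, wording, and handover hook are illustrative assumptions, not any particular vendor's implementation.

```python
# A minimal rule-based chatbot with a human-handover fallback.
# Patterns, canned answers, and the handover hook are illustrative only.
import re

RULES = [
    (re.compile(r"\b(opening hours|open)\b", re.I),
     "We are open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(order status|where is my order)\b", re.I),
     "You can track your order under 'My account' > 'Orders'."),
    (re.compile(r"\b(reset|forgot) password\b", re.I),
     "Use the 'Forgot password' link on the login page."),
]

def handover_to_agent(message: str) -> str:
    # In a real deployment this would open a ticket or move the chat to a
    # live-agent queue; here it simply signals the transfer.
    return "Let me connect you to a human agent who can help with that."

def reply(message: str) -> str:
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return handover_to_agent(message)  # nothing matched: escalate

print(reply("When are you open?"))
print(reply("I want to dispute a charge on my invoice"))  # triggers handover
```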
|
https://en.wikipedia.org/wiki/Conversational_user_interface
|
Incomputing, anatural user interface(NUI) ornatural interfaceis auser interfacethat is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples includevoice assistants, such as Alexa and Siri, touch and multitouch interactions on today's mobile phones and tablets, but also touch interfaces invisibly integrated into the textiles of furniture.[1]
An NUI relies on a user being able to quickly transition from novice to expert. While the interface requires learning, that learning is eased through design which gives the user the feeling that they are instantly and continuously successful. Thus, "natural" refers to a goal in the user experience – that the interaction comes naturally, while interacting with the technology, rather than that the interface itself is natural. This is contrasted with the idea of anintuitive interface, referring to one that can be used without previous learning.
Several design strategies have been proposed which have met this goal to varying degrees of success. One strategy is the use of a "reality user interface" ("RUI"),[2] also known as "reality-based interface" (RBI) methods. One example of an RUI strategy is to use a wearable computer to render real-world objects "clickable", i.e. so that the wearer can click on any everyday object to make it function as a hyperlink, thus merging cyberspace and the real world. Because the term "natural" is evocative of the "natural world", RBIs are often confused with NUIs, when in fact they are merely one means of achieving them.
One example of a strategy for designing a NUI not based in RBI is the strict limiting of functionality and customization, so that users have very little to learn in the operation of a device. Provided that the default capabilities match the user's goals, the interface is effortless to use. This is an overarching design strategy in Apple's iOS.[citation needed]Because this design is coincident with a direct-touch display, non-designers commonly misattribute the effortlessness of interacting with the device to that multi-touch display, and not to the design of the software where it actually resides.
In the 1990s,Steve Manndeveloped a number of user-interface strategies using natural interaction with the real world as an alternative to acommand-line interface(CLI) orgraphical user interface(GUI). Mann referred to this work as "natural user interfaces", "Direct User Interfaces", and "metaphor-free computing".[3]Mann'sEyeTaptechnology typically embodies an example of a natural user interface. Mann's use of the word "Natural" refers to both action that comes naturally to human users, as well as the use of nature itself, i.e. physics (Natural Philosophy), and the natural environment. A good example of an NUI in both these senses is thehydraulophone, especially when it is used as an input device, in which touching a natural element (water) becomes a way of inputting data. More generally, a class of musical instruments called "physiphones", so-named from the Greek words "physika", "physikos" (nature) and "phone" (sound) have also been proposed as "Nature-based user interfaces".[4]
In 2006, Christian Moore established anopen researchcommunity with the goal to expand discussion and development related to NUI technologies.[5]In a 2008 conference presentation "Predicting the Past," August de los Reyes, a Principal User Experience Director of Surface Computing at Microsoft described the NUI as the next evolutionary phase following the shift from the CLI to the GUI.[6]Of course, this too is an over-simplification, since NUIs necessarily include visual elements – and thus, graphical user interfaces. A more accurate description of this concept would be to describe it as a transition fromWIMPto NUI.
In the CLI, users had to learn an artificial means of input, the keyboard, and a series of codified commands that had a limited range of responses and a strict syntax.
Then, when the mouse enabled the GUI, users could more easily learn the mouse movements and actions, and were able to explore the interface much more. The GUI relied on metaphors for interacting with on-screen content or objects. The 'desktop' and 'drag', for example, are metaphors for a visual interface that was ultimately translated back into the strict codified language of the computer.
An example of the misunderstanding of the term NUI was demonstrated at theConsumer Electronics Showin 2010. "Now a new wave of products is poised to bring natural user interfaces, as these methods of controlling electronics devices are called, to an even broader audience."[7]
In 2010, Microsoft's Bill Buxton reiterated the importance of the NUI within Microsoft Corporation with a video discussing technologies which could be used in creating a NUI, and its future potential.[8]
In 2010, Daniel Wigdor and Dennis Wixon provided an operationalization of building natural user interfaces in their book.[9]In it, they carefully distinguish between natural user interfaces, the technologies used to achieve them, and reality-based UI.
WhenBill Buxtonwas asked about the iPhone's interface, he responded "Multi-touch technologies have a long history. To put it in perspective, the original work undertaken by my team was done in 1984, the same year that the first Macintosh computer was released, and we were not the first."[10]
Multi-Touch is a technology which could enable a natural user interface. However, most UI toolkits used to construct interfaces executed with such technology are traditional GUIs.
One example is the work done by Jefferson Han on multi-touch interfaces. In a demonstration at TED in 2006, he showed a variety of means of interacting with on-screen content using both direct manipulations and gestures. For example, to shape an on-screen glutinous mass, Jeff literally 'pinches' and prods and pokes it with his fingers. In a GUI interface for a design application, by contrast, a user would use the metaphor of 'tools' to do this, for example selecting a prod tool, or selecting two parts of the mass to which they then wanted to apply a 'pinch' action. Han showed that user interaction could be much more intuitive by doing away with the interaction devices that we are used to and replacing them with a screen that was capable of detecting a much wider range of human actions and gestures. Of course, this allows only for a very limited set of interactions which map neatly onto physical manipulation (RBI). Extending the capabilities of the software beyond physical actions requires significantly more design work.
Microsoft PixelSensetakes similar ideas on how users interact with content, but adds in the ability for the device to optically recognize objects placed on top of it. In this way, users can trigger actions on the computer through the same gestures and motions as Jeff Han's touchscreen allowed, but also objects become a part of the control mechanisms. So for example, when you place a wine glass on the table, the computer recognizes it as such and displays content associated with that wine glass. Placing a wine glass on a table maps well onto actions taken with wine glasses and other tables, and thus maps well onto reality-based interfaces. Thus, it could be seen as an entrée to a NUI experience.
"3D Immersive Touch" is defined as the direct manipulation of 3D virtual environment objects using single or multi-touch surface hardware in multi-user 3D virtual environments. Coined first in 2007 to describe and define the 3D natural user interface learning principles associated with Edusim. Immersive Touch natural user interface now appears to be taking on a broader focus and meaning with the broader adaption of surface and touch driven hardware such as the iPhone, iPod touch, iPad, and a growing list of other hardware. Apple also seems to be taking a keen interest in “Immersive Touch” 3D natural user interfaces over the past few years. This work builds atop the broad academic base which has studied 3D manipulation in virtual reality environments.
Kinect is a motion sensing input device by Microsoft for the Xbox 360 video game console and Windows PCs that uses spatial gestures for interaction instead of a game controller. According to Microsoft's page, Kinect is designed for "a revolutionary new way to play: no controller required."[11] Again, because Kinect allows the sensing of the physical world, it shows potential for RBI designs, and thus potentially also for NUI.
|
https://en.wikipedia.org/wiki/Natural_user_interface
|
Avoice-user interface(VUI) enables spoken human interaction with computers, usingspeech recognitionto understandspoken commandsandanswer questions, and typicallytext to speechto play a reply. Avoice command deviceis a device controlled with a voice user interface.
Voice user interfaces have been added toautomobiles,home automationsystems, computeroperating systems,home applianceslikewashing machinesandmicrowave ovens, and televisionremote controls. They are the primary way of interacting withvirtual assistantsonsmartphonesandsmart speakers. Olderautomated attendants(which route phone calls to the correct extension) andinteractive voice responsesystems (which conduct more complicated transactions over the phone) can respond to the pressing of keypad buttons viaDTMFtones, but those with a full voice user interface allow callers to speak requests and responses without having to press any buttons.
Newer voice command devices are speaker-independent, so they can respond to multiple voices, regardless of accent or dialectal influences. They are also capable of responding to several commands at once, separating vocal messages, and providing appropriatefeedback, accurately imitating a natural conversation.[1]
A VUI is the interface to any speech application. Not long ago, controlling a machine by simply talking to it was possible only in science fiction, and the area was considered the realm of artificial intelligence. However, advances in technologies like text-to-speech, speech-to-text, natural language processing, and cloud services have contributed to the mass adoption of these types of interfaces. VUIs have become more commonplace, and people are taking advantage of the value that these hands-free, eyes-free interfaces provide in many situations.
VUIs need to respond to input reliably, or they will be rejected and often ridiculed by their users. Designing a good VUI requires interdisciplinary talents ofcomputer science,linguisticsand human factorspsychology– all of which are skills that are expensive and hard to come by. Even with advanced development tools, constructing an effective VUI requires an in-depth understanding of both the tasks to be performed, as well as the target audience that will use the final system. The closer the VUI matches the user's mental model of the task, the easier it will be to use with little or no training, resulting in both higher efficiency and higher user satisfaction.
A VUI designed for the general public should emphasize ease of use and provide a lot of help and guidance for first-time callers. In contrast, a VUI designed for a small group ofpower users(including field service workers), should focus more on productivity and less on help and guidance. Such applications should streamline the call flows, minimize prompts, eliminate unnecessary iterations and allow elaborate "mixed initiativedialogs", which enable callers to enter several pieces of information in a single utterance and in any order or combination. In short, speech applications have to be carefully crafted for the specific business process that is being automated.
Not all business processes lend themselves equally well to speech automation. In general, the more complex the inquiries and transactions are, the more challenging they will be to automate, and the more likely they will be to fail with the general public. In some scenarios, automation is simply not applicable, so live agent assistance is the only option. A legal advice hotline, for example, would be very difficult to automate. Conversely, speech is well suited to handling quick and routine transactions, like changing the status of a work order, completing a time or expense entry, or transferring funds between accounts.
Early applications for VUI included voice-activateddialingof phones, either directly or through a (typicallyBluetooth) headset or vehicle audio system.
In 2007, a CNN business article reported that voice command was over a billion-dollar industry and that companies like Google and Apple were trying to create speech recognition features.[2] In the years since the article was published, the world has witnessed a variety of voice command devices. Additionally, Google created a speech recognition engine called Pico TTS and Apple released Siri. Voice command devices are becoming more widely available, and innovative ways of using the human voice are always being created. For example, Business Week suggested that the future remote control is going to be the human voice. Currently, Xbox Live allows such features, and Jobs hinted at such a feature on the new Apple TV.[3]
Both Apple Mac and Windows PC provide built-in speech recognition features for their latest operating systems.
Two Microsoft operating systems,Windows 7andWindows Vista, provide speech recognition capabilities. Microsoft integrated voice commands into their operating systems to provide a mechanism for people who want to limit their use of the mouse and keyboard, but still want to maintain or increase their overall productivity.[4]
With Windows Vista voice control, a user may dictate documents and emails in mainstream applications, start and switch between applications, control the operating system, format documents, save documents, edit files, efficiently correct errors, and fill out forms on theWeb. The speech recognition software learns automatically every time a user uses it, and speech recognition is available in English (U.S.), English (U.K.), German (Germany), French (France), Spanish (Spain), Japanese, Chinese (Traditional), and Chinese (Simplified). In addition, the software comes with an interactive tutorial, which can be used to train both the user and the speech recognition engine.[5]
In addition to all the features provided in Windows Vista, Windows 7 provides a wizard for setting up the microphone and a tutorial on how to use the feature.[6]
AllMac OS Xcomputers come pre-installed with the speech recognition software. The software is user-independent, and it allows for a user to, "navigate menus and enter keyboard shortcuts; speak checkbox names, radio button names, list items, and button names; and open, close, control, and switch among applications."[7]However, the Apple website recommends a user buy a commercial product calledDictate.[7]
If a user is not satisfied with the built-in speech recognition software, or does not have built-in speech recognition software for their OS, then they may experiment with a commercial product such as Braina Pro or Dragon NaturallySpeaking for Windows PCs,[8] and Dictate, the name of the same software for Mac OS.[9]
Any mobile device running Android OS, Microsoft Windows Phone, iOS 9 or later, or Blackberry OS provides voice command capabilities. In addition to the built-in speech recognition software for each mobile phone's operating system, a user may download third party voice command applications from each operating system's application store:Apple App store,Google Play,Windows Phone Marketplace(initiallyWindows Marketplace for Mobile), orBlackBerry App World.
Google has developed an open source operating system calledAndroid, which allows a user to perform voice commands such as: send text messages, listen to music, get directions, call businesses, call contacts, send email, view a map, go to websites, write a note, and search Google.[10]The speech recognition software is available for all devices sinceAndroid 2.2 "Froyo", but the settings must be set to English.[10]Google allows for the user to change the language, and the user is prompted when he or she first uses the speech recognition feature if he or she would like their voice data to be attached to their Google account. If a user decides to opt into this service, it allows Google to train the software to the user's voice.[11]
Google introduced theGoogle AssistantwithAndroid 7.0 "Nougat". It is much more advanced than the older version.
Amazon.comhas theEchothat uses Amazon's custom version of Android to provide a voice interface.
Windows PhoneisMicrosoft's mobile device's operating system. On Windows Phone 7.5, the speech app is user independent and can be used to: call someone from your contact list, call any phone number, redial the last number, send a text message, call your voice mail, open an application, read appointments, query phone status, and search the web.[12][13]In addition, speech can also be used during a phone call, and the following actions are possible during a phone call: press a number, turn the speaker phone on, or call someone, which puts the current call on hold.[13]
Windows 10 introducesCortana, a voice control system that replaces the formerly used voice control on Windows phones.
Apple added Voice Control to itsfamily of iOS devicesas a new feature ofiPhone OS 3. TheiPhone 4S,iPad 3,iPad Mini 1G,iPad Air,iPad Pro 1G,iPod Touch 5Gand later, all come with a more advanced voice assistant calledSiri. Voice Control can still be enabled through the Settings menu of newer devices. Siri is a user independent built-in speech recognition feature that allows a user to issue voice commands. With the assistance of Siri a user may issue commands like, send a text message, check the weather, set a reminder, find information, schedule meetings, send an email, find a contact, set an alarm, get directions, track your stocks, set a timer, and ask for examples of sample voice command queries.[14]In addition, Siri works withBluetoothand wired headphones.[15]
Apple introduced Personal Voice as an accessibility feature iniOS 17, launched on September 18, 2023.[16]This feature allows users to create a personalized, machine learning-generated (AI) version of their voice for use intext-to-speechapplications. Designed particularly for individuals withspeech impairments, Personal Voice helps preserve the unique sound of a user's voice. It enhancesSiriand other accessibility tools by providing a more personalized and inclusiveuser experience. Personal Voice reflects Apple's ongoing commitment toaccessibilityandinnovation.[17][18]
In 2014, Amazon introduced the Alexa smart home device. Its main purpose was to act as a smart speaker that the consumer could control with their voice. It eventually grew into a device that could control home appliances by voice, and now many appliances, including light bulbs and thermostats, are controllable with Alexa. By allowing voice control, Alexa can connect to smart home technology, letting users lock the house, control the temperature, and activate various devices. This form of AI allows someone to simply ask a question; in response, Alexa searches for, finds, and recites the answer.[19]
As car technology improves, more features will be added to cars and these features could potentially distract a driver. Voice commands for cars, according toCNET, should allow a driver to issue commands and not be distracted. CNET stated that Nuance was suggesting that in the future they would create a software that resembled Siri, but for cars.[20]Most speech recognition software on the market in 2011 had only about 50 to 60 voice commands, but Ford Sync had 10,000.[20]However, CNET suggested that even 10,000 voice commands was not sufficient given the complexity and the variety of tasks a user may want to do while driving.[20]Voice command for cars is different from voice command for mobile phones and for computers because a driver may use the feature to look for nearby restaurants, look for gas, driving directions, road conditions, and the location of the nearest hotel.[20]Currently, technology allows a driver to issue voice commands on both a portableGPSlike aGarminand a car manufacturer navigation system.[21]
Many motor manufacturers also provide their own voice command systems.
While most voice user interfaces are designed to support interaction through spoken human language, there have also been recent explorations in designing interfaces that take non-verbal human sounds as input.[22][23] In these systems, the user controls the interface by emitting non-speech sounds such as humming, whistling, or blowing into a microphone.[24]
One such example of a non-verbal voice user interface is Blendie,[25][26]an interactive art installation created by Kelly Dobson. The piece comprised a classic 1950s-era blender which was retrofitted to respond to microphone input. To control the blender, the user must mimic the whirring mechanical sounds that a blender typically makes: the blender will spin slowly in response to a user's low-pitched growl, and increase in speed as the user makes higher-pitched vocal sounds.
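The control loop behind such an installation can be approximated quite simply: estimate the pitch of each short audio frame and map it onto a motor speed. The Python sketch below is a rough illustration of that idea using a basic autocorrelation pitch estimator; the frame length, vocal range, and speed scale are assumptions, not details of Dobson's actual piece.

```python
# Rough sketch: estimate pitch with autocorrelation, then map low pitches to
# slow "motor speeds" and high pitches to fast ones. Parameters are illustrative.
import numpy as np

def estimate_pitch(frame: np.ndarray, sample_rate: int = 16_000) -> float:
    """Return an estimated fundamental frequency (Hz) for one audio frame."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search for the strongest peak within a plausible vocal range (60-800 Hz).
    min_lag = sample_rate // 800
    max_lag = sample_rate // 60
    lag = min_lag + int(np.argmax(corr[min_lag:max_lag]))
    return sample_rate / lag

def pitch_to_speed(pitch_hz: float, low: float = 80.0, high: float = 600.0) -> float:
    """Map pitch linearly onto a 0.0-1.0 motor-speed setting."""
    return float(np.clip((pitch_hz - low) / (high - low), 0.0, 1.0))

# Example with a synthetic 220 Hz "hum":
t = np.arange(0, 0.05, 1 / 16_000)
hum = np.sin(2 * np.pi * 220 * t)
print(pitch_to_speed(estimate_pitch(hum)))  # a low hum maps to a slow spin
```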
Another example is VoiceDraw,[27]a research system that enables digital drawing for individuals with limited motor abilities. VoiceDraw allows users to "paint" strokes on a digital canvas by modulating vowel sounds, which are mapped to brush directions. Modulating other paralinguistic features (e.g. the loudness of their voice) allows the user to control different features of the drawing, such as the thickness of the brush stroke.
Other approaches include adopting non-verbal sounds to augment touch-based interfaces (e.g. on a mobile phone) to support new types of gestures that wouldn't be possible with finger input alone.[24]
Voice interfaces pose a substantial number of challenges for usability. In contrast to graphical user interfaces (GUIs), best practices for voice interface design are still emergent.[28]
With purely audio-based interaction, voice user interfaces tend to suffer from lowdiscoverability:[28]it is difficult for users to understand the scope of a system's capabilities. In order for the system to convey what is possible without a visual display, it would need to enumerate the available options, which can become tedious or infeasible. Low discoverability often results in users reporting confusion over what they are "allowed" to say, or a mismatch in expectations about the breadth of a system's understanding.[29][30]
Whilespeech recognitiontechnology has improved considerably in recent years, voice user interfaces still suffer from parsing or transcription errors in which a user's speech is not interpreted correctly.[31]These errors tend to be especially prevalent when the speech content uses technical vocabulary (e.g. medical terminology) or unconventional spellings such as musical artist or song names.[32]
Effective system design to maximize conversational understanding remains an open area of research. Voice user interfaces that interpret and manage conversational state are challenging to design due to the inherent difficulty of integrating complex natural language processing tasks like coreference resolution, named-entity recognition, information retrieval, and dialog management.[33] Most voice assistants today are capable of executing single commands very well but are limited in their ability to manage dialogue beyond a narrow task or a couple of turns in a conversation.[34]
Privacy concerns are raised by the fact that voice commands are available to the providers of voice-user interfaces in unencrypted form, and can thus be shared with third parties and be processed in an unauthorized or unexpected manner.[35][36] In addition to the linguistic content of recorded speech, a user's manner of expression and voice characteristics can implicitly contain information about his or her biometric identity, personality traits, body shape, physical and mental health condition, sex, gender, moods and emotions, socioeconomic status and geographical origin.[37]
|
https://en.wikipedia.org/wiki/Voice_user_interface
|
Noisy textis text with differences between the surface form of a coded representation of thetextand the intended, correct, or original text.[1]Thenoisemay be due totypographic errorsorcolloquialismsalways present innatural languageand usually lowers thedata qualityin a way that makes the text less accessible to automated processing by computers, includingnatural language processing. The noise may also have been introduced through an extraction process (e.g.,transcriptionorOCR) from media other than originalelectronic texts.[2]
Language usage in computer-mediated discourses, like chats, emails and SMS texts, significantly differs from the standard form of the language. An urge toward shorter messages that facilitate faster typing, together with the need for semantic clarity, shapes the structure of the text used in such discourses.
Various business analysts estimate thatunstructured dataconstitutes around 80% of the wholeenterprise data. A great proportion of this data comprises chat transcripts, emails and other informal and semi-formal internal and external communications. Usually such text is meant for human consumption, but—given the amount of data—manual processing and evaluation of those resources is not practically feasible anymore. This raises the need for robusttext miningmethods.[3]
The use ofspell checkersandgrammar checkerscan reduce the amount of noise in typed text. Manyword processorsinclude this in the editing tool. Online,Google Searchincludes a search term suggestion engine to guide users when they make mistakes with their queries.
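At its simplest, this kind of noise reduction can be approximated by dictionary lookup with fuzzy matching. The Python sketch below replaces out-of-vocabulary tokens with their closest in-vocabulary neighbour; the vocabulary and similarity cutoff are illustrative, and real spell checkers also weigh word frequency and context.

```python
# Minimal sketch of dictionary-based noise reduction: each out-of-vocabulary
# token is replaced with the closest in-vocabulary word by string similarity.
from difflib import get_close_matches

VOCAB = {"please", "send", "the", "report", "before", "tomorrow", "thanks"}

def clean(text: str) -> str:
    out = []
    for token in text.lower().split():
        if token in VOCAB:
            out.append(token)
        else:
            match = get_close_matches(token, VOCAB, n=1, cutoff=0.6)
            out.append(match[0] if match else token)  # keep token if no close match
    return " ".join(out)

print(clean("pls send teh reprot b4 tomorow"))
# -> "please send the report b4 tomorrow"  ("b4" has no close match, so it is kept)
```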
|
https://en.wikipedia.org/wiki/Noisy_text
|
Semantic searchdenotes search with meaning, as distinguished from lexical search where the search engine looks for literal matches of the query words or variants of them, without understanding the overall meaning of the query.[1]Semantic search seeks to improvesearchaccuracy by understandingthe searcher's intentand thecontextualmeaning of terms as they appear in the searchable dataspace, whether on theWebor within a closed system, to generate more relevant results.
Some authors regard semantic search as a set of techniques for retrieving knowledge from richly structured data sources likeontologiesandXMLas found on theSemantic Web.[2]Such technologies enable the formal articulation ofdomain knowledgeat a high level of expressiveness and could enable the user to specify their intent in more detail at query time.[3]The articulation enhances content relevance and depth by including specific places, people, or concepts relevant to the query.[4]
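In practice, one common way to approximate semantic search is to embed queries and documents as vectors and rank them by cosine similarity, so that a query can match a document it shares no literal words with. The sketch below assumes the third-party sentence-transformers package and its pretrained all-MiniLM-L6-v2 model; it illustrates the general technique rather than any particular search engine.

```python
# Embedding-based semantic search sketch, assuming the sentence-transformers
# package and its pretrained "all-MiniLM-L6-v2" model are installed.
from sentence_transformers import SentenceTransformer, util

documents = [
    "How to change the oil in your automobile",
    "Recipes for a quick weeknight dinner",
    "Guide to buying a second-hand bicycle",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, convert_to_tensor=True)

query = "car maintenance tips"                        # shares no words with document 1...
query_vector = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vector, doc_vectors)[0]   # cosine similarity to each document
best = int(scores.argmax())
print(documents[best])                                # ...yet is ranked closest to it by meaning
```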
|
https://en.wikipedia.org/wiki/Semantic_search
|
Semantic queriesallow for queries and analytics of associative andcontextualnature. Semantic queries enable the retrieval of both explicitly and implicitly derived information based onsyntactic,semanticandstructural informationcontained in data. They are designed to deliver precise results (possibly the distinctive selection of one single piece of information) or to answer morefuzzyand wide open questions throughpattern matchinganddigital reasoning.
Semantic queries work onnamed graphs,linked dataortriples. This enables the query to process the actualrelationshipsbetween information andinferthe answers from thenetwork of data. This is in contrast tosemantic search, which usessemantics(meaning of language constructs) inunstructured textto produce a better search result. (Seenatural language processing.)
From a technical point of view, semantic queries are precise relational-type operations, much like a database query. They work on structured data and therefore can make use of comprehensive features such as operators (e.g. >, < and =), namespaces, pattern matching, subclassing, transitive relations, semantic rules and contextual full-text search. The semantic web technology stack of the W3C offers SPARQL[1][2] to formulate semantic queries in a syntax similar to SQL. Semantic queries are used in triplestores, graph databases, semantic wikis, natural language and artificial intelligence systems.
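As a small illustration of what such a query looks like in practice, the Python sketch below builds a tiny graph with the third-party rdflib package and runs a SPARQL SELECT over it; the namespace and data are invented for the example.

```python
# Sketch of a semantic query over linked data, assuming the rdflib package.
# Two triples link a customer to a product and the product to a category;
# the SPARQL query follows those relationships directly, with no join table.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, EX.bought, EX.espresso_machine))
g.add((EX.espresso_machine, EX.inCategory, EX.kitchen_appliances))

query = """
PREFIX ex: <http://example.org/>
SELECT ?customer ?category WHERE {
    ?customer ex:bought ?product .
    ?product  ex:inCategory ?category .
}
"""
for customer, category in g.query(query):
    print(customer, category)
    # -> http://example.org/alice http://example.org/kitchen_appliances
```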
Relational databasesrepresent all relationships between data in animplicitmanner only.[3][4]For example, the relationships between customers and products (stored in two content-tables and connected with an additional link-table) only come into existence in a query statement (SQLin the case of relational databases) written by a developer. Writing the query demands exact knowledge of thedatabase schema.[5][6]
Linked data represent all relationships between data in an explicit manner. In the above example, no query code needs to be written; the correct product for each customer can be fetched automatically. While this simple example is trivial, the real power of linked data comes into play when a network of information is created (customers with their geo-spatial information like city, state and country; products with their categories within sub- and super-categories). Now the system can automatically answer more complex queries and analytics that look for the connection of a particular location with a product category, and the development effort for such a query is eliminated. A semantic query is executed by walking the network of information and finding matches (also called data graph traversal).
Another important aspect of semantic queries is that the type of the relationship can be used to incorporate intelligence into the system. The relationship between a customer and a product has a fundamentally different nature than the relationship between a neighbourhood and its city. The latter enables the semantic query engine toinferthat a customerliving in Manhattan is also living in New York Citywhereas other relationships might have more complicated patterns and "contextual analytics". This process is called inference or reasoning and is the ability of the software to derive new information based on given facts.
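The Manhattan example can be sketched with a transitive "located in" relation: once the relation is treated as transitive, membership in all enclosing places follows automatically. The Python fragment below is a toy illustration with invented data, not a description of any particular reasoner.

```python
# Toy inference sketch: follow a transitive "located in" chain so that a
# customer living in Manhattan is also inferred to live in New York City,
# New York State, and the United States. Data are invented for illustration.
LOCATED_IN = {
    "Manhattan": "New York City",
    "New York City": "New York State",
    "New York State": "United States",
}
LIVES_IN = {"Alice": "Manhattan"}

def infer_residences(person: str) -> list[str]:
    """Walk the located-in hierarchy upward from the person's home place."""
    places = []
    place = LIVES_IN.get(person)
    while place is not None:
        places.append(place)
        place = LOCATED_IN.get(place)  # move one level up the hierarchy
    return places

print(infer_residences("Alice"))
# -> ['Manhattan', 'New York City', 'New York State', 'United States']
```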
|
https://en.wikipedia.org/wiki/Semantic_query
|
IBM Watsonis a computer system capable ofanswering questionsposed innatural language.[1]It was developed as a part ofIBM's DeepQA project by a research team, led byprincipal investigatorDavid Ferrucci.[2]Watson was named after IBM's founder and first CEO, industrialistThomas J. Watson.[3][4]
The computer system was initially developed to answer questions on the popular quiz showJeopardy![5]and in 2011, the Watson computer system competed onJeopardy!against championsBrad RutterandKen Jennings,[3][6]winning the first-place prize of US$1 million.[7]
In February 2013, IBM announced that Watson's first commercial application would be forutilization managementdecisions in lung cancer treatment, atMemorial Sloan Kettering Cancer Center, New York City, in conjunction with WellPoint (nowElevance Health).[8]
Watson was created as aquestion answering(QA) computing system that IBM built to apply advancednatural language processing,information retrieval,knowledge representation,automated reasoning, andmachine learningtechnologies to the field ofopen domain question answering. The system is named DeepQA (though it did not involve the use ofdeep neural networks).[1]
IBM stated that Watson uses "more than 100 different techniques to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses."[10]
In recent years, Watson's capabilities have been extended and the way in which Watson works has been changed to take advantage of new deployment models (Watson onIBM Cloud), evolved machine learning capabilities, and optimized hardware available to developers and researchers.[citation needed]
Watson uses IBM's DeepQA software and the ApacheUIMA(Unstructured Information Management Architecture) framework implementation. The system was written in various languages, includingJava,C++, andProlog, and runs on theSUSE Linux Enterprise Server11 operating system using the ApacheHadoopframework to provide distributed computing.[11][12][13]
Other than the DeepQA system, Watson contained several strategy modules. For example, one module calculated the amount to bet forFinal Jeopardy, according to the confidence score on getting the answer right, and the current scores of all contestants. One module used theBayes ruleto calculate the probability that each unrevealed question might be theDaily Double, using historical data from the J! Archive as theprior. If a Daily Double is found, the amount to wager is computed by a 2-layered neural network of the same kind as those used byTD-Gammon, a neural network that played backgammon, developed byGerald Tesauroin the 1990s.[14]The parameters in the strategy modules were tuned by benchmarking against a statistical model of human contestants fitted on data from the J! Archive, and selecting the best one.[15][16][17]
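The Daily Double module can be illustrated with a toy Bayesian update: historical placement frequencies serve as the prior, and the probabilities are renormalised over the clues still on the board. The Python sketch below uses invented numbers and is only a sketch of the idea, not IBM's actual module.

```python
# Toy Bayesian update for "which unrevealed clue is the Daily Double?":
# start from an (invented) per-row prior and renormalise over the clues that
# remain on the board. A sketch of the idea, not Watson's real strategy code.
PRIOR_BY_ROW = {1: 0.01, 2: 0.06, 3: 0.12, 4: 0.13, 5: 0.10}  # per clue, by row

def daily_double_posterior(unrevealed: list[tuple[int, int]]) -> dict:
    """P(clue is the Daily Double | it is among the unrevealed clues)."""
    weights = {pos: PRIOR_BY_ROW[pos[0]] for pos in unrevealed}
    total = sum(weights.values())
    return {pos: w / total for pos, w in weights.items()}

# Board positions are (row, column); suppose only these clues remain:
remaining = [(1, 0), (3, 2), (4, 2), (5, 4)]
for pos, p in daily_double_posterior(remaining).items():
    print(pos, round(p, 3))
```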
The system is workload-optimized, integratingmassively parallelPOWER7processors and built on IBM'sDeepQAtechnology,[18]which it uses to generate hypotheses, gather massive evidence, and analyze data.[1]Watson employs a cluster of ninety IBM Power 750 servers, each of which uses a 3.5 GHzPOWER7eight-core processor, with four threads per core. In total, the system uses 2,880 POWER7 processor threads and 16terabytesof RAM.[18]
According toJohn Rennie, Watson can process 500 gigabytes (the equivalent of a million books) per second.[19]IBM master inventor and senior consultant Tony Pearson estimated Watson's hardware cost at about three million dollars.[20]ItsLinpackperformance stands at 80TeraFLOPs, which is about half as fast as the cut-off line for theTop 500 Supercomputerslist.[21]According to Rennie, all content was stored in Watson's RAM for the Jeopardy game because data stored onhard driveswould be too slow to compete with human Jeopardy champions.[19]
The sources of information for Watson include encyclopedias,dictionaries,thesauri,newswirearticles andliterary works. Watson also used databases,taxonomiesandontologiesincludingDBPedia,WordNetandYago.[22]The IBM team provided Watson with millions of documents, including dictionaries, encyclopedias and other reference material, that it could use to build its knowledge.[23]
Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases.[23]Watson's main innovation was not in the creation of a newalgorithmfor this operation, but rather its ability to quickly execute hundreds of provenlanguage analysisalgorithms simultaneously.[23][24]The more algorithms that find the same answer independently, the more likely Watson is to be correct. Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.[23]
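The intuition that independent agreement raises confidence can be illustrated with a simple combination rule such as noisy-OR, in which each algorithm's vote reduces the probability that a candidate answer is wrong. The Python sketch below uses invented confidences and is not Watson's actual evidence scorer.

```python
# Toy evidence combination: candidates supported by more independent
# "algorithms" accumulate higher combined confidence (noisy-OR rule).
# Numbers and the rule itself are illustrative, not Watson's real scorer.
from collections import defaultdict

# (candidate answer, confidence) pairs from independent analysis pipelines
votes = [
    ("Toronto", 0.30),
    ("Chicago", 0.40),
    ("Chicago", 0.35),
    ("Chicago", 0.25),
]

p_all_wrong = defaultdict(lambda: 1.0)  # probability that every vote for a candidate is wrong
for answer, confidence in votes:
    p_all_wrong[answer] *= (1.0 - confidence)

combined = {answer: 1.0 - p for answer, p in p_all_wrong.items()}
for answer, score in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(answer, round(score, 3))      # Chicago outranks Toronto
```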
Watson's basic working principle is to parse keywords in a clue while searching for related terms as responses. This gives Watson some advantages and disadvantages compared with human Jeopardy! players.[25] Watson has deficiencies in understanding the context of the clues; as a result, human players usually generate responses faster than Watson, especially to short clues.[23] Watson can read, analyze, and learn from natural language, which gives it the ability to make human-like decisions.[26] Watson's programming prevents it from using the popular tactic of buzzing before it is sure of its response.[23] However, Watson has consistently better reaction time on the buzzer once it has generated a response, and it is immune to human players' psychological tactics, such as jumping between categories on every clue.[23][27]
In a sequence of 20 mock games ofJeopardy!, human participants were able to use the six to seven seconds that Watson needed to hear the clue and decide whether to signal for responding.[23]During that time, Watson also has to evaluate the response and determine whether it is sufficiently confident in the result to signal.[23]Part of the system used to win theJeopardy!contest was the electronic circuitry that receives the "ready" signal and then examines whether Watson's confidence level was great enough to activate the buzzer. Given the speed of this circuitry compared to the speed of human reaction times, Watson's reaction time was faster than the human contestants except when the human anticipated (instead of reacted to) the ready signal.[28]After signaling, Watson speaks with an electronic voice and gives the responses inJeopardy!'squestion format.[23]Watson's voice was synthesized from recordings that actor Jeff Woodman made for an IBMtext-to-speechprogram in 2004.[29]
TheJeopardy!staff used different means to notify Watson and the human players when to buzz,[28]which was critical in many rounds.[27]The humans were notified by a light, which took them tenths of a second toperceive.[30][31]Watson was notified by an electronic signal and could activate the buzzer within about eight milliseconds.[32]The humans tried to compensate for the perception delay by anticipating the light,[33]but the variation in the anticipation time was generally too great to fall within Watson's response time.[27]Watson did not attempt to anticipate the notification signal.[31][33]
SinceDeep Blue's victory overGarry Kasparovin chess in 1997, IBM had been on the hunt for a new challenge. In 2004, IBM Research manager Charles Lickel, over dinner with coworkers, noticed that the restaurant they were in had fallen silent. He soon discovered the cause of this evening's hiatus:Ken Jennings, who was then in the middle of his successful 74-game run onJeopardy!. Nearly the entire restaurant had piled toward the televisions, mid-meal, to watchJeopardy!. Intrigued by the quiz show as a possible challenge for IBM, Lickel passed the idea on, and in 2005, IBM Research executivePaul Hornsupported Lickel, pushing for someone in his department to take up the challenge of playingJeopardy!with an IBM system. Though he initially had trouble finding any research staff willing to take on what looked to be a much more complex challenge than the wordless game of chess, eventually David Ferrucci took him up on the offer.[34]In competitions managed by the United States government, Watson's predecessor, a system named Piquant, was usually able to respond correctly to only about 35% of clues and often required several minutes to respond.[35][36][37]To compete successfully onJeopardy!, Watson would need to respond in no more than a few seconds, and at that time, the problems posed by the game show were deemed to be impossible to solve.[23]
In initial tests run during 2006 by David Ferrucci, the senior manager of IBM's Semantic Analysis and Integration department, Watson was given 500 clues from pastJeopardy!programs. While the best real-life competitors buzzed in half the time and responded correctly to as many as 95% of clues, Watson's first pass could get only about 15% correct. During 2007, the IBM team was given three to five years and a staff of 15 people to solve the problems.[23]John E. Kelly IIIsucceeded Paul Horn as head ofIBM Researchin 2007.[38]InformationWeekdescribed Kelly as "the father of Watson" and credited him for encouraging the system to compete against humans onJeopardy!.[39]By 2008, the developers had advanced Watson such that it could compete withJeopardy!champions.[23]By February 2010, Watson could beat humanJeopardy!contestants on a regular basis.[40]
During the game, Watson had access to 200 million pages of structured and unstructured content consuming fourterabytesofdisk storage[11]including the full text of the 2011 edition of Wikipedia,[41]but was not connected to the Internet.[42][23]For each clue, Watson's three most probable responses were displayed on the television screen. Watson consistently outperformed its human opponents on the game's signaling device, but had trouble in a few categories, notably those having short clues containing only a few words.[citation needed]
Although the system is primarily an IBM effort, Watson's development involved faculty and graduate students fromRensselaer Polytechnic Institute,Carnegie Mellon University,University of Massachusetts Amherst, theUniversity of Southern California'sInformation Sciences Institute, theUniversity of Texas at Austin, theMassachusetts Institute of Technology, and theUniversity of Trento,[9]as well as students fromNew York Medical College.[43]Among the team of IBM programmers who worked on Watson was 2001Who Wants to Be a Millionaire?top prize winner Ed Toutant, who himself had appeared onJeopardy!in 1989 (winning one game).[44]
In 2008, IBM representatives communicated withJeopardy!executive producerHarry Friedmanabout the possibility of having Watson compete againstKen JenningsandBrad Rutter, two of the most successful contestants on the show, and the program's producers agreed.[23][45]Watson's differences with human players had generated conflicts between IBM andJeopardy!staff during the planning of the competition.[25]IBM repeatedly expressed concerns that the show's writers would exploit Watson's cognitive deficiencies when writing the clues, thereby turning the game into aTuring test. To alleviate that claim, a third party randomly picked the clues from previously written shows that were never broadcast.[25]Jeopardy!staff also showed concerns over Watson's reaction time on the buzzer. Originally Watson signaled electronically, but show staff requested that it press a button physically, as the human contestants would.[46]Even with a robotic "finger" pressing the buzzer, Watson remained faster than its human competitors. Ken Jennings noted, "If you're trying to win on the show, the buzzer is all", and that Watson "can knock out a microsecond-precise buzz every single time with little or no variation. Human reflexes can't compete with computer circuits in this regard."[27][33][47]Stephen Baker, a journalist who recorded Watson's development in his bookFinal Jeopardy, reported that the conflict between IBM andJeopardy!became so serious in May 2010 that the competition was almost cancelled.[25]As part of the preparation, IBM constructed a mock set in a conference room at one of its technology sites to model the one used onJeopardy!. Human players, including formerJeopardy!contestants, also participated in mock games against Watson with Todd Alan Crain ofThe Onionplaying host.[23]About 100 test matches were conducted with Watson winning 65% of the games.[48]
To provide a physical presence in the televised games, Watson was represented by an "avatar" of a globe, inspired by the IBM "smarter planet" symbol. Jennings described the computer's avatar as a "glowing blue ball crisscrossed by 'threads' of thought—42 threads, to be precise",[49]and stated that the number of thought threads in the avatar was anin-jokereferencing thesignificanceof thenumber 42inDouglas Adams'Hitchhiker's Guide to the Galaxy.[49]Joshua Davis, the artist who designed the avatar for the project, explained to Stephen Baker that there are 36 triggerable states that Watson was able to use throughout the game to show its confidence in responding to a clue correctly; he had hoped to be able to find forty-two, to add another level to theHitchhiker's Guidereference, but he was unable to pinpoint enough game states.[50]
A practice match was recorded on January 13, 2011, and the official matches were recorded on January 14, 2011. All participants maintained secrecy about the outcome until the match was broadcast in February.[51]
In a practice match before the press on January 13, 2011, Watson won a 15-question round against Ken Jennings and Brad Rutter with a score of $4,400 to Jennings's $3,400 and Rutter's $1,200, though Jennings and Watson were tied before the final $1,000 question. None of the three players responded incorrectly to a clue.[52]
The first round was broadcast February 14, 2011, and the second round, on February 15, 2011. The right to choose the first category had been determined by a draw won by Rutter.[53]Watson, represented by a computer monitor display and artificial voice, responded correctly to the second clue and then selected the fourth clue of the first category, a deliberate strategy to find the Daily Double as quickly as possible.[54]Watson's guess at the Daily Double location was correct. At the end of the first round, Watson was tied with Rutter at $5,000; Jennings had $2,000.[53]
Watson's performance was characterized by some quirks. In one instance, Watson repeated a reworded version of an incorrect response offered by Jennings. (Jennings said "What are the '20s?" in reference to the 1920s. Then Watson said "What is 1920s?") Because Watson could not recognize other contestants' responses, it did not know that Jennings had already given the same response. In another instance, Watson was initially given credit for a response of "What is a leg?" after Jennings incorrectly responded "What is: he only had one hand?" to a clue aboutGeorge Eyser(the correct response was, "What is: he's missing a leg?"). Because Watson, unlike a human, could not have been responding to Jennings's mistake, it was decided that this response was incorrect. The broadcast version of the episode was edited to omit Trebek's original acceptance of Watson's response.[55]Watson also demonstrated complex wagering strategies on the Daily Doubles, with one bet at $6,435 and another at $1,246.[56]Gerald Tesauro, one of the IBM researchers who worked on Watson, explained that Watson's wagers were based on its confidence level for the category and a complexregression modelcalled the Game State Evaluator.[17]
Watson took a commanding lead in Double Jeopardy!, correctly responding to both Daily Doubles. Watson responded to the second Daily Double correctly with a 32% confidence score.[56]
However, during the Final Jeopardy! round, Watson was the only contestant to miss the clue in the category U.S. Cities ("Itslargest airportwas named for aWorld War II hero; itssecond largest, for aWorld War II battle"). Rutter and Jennings gave the correct response of Chicago, but Watson's response was "What isToronto?????" with five question marks appended indicating a lack of confidence.[56][57][58]Ferrucci offered reasons why Watson would appear to have guessed a Canadian city: categories only weakly suggest the type of response desired, the phrase "U.S. city" did not appear in the question, there arecities named Toronto in the U.S., and Toronto in Ontario has anAmerican Leaguebaseball team.[59]Chris Welty, who also worked on Watson, suggested that it may not have been able to correctly parse the second part of the clue, "its second largest, for a World War II battle" (which was not a standalone clause despite it following asemicolon, and required context to understand that it was referring to a second-largestairport).[60]Eric Nyberg, a professor at Carnegie Mellon University and a member of the development team, stated that the error occurred because Watson does not possess the comparative knowledge to discard that potential response as not viable.[58]Although not displayed to the audience as with non-Final Jeopardy! questions, Watson's second choice was Chicago. Both Toronto and Chicago were well below Watson's confidence threshold, at 14% and 11% respectively. Watson wagered only $947 on the question.[61]
The game ended with Jennings with $4,800, Rutter with $10,400, and Watson with $35,734.[56]
During the introduction, Trebek (a Canadian native) joked that he had learned Toronto was a U.S. city, and Watson's error in the first match prompted an IBM engineer to wear aToronto Blue Jaysjacket to the recording of the second match.[62]
In the first round, Jennings was finally able to choose a Daily Double clue,[63]while Watson responded to one Daily Double clue incorrectly for the first time in the Double Jeopardy! Round.[64]After the first round, Watson placed second for the first time in the competition after Rutter and Jennings were briefly successful in increasing their dollar values before Watson could respond.[64][65]Nonetheless, the final result ended with a victory for Watson with a score of $77,147, besting Jennings who scored $24,000 and Rutter who scored $21,600.[66]
The prizes for the competition were $1 million for first place (Watson), $300,000 for second place (Jennings), and $200,000 for third place (Rutter). As promised, IBM donated 100% of Watson's winnings to charity, with 50% of those winnings going toWorld Visionand 50% going toWorld Community Grid.[67]Similarly, Jennings and Rutter donated 50% of their winnings to their respective charities.[68]
In acknowledgement of IBM and Watson's achievements, Jennings made an additional remark in his Final Jeopardy! response: "I for one welcome our new computer overlords", paraphrasinga jokefromThe Simpsons.[69][70]Jennings later wrote an article forSlate, in which he stated:
IBM has bragged to the media that Watson's question-answering skills are good for more than annoying Alex Trebek. The company sees a future in which fields likemedical diagnosis,business analytics, andtech supportare automated by question-answering software like Watson. Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the firstknowledge-industry workersput out of work by the new generation of 'thinking' machines. 'Quiz show contestant' may be the first job made redundant by Watson, but I'm sure it won't be the last.[49]
PhilosopherJohn Searleargues that Watson—despite impressive capabilities—cannot actually think.[71]Drawing on hisChinese roomthought experiment, Searle claims that Watson, like other computational machines, is capable only of manipulating symbols, but has no ability to understand the meaning of those symbols; however, Searle's experiment has itsdetractors.[72]
On February 28, 2011, Watson played an untelevised exhibition match ofJeopardy!against members of theUnited States House of Representatives. In the first round,Rush D. Holt, Jr.(D-NJ, a formerJeopardy!contestant), who was challenging the computer withBill Cassidy(R-LA, later Senator from Louisiana), led with Watson in second place. However, combining the scores between all matches, the final score was $40,300 for Watson and $30,000 for the congressional players combined.[73]
IBM's Christopher Padilla said of the match, "The technology behind Watson represents a major advancement in computing. In the data-intensive environment of government, this type of technology can help organizations make better decisions and improve how government helps its citizens."[73]
After the national press attention gained by the 2011 Jeopardy! appearance, IBM sought out partnerships in areas ranging from education and weather forecasting to cancer treatment and retail chatbots in order to convince businesses of Watson's alleged capabilities. These efforts ultimately failed to turn Watson into a profit-making product for the company.[74]
In 2011, IBM's general counsel argued in The National Law Review that the legal profession would become more efficient and better with Watson.[75] After the national attention Jeopardy! afforded the company, IBM began an ultimately unsuccessful and expensive project in which the Memorial Sloan Kettering Cancer Center tried to use Watson to help doctors diagnose and treat cancer patients. The division ultimately cost $4 billion to develop but was sold in 2022 for a quarter of that, about $1 billion.[76] By 2023, the Watson effort had cost IBM 10% of its stock value, costing the company four times more than it brought in and resulting in mass layoffs.[74]
From 2012 through the late 2010s, Watson's technology was used to create applications—mostly discontinued[77]—to help people make decisions in a variety of areas, among them:
In 2021, Steve Lohr, a technology reporter at The New York Times, explained:
The company’s missteps with Watson began with its early emphasis on big and difficult initiatives intended to generate both acclaim and sizable revenue for the company, according to many of the more than a dozen current and former IBM managers and scientists interviewed for this article. Several of those people asked not to be named because they had not been authorized to speak or still had business ties to IBM.
Writing inThe Atlanticin 2023, Mac Schwerin argued that IBM's leadership fundamentally did not understand the technology, leading to the hardship and strain caused by the project, saying:
But the suits in charge went after the bigger and more technically challenging game of feeding the machine entirely different types of material. They viewed Watson as a generational meal ticket.
In the end, IBM's initial vision for Watson as a transformative technology capable of revolutionizing industries did not materialize as anticipated.[91]Watson's capabilities were primarily suited to specific tasks, like natural language processing for trivia games, rather than generalized commercial problem-solving.[92]The mismatch between Watson's capabilities and IBM's marketing contributed significantly to its commercial struggles and eventual decline. The overstated claims about Watson's abilities also caused public sentiment to turn against the idea of Watson and artificial intelligence.[77]
Between 2019 and 2023, IBM shifted focus to a separate initiative, IBM Watsonx, distinctly different from Watson and aiming for narrower, industry-targeted technology within IBM's cloud computing and platform-based strategies.[77][74]
IBM's Watson was used to analyze medical datasets to provide physicians with guidance on diagnoses and cancer treatment decisions.[93][94]When a physician submitted a query to Watson, the system started a multi-step process: parsing the input to identify key information, examining patient data to uncover relevant medical and hereditary history, and finally comparing various data sources to form and test hypotheses.[95][94]
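The flow described above can be illustrated with a minimal sketch. The helper functions, data structures and scoring rule below are hypothetical placeholders chosen for illustration only; they are not IBM's implementation.

```python
# Minimal sketch of the multi-step query flow described above; not IBM's code.
# The helper names, data structures and scoring rule are hypothetical placeholders.

def parse_query(query: str) -> set:
    """Step 1: extract key terms from the physician's free-text query."""
    stopwords = {"for", "a", "the", "with", "of", "in"}
    return {w.lower().strip(",.?") for w in query.split()} - stopwords

def relevant_history(patient: dict, terms: set) -> list:
    """Step 2: pull patient-record entries that mention any key term."""
    return [entry for entry in patient["history"]
            if any(t in entry.lower() for t in terms)]

def score_hypotheses(terms: set, sources: dict) -> list:
    """Step 3: compare evidence sources and rank candidate treatments by term overlap."""
    ranked = [(treatment, sum(t in evidence.lower() for t in terms) / max(len(terms), 1))
              for treatment, evidence in sources.items()]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    patient = {"history": ["Stage II lung adenocarcinoma", "No prior chemotherapy"]}
    sources = {
        "chemotherapy regimen A": "guideline evidence for stage ii lung adenocarcinoma",
        "targeted therapy B": "evidence limited to egfr-mutant tumours",
    }
    terms = parse_query("Treatment options for stage II lung adenocarcinoma?")
    print(relevant_history(patient, terms))
    print(score_hypotheses(terms, sources))
```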
IBM claimed that Watson could draw from a wide range of sources, including treatment guidelines, electronic medical records, and research materials.[94]However, company executives would later blame a lack of data for the project's ultimate failure.[76]
Notably, Watson was not involved in the actual diagnosis process, but rather assisted doctors in identifying suitable treatment options for patients who had already been diagnosed.[96]A study of 1,000 challenging patient cases found that Watson's recommendations matched those of human doctors in 99% of cases.[97]
IBM established partnerships with theCleveland Clinic,[98]theMD Anderson Cancer Center, andMemorial Sloan-Kettering Cancer Centerto further its mission in healthcare. In 2011, IBM entered into a research partnership withNuance Communicationsand physicians at theUniversity of MarylandandHarvardto develop a commercial product using Watson'sclinical decision supportcapabilities. IBM partnered withWellPoint(nowAnthem) in 2011 to utilize Watson in suggesting treatment options to physicians,[99]and in 2013, Watson was deployed in its first commercial application for utilization management decisions in lung cancer treatment at Memorial Sloan-Kettering Cancer Center.[8]The Cleveland Clinic collaboration aimed to enhance Watson's health expertise and support medical professionals in treating patients more effectively. However, the MD Anderson Cancer Center pilot program, initiated in 2013, ultimately failed to meet its goals and was discontinued after $65 million in investment.[100][101][98]
In 2016, IBM launched "IBM Watson for Oncology," a product designed to provide personalized, evidence-based cancer care options to physicians and patients.[91]This initiative marked a significant milestone in the adoption of Watson's technology in the healthcare industry. Additionally, IBM partnered withManipal Hospitalsin India to offer Watson's expertise to patients online.[102][103]
The company ultimately faced challenges in the healthcare market, with no profit and increased competition.[91]In 2022, IBM announced the sell-off of its Watson Health unit to Francisco Partners, marking a significant shift in the company's approach to the healthcare industry.[91][76]
On January 9, 2014, IBM announced it was creating a business unit around Watson.[104]The IBM Watson Group would have its headquarters in New York City's Silicon Alley and employ 2,000 people, with IBM investing $1 billion to get the division going. Watson Group was to develop three new cloud-delivered services: Watson Discovery Advisor, Watson Engagement Advisor, and Watson Explorer. Watson Discovery Advisor would focus on research and development projects in the pharmaceutical industry, publishing, and biotechnology; Watson Engagement Advisor would focus on self-service applications using insights drawn from natural-language questions posed by business users; and Watson Explorer would focus on helping enterprise users uncover and share data-driven insights based on federated search more easily.[104]The company also launched a $100 million venture fund to spur development of "cognitive" applications. According to IBM, the cloud-delivered, enterprise-ready Watson had seen its speed increase 24-fold (a 2,300 percent improvement in performance) and its physical size shrink by 90 percent, from the size of a master bedroom to three stacked pizza boxes.[104]IBM CEO Virginia Rometty said she wanted Watson to generate $10 billion in annual revenue within ten years.[105]

In 2017, IBM and MIT established a new joint research venture in artificial intelligence. IBM invested $240 million to create the MIT–IBM Watson AI Lab in partnership with MIT, bringing together researchers in academia and industry to advance AI research, with projects ranging from computer vision and NLP to devising new ways to ensure that AI systems are fair, reliable and secure.[106]In March 2018, Rometty proposed "Watson's Law," the "use of and application of business, smart cities, consumer applications and life in general."[107]
|
https://en.wikipedia.org/wiki/Watson_(computer)
|
Compound-term processing, in information retrieval, is search result matching on the basis of compound terms. Compound terms are built by combining two or more simple terms; for example, "triple" is a single-word term, but "triple heart bypass" is a compound term.
Compound-term processing is a new approach to an old problem: how can one improve the relevance of search results while maintaining ease of use? Using this technique, a search forsurvival rates following a triple heart bypass in elderly peoplewill locate documents about this topic even if this precise phrase is not contained in any document. This can be performed by aconcept search, which itself uses compound-term processing. This will extract the key concepts automatically (in this case "survival rates", "triple heart bypass" and "elderly people") and use these concepts to select the most relevant documents.
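As a rough illustration of the statistical flavour of compound-term extraction, the sketch below counts adjacent word pairs in a tiny invented corpus and keeps the recurring ones as candidate compound terms; the corpus and the frequency threshold are arbitrary, and this is not any vendor's algorithm.

```python
# Toy illustration of statistical compound-term extraction: adjacent word pairs
# that recur across documents are kept as candidate compound terms.
# The corpus and threshold are invented; this is not any vendor's algorithm.
from collections import Counter

corpus = [
    "survival rates after a triple heart bypass",
    "triple heart bypass outcomes in elderly people",
    "elderly people and survival rates",
]

bigrams = Counter()
for doc in corpus:
    words = doc.split()
    bigrams.update(zip(words, words[1:]))

compounds = [" ".join(pair) for pair, count in bigrams.items() if count > 1]
print(compounds)  # ['survival rates', 'triple heart', 'heart bypass', 'elderly people']
```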
In August 2003,Concept Searching Limitedintroduced the idea of using statistical compound-term processing.[1]
CLAMOUR is a European collaborative project which aims to find a better way to classify when collecting and disseminating industrial information and statistics. CLAMOUR appears to use a linguistic approach, rather than one based onstatistical modelling.[2]
Techniques for probabilistic weighting of single-word terms date back to at least 1976 in the landmark publication byStephen E. RobertsonandKaren Spärck Jones.[3]Robertson stated that the assumption of word independence is not justified and is made as a matter of mathematical convenience. His objection to term independence was not a new idea, dating back to at least 1964, when H. H. Williams stated that "[t]he assumption of independence of words in a document is usually made as a matter of mathematical convenience".[4]
In 2004, Anna Lynn Patterson filed patents on "phrase-based searching in an information retrieval system"[5]to whichGooglesubsequently acquired the rights.[6]
Statistical compound-term processing is more adaptable than the process described by Patterson. Her process is targeted at searching theWorld Wide Webwhere an extensive statistical knowledge of common searches can be used to identify candidate phrases. Statistical compound term processing is more suited toenterprise searchapplications where sucha prioriknowledge is not available.
Statistical compound-term processing is also more adaptable than the linguistic approach taken by the CLAMOUR project, which must consider the syntactic properties of the terms (i.e. part of speech, gender, number, etc.) and their combinations. CLAMOUR is highly language-dependent, whereas the statistical approach is language-independent.
Compound-term processing allows information-retrieval applications, such assearch engines, to perform their matching on the basis of multi-word concepts, rather than on single words in isolation which can be highly ambiguous.
Early search engines looked for documents containing the words entered by the user into the search box. These are known as keyword search engines. Boolean search engines add a degree of sophistication by allowing the user to specify additional requirements. For example, "Tiger NEAR Woods AND (golf OR golfing) NOT Volkswagen" uses the operators "NEAR", "AND", "OR" and "NOT" to specify that these words must satisfy certain requirements. A phrase search is simpler to use, but requires that the exact phrase specified appear in the results.
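A minimal sketch of how the Boolean constraints in the example above could be evaluated over a hypothetical three-document collection follows (the NEAR proximity operator is left out here):

```python
# Sketch of evaluating simple Boolean constraints over a tiny invented document set
# (the NEAR proximity operator is omitted from this example).
docs = {
    1: "tiger woods wins golf tournament",
    2: "tiger spotted in the woods near a volkswagen dealership",
    3: "golfing tips from tiger woods",
}

def words(text):
    return set(text.split())

# Roughly: tiger AND woods AND (golf OR golfing) NOT volkswagen
hits = [doc_id for doc_id, text in docs.items()
        if {"tiger", "woods"} <= words(text)
        and {"golf", "golfing"} & words(text)
        and "volkswagen" not in words(text)]
print(hits)  # [1, 3]
```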
|
https://en.wikipedia.org/wiki/Compound-term_processing
|
Aforeign language writing aidis acomputer programor any other instrument that assists a non-native language user (also referred to as a foreign language learner) in writing decently in their target language. Assistive operations can be classified into two categories: on-the-fly prompts and post-writing checks. Assisted aspects of writing include:lexical,syntactic(syntactic and semantic roles of a word's frame),lexical semantic(context/collocation-influenced word choice and user-intention-drivensynonymchoice) andidiomaticexpression transfer, etc. Different types offoreign languagewriting aids include automated proofreading applications,text corpora,dictionaries,translationaids andorthographyaids.
The four major components in the acquisition of a language are listening, speaking, reading and writing.[1]While most people have no difficulties in exercising these skills in their native language, doing so in a second or foreign language is not as easy. In the area of writing, research has found that foreign language learners find it painstaking to compose in the target language, producing less eloquent sentences and encountering difficulties in the revision of their written work. However, these difficulties are not attributed to their linguistic abilities.[2]
Many language learners experienceforeign language anxiety, feelings of apprehensiveness and nervousness, when learning a second language.[1]In the case of writing in a foreign language, this anxiety can be alleviated via foreign language writing aids as they assist non-native language users in independently producing decent written work at their own pace, hence increasing confidence about themselves and their own learning abilities.[3]
With advancements in technology, aids in foreign language writing are no longer restricted to traditional mediums such as teacher feedback and dictionaries. Known ascomputer-assisted language learning(CALL), use of computers in language classrooms has become more common, and one example would be the use ofword processorsto assist learners of a foreign language in the technical aspects of their writing, such asgrammar.[4]In comparison with correction feedback from the teacher, the use of word processors is found to be a better tool in improving the writing skills of students who are learningEnglish as a foreign language(EFL), possibly because students find it more encouraging to learn their mistakes from a neutral and detached source.[3]Apart from learners' confidence in writing, their motivation and attitudes will also improve through the use of computers.[2]
Foreign language learners' awareness of the conventions in writing can be improved through reference to guidelines showing the features and structure of the target genre.[2]At the same time, interactions and feedback help to engage the learners and expedite their learning, especially with active participation.[5]In online writing situations, learners are isolated without face-to-face interaction with others. Therefore, a foreign language writing aid should provide interaction and feedback so as to ease the learning process. This complements communicative language teaching (CLT), a teaching approach that highlights interaction as both the means and the aim of learning a language.
In accordance with the simple view of writing, both lower-order and higher-order skills are required. Lower-order skills involve spelling and transcription, whereas higher-order skills involve ideation, which refers to idea generation and organisation.[6]Proofreading is helpful for non-native language users in minimising errors while writing in a foreign language. Spell checkers and grammar checkers are two applications that aid in the automatic proofreading of written work.[7]
To achieve writing competence in a non-native language, especially in an alphabetic language, spelling proficiency is of utmost importance.[8]Spelling proficiency has been identified as a good indicator of a learner's acquisition and comprehension of alphabetic principles in the target language.[9]Documented data on misspelling patterns indicate that the majority of misspellings fall under four categories: letter insertion, deletion, transposition and substitution.[10]In languages where the pronunciation of certain sequences of letters may be similar, misspellings may occur when the non-native language learner relies heavily on the sounds of the target language because they are unsure about the accurate spelling of the words.[11]The spell checker application is a type of writing aid that non-native language learners can rely on to detect and correct their misspellings in the target language.[12]
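The four error categories above correspond to the edit operations counted by the Damerau–Levenshtein distance. The sketch below suggests corrections from a made-up lexicon by that distance; it is an illustration of the idea rather than a description of any particular spell checker.

```python
# The four error categories above are exactly the edit operations counted by the
# Damerau-Levenshtein distance. The lexicon and the suggestion rule are toy examples.

def damerau_levenshtein(a: str, b: str) -> int:
    """Minimum insertions, deletions, substitutions and adjacent transpositions
    needed to turn a into b (restricted edit distance)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def suggest(word, lexicon, max_dist=1):
    """Return lexicon words within max_dist edits of the input, closest first."""
    scored = sorted((damerau_levenshtein(word, w), w) for w in lexicon)
    return [w for dist, w in scored if dist <= max_dist]

lexicon = ["receive", "believe", "friend", "separate"]
print(suggest("recieve", lexicon))  # ['receive'] - a single transposition away
```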
In general, spell checkers can operate in one of two modes: interactive spell checking or batch spell checking.[7]In the interactive mode, the spell checker detects and marks misspelled words with a squiggly underline as the words are being typed. On the other hand, batch spell checking is performed on a batch-by-batch basis when the appropriate command is entered. Spell checkers, such as those used in Microsoft Word, can operate in either mode.
Although spell checkers are commonplace in numerous software products, errors specifically made by learners of a target language may not be sufficiently catered for.[13]This is because generic spell checkers function on the assumption that their users are competent speakers of the target language, whose misspellings are primarily due to accidental typographical errors.[14]The majority of misspellings were found to be attributed to systematic competence errors instead of accidental typographical ones, with up to 48% of these errors failing to be detected or corrected by the generic spell checker used.[15]
In view of the deficiency of generic spell checkers, programs have been designed to gear towards non-native misspellings,[14]such as FipsCor and Spengels. In FipsCor, a combination of methods, such as the alpha-code method, phonological reinterpretation method and morphological treatment method, has been adopted in an attempt to create a spell checker tailored to French language learners.[11]On the other hand, Spengels is a tutoring system developed to aid Dutch children and non-native Dutch writers of English in accurate English spelling.[16]
Grammar(syntactical and morphological) competency is another indicator of a non-native speaker’s proficiency in writing in the target language.Grammar checkersare a type of computerised application which non-native speakers can make use of to proofread their writings as such programs endeavor to identify syntactical errors.[17]Grammar and style checking is recognized as one of the seven major applications ofNatural Language Processingand every project in this field aims to build grammar checkers into a writing aid instead of a robust man-machine interface.[17]
Currently, grammar checkers are incapable of inspecting the linguistic or even syntactic correctness of text as a whole. They are restricted in their usefulness in that they are only able to check a small fraction of all the possible syntactic structures. Grammar checkers are unable to detect semantic errors in a correctly structured syntax order; i.e. grammar checkers do not register the error when the sentence structure is syntactically correct but semantically meaningless.[18]
Although grammar checkers have largely concentrated on ensuring grammatical writing, the majority of them are modelled after native writers, neglecting the needs of non-native language users.[19]Much research has attempted to tailor grammar checkers to the needs of non-native language users. Granska, a Swedish grammar checker, has been worked on extensively by numerous researchers investigating grammar-checking properties for foreign language learners.[19][20]TheUniversidad Nacional de Educación a Distanciahas a computerised grammar checker for native Spanish speakers learning EFL, helping them identify and correct grammatical mistakes without feedback from teachers.[21]
Theoretically, the functions of a conventional spell checker can be incorporated into a grammar checker entirely and this is likely the route that the language processing industry is working towards.[18]In reality, internationally available word processors such as Microsoft Word have difficulties combining spell checkers and grammar checkers due to licensing issues; various proofing instrument mechanisms for a certain language would have been licensed under different providers at different times.[18]
Electronic corpora in the target language provide non-native language users with authentic examples of language use rather than fixed examples, which may not be reflected in daily interactions.[22]The contextualised grammatical knowledge acquired by non-native language users through exposure to authentic texts in corpora allows them to grasp the manner of sentence formation in the target language, enabling effective writing.[23]
Concordances set up through concordancing programs allow non-native language users to conveniently grasp lexico-grammatical patterns of the target language. Collocational frequencies of words (i.e. word-pairing frequencies) provide non-native language users with information about accurate grammar structures which can be used when writing in the target language.[22]Collocational information also enables non-native language users to make clearer distinctions between words and expressions commonly regarded as synonyms. In addition, corpus information about semantic prosody, i.e. appropriate choices of words to be used in positive and negative co-texts, is available as a reference for non-native language users in writing. The corpora can also be used to check the acceptability or syntactic "grammaticality" of their written work.[24]
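A minimal sketch of how collocational frequencies can be read off a corpus follows: it counts which words most often appear immediately after a given node word in a few invented sentences, the kind of evidence a learner might use to compare "make a decision" against "do a decision".

```python
# Toy sketch of collocational frequency: count which words most often appear
# immediately after a node word in a small corpus. The sentences are invented.
from collections import Counter

corpus = [
    "the committee made a decision last week",
    "she made a mistake in the report",
    "they did their homework and made progress",
    "he did a favour for his neighbour",
]

def right_collocates(node, sentences):
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words[:-1]):
            if w == node:
                counts[words[i + 1]] += 1
    return counts

print(right_collocates("made", corpus).most_common(3))  # [('a', 2), ('progress', 1)]
print(right_collocates("did", corpus).most_common(3))   # [('their', 1), ('a', 1)]
```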
A survey conducted onEnglish as a Second Language(ESL) students revealed corpus activities to be generally well received and thought to be especially useful for learning word usage patterns and improving writing skills in the foreign language.[23]It was also found that students' writings became more natural after using two online corpora in a 90-minute training session.[25]In recent years, there were also suggestions to incorporate the applications of corpora into EFL writing courses in China to improve the writing skills of learners.[26]
Dictionaries of the target learning languages are commonly recommended to non-native language learners.[27]They serve as reference tools by offering definitions, phonetic spelling, word classes and sample sentences.[22]It was found that the use of a dictionary can help learners of a foreign language write better if they know how to use them.[28]Foreign language learners can make use of grammar-related information from the dictionary to select appropriate words, check the correct spelling of a word and look upsynonymsto add more variety to their writing.[28]Nonetheless, learners have to be careful when using dictionaries as the lexical-semantic information contained in dictionaries might not be sufficient with regards to language production in a particular context and learners may be misled into choosing incorrect words.[29]
Presently, many notable dictionaries are available online and basic usage is usually free. These online dictionaries allow learners of a foreign language to find references for a word much faster and more conveniently than with a manual version, thus minimising the disruption to the flow of writing.[30]Online dictionaries available can be found under thelist of online dictionaries.
Dictionaries come in different levels of proficiency, such as advanced, intermediate and beginner, which learners can choose according to the level best suited to them. There are many different types of dictionaries available, such as thesauruses or bilingual dictionaries, which cater to the specific needs of a learner of a foreign language. In recent years, there have also been specialised dictionaries for foreign language learners that employ natural language processing tools to assist in the compilation of dictionary entries by generating feedback on the vocabulary that learners use and automatically providing inflectional and/or derivational forms for referencing items in the explanations.[31]
The word thesaurus means 'treasury' or 'storehouse' in Greek and Latin. Although it is used to refer to several varieties of language resources, it is most commonly known as a book that groups words in synonym clusters and related meanings.[32]Its original sense of 'dictionary or encyclopedia' has been overshadowed by the emergence of the Roget-style thesaurus,[32]and it is considered a writing aid as it helps writers with the selection of words.[33]The differences between a Roget-style thesaurus and a dictionary are the indexing and the information given: the words in a thesaurus are grouped by meaning, usually without definitions, while those in a dictionary are in alphabetical order with definitions.[33]When users are unable to find a word in a dictionary, it is usually due to the constraint of searching alphabetically by common and well-known headwords; the use of a thesaurus eliminates this issue by allowing users to search for a word through another word based on concept.[34]
Foreign language learners can make use of a thesaurus to find near-synonyms of a word to expand their vocabulary skills and add variety to their writing. Many word processors are equipped with a basic thesaurus function, allowing learners to change a word to another similar word with ease. However, learners must be mindful that even if the words are near-synonyms, they might not be suitable replacements depending on the context.[33]
Spelling dictionaries are reference materials that specifically aid users in finding the correct spelling of a word. Unlike common dictionaries, spelling dictionaries do not typically provide definitions and other grammar-related information about the words. While typical dictionaries can be used to check or search for correct spellings, new and improved spelling dictionaries can assist users in finding the correct spelling of words even when the first letter is unknown or known imperfectly.[35]This circumvents the alphabetic ordering limitations of a classic dictionary.[34]These spelling dictionaries are especially useful for foreign language learners, as the inclusion of concise definitions and suggestions for commonly confused words helps learners choose the correct spellings of words that sound alike or are pronounced wrongly by them.[35]
A personal spelling dictionary, being a collection of a single learner's regularly misspelled words, is tailored to the individual and can be expanded with new entries that the learner does not know how to spell, or contracted when the learner has mastered those words.[36]Learners also use the personal spelling dictionary more than electronic spell checkers, and additions can easily be made to enhance it as a learning tool, as it can include things like rules for writing and proper nouns, which are not included in electronic spell checkers.[36]Studies also suggest that personal spelling dictionaries are better tools for learners to improve their spelling than trying to memorise unrelated words from lists or books.[37]
Current research has shown that language learners utilise dictionaries predominantly to check meanings and that bilingual dictionaries are preferred over monolingual dictionaries for these uses.[38]Bilingual dictionaries have proved helpful for learners of a new language, although in general they hold less extensive coverage of information than monolingual dictionaries.[30]Nonetheless, good bilingual dictionaries capitalize on their usefulness for learners by integrating helpful information about commonly known errors, false friends and contrastive problems between the two languages.[30]
Studies have shown that learners of English have benefited from the use of bilingual dictionaries on their production and comprehension of unknown words.[39]When using bilingual dictionaries, learners also tend to read entries in both native and target languages[39]and this helps them to map the meanings of the target word in the foreign language onto its counterpart in their native language. It was also found that the use of bilingual dictionaries improves the results of translation tasks by learners of ESL, thus showing that language learning can be enhanced with the use of bilingual dictionaries.[40]
The use of bilingual dictionaries in foreign language writing tests remains a subject of debate. Some studies support the view that the use of a dictionary in a foreign language examination increases the mean score of the test, and this was one of the factors that influenced the decision to ban dictionaries from several foreign language tests in the UK.[41]More recent studies, however, show that research into the use of bilingual dictionaries during writing tests has found no significant differences in test scores attributable to the use of a dictionary.[42]Nevertheless, from the perspective of foreign language learners, being able to use a bilingual dictionary during a test is reassuring and increases their confidence.[43]
There are many free translation aids online, also known as machine translation (MT) engines, such as Google Translate and Babel Fish (now defunct), that allow foreign language learners to translate between their native language and the target language quickly and conveniently.[44]Computerised translation tools fall into three major categories: computer-assisted translation (CAT), terminology data banks, and machine translation. Machine translation is the most ambitious of the three, as it is designed to handle the whole process of translation without any human intervention.[45]
Studies have shown that translation into the target language can be used to improve the linguistic proficiency of foreign language learners.[46]Machine translation aids help beginner learners of a foreign language to write more and produce better quality work in the target language; writing directly in the target language without any aid requires more effort on the learners' part, resulting in the difference in quantity and quality.[44]
However, teachers advise learners against the use of machine translation aids because their output is highly misleading and unreliable, producing wrong translations most of the time.[47]Over-reliance on the aids also hinders the development of learners' writing skills, and is viewed as an act of plagiarism since the language used is not technically produced by the student.[47]
Theorthographyof a language is the usage of a specific script to write a language according to a conventionalised usage.[48]One’s ability to read in a language is further enhanced by a concurrent learning of writing.[49]This is because writing is a means of helping the language learner recognise and remember the features of the orthography, which is particularly helpful when the orthography has irregular phonetic-to-spelling mapping.[49]This, in turn, helps the language learner to focus on the components which make up the word.[49]
Online orthography aids[50]provide language learners with a step-by-step process for learning how to write characters. These are especially useful for learners of languages with logographic writing systems, such as Chinese or Japanese, in which the ordering of strokes for characters is important. Alternatively, tools like Skritter provide an interactive way of learning via a system similar to writing tablets,[51][better source needed]albeit on computers, while providing feedback on stroke ordering and progress.
Handwriting recognitionis supported on certain programs,[52]which help language learners in learning the orthography of the target language. Practice of orthography is also available in many applications, with tracing systems in place to help learners with stroke orders.[53]
Apart from online orthography programs, offline orthography aids for language learners of logographic languages are also available. Character cards, which contain lists of frequently used characters of the target language, serve as a portable form of visual writing aid for language learners of logographic languages who may face difficulties in recalling the writing of certain characters.[54]
Studies have shown that tracing logographic characters improves the word recognition abilities of foreign language learners, as well as their ability to map the meanings onto the characters.[55]This, however, does not improve their ability to link pronunciation with characters, which suggests that these learners need more than orthography aids to help them in mastering the language in both writing and speech.[56]
|
https://en.wikipedia.org/wiki/Foreign-language_writing_aid
|
The following is a list of current and past, non-classified notableartificial intelligenceprojects.
|
https://en.wikipedia.org/wiki/List_of_natural_language_processing_projects
|
The LRE Map (Language Resources and Evaluation) is a freely accessible, large database of resources dedicated to natural language processing. A distinctive feature of the LRE Map is that its records are collected during the submission process of major natural language processing conferences. The records are then cleaned and gathered into a global database called the "LRE Map".[1]
The LRE Map is intended to be an instrument for collecting information about language resources and to become, at the same time, a community for users, a place to share and discover resources, discuss opinions, provide feedback, discover new trends, etc. It is an instrument for discovering, searching and documenting language resources, here intended in a broad sense, as both data and tools.
The large amount of information contained in the Map can be analyzed in many different ways. For instance, the LRE Map can provide information about the most frequent type of resource, the most represented language, the applications for which resources are used or are being developed, the proportion of new resources vs. already existing ones, or the way in which resources are distributed to the community.
Several institutions worldwide maintain catalogues of language resources (ELRA, LDC, NICT Universal Catalogue, ACL Data and Code Repository, OLAC, LT World, etc.).[2]However, it has been estimated that only 10% of existing resources are known, either through distribution catalogues or via direct publicity by providers (web sites and the like). The rest remains hidden; the only occasions where a resource briefly emerges are when it is presented in the context of a research paper or report at some conference. Even then, it might be that a resource remains in the background simply because the focus of the research is not on the resource per se.
The LRE Map originated under the name "LREC Map" during the preparation ofLREC2010 conference.[3]More specifically, the idea was discussed within the FlaReNet project, and in collaboration withELRAand theInstitute of Computational Linguistics of CNR in Pisa, the Map was put in place at LREC 2010.[4]The LREC organizers asked the authors to provide some basic information about all the resources (in a broad sense, i.e. including tools, standards and evaluation packages), either used or created, described in their papers. All these descriptors were then gathered in a global matrix called the LREC Map.
The same methodology and requirements for authors have since been applied and extended to other conferences, namely COLING-2010,[5]EMNLP-2010,[6]RANLP-2011,[7]LREC 2012,[8]LREC 2014[9]and LREC 2016.[10]After this generalization to other conferences, the LREC Map was renamed the LRE Map.
The size of the database increases over time. The data collected amount to 4776 entries.
Each resource is described according to the following attributes:
The LRE Map is an important tool for charting the NLP field. Compared to other studies based on subjective scorings, the LRE Map is built from recorded facts.
The map has a great potential for many uses, in addition to being an information gathering tool:
The data were then cleaned and sorted byJoseph Mariani(CNRS-LIMSI IMMI) andGil Francopoulo(CNRS-LIMSI IMMI + Tagmatica) in order to compute the various matrices of the final FLaReNet[11]reports. One of them, the matrix for written data at LREC 2010 is as follows:
English is the most studied language, followed by French and German, and then Italian and Spanish.
The LRE Map has been extended to Language Resources and Evaluation Journal[12]and other conferences.
|
https://en.wikipedia.org/wiki/LRE_Map
|
A spoken dialog system (SDS) is a computer system able to converse with a human using voice. It has two essential components that do not exist in a written-text dialog system: a speech recognizer and a text-to-speech module (written-text dialog systems usually use other input systems provided by an OS). It can be further distinguished from command and control speech systems, which can respond to requests but do not attempt to maintain continuity over time.
Spoken dialog systems vary in their complexity. Directed dialog systems are very simple and require that the developer create a graph (typically a tree) that manages the task but may not correspond to the needs of the user. Information access systems, typically based on forms, allow users some flexibility (for example in the order in which retrieval constraints are specified, or in the use of optional constraints) but are limited in their capabilities. Problem-solving dialog systems may allow human users to engage in a number of different activities that may include information access, plan construction and possible execution of the latter.
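A directed dialog of the kind described above can be sketched as a developer-authored tree in which each user answer selects the next prompt. The banking prompts below are invented examples, not any particular deployed system.

```python
# Minimal sketch of a directed dialog system: the developer-authored tree fixes
# which prompt follows each answer. Prompts and options are invented examples.
dialog_tree = {
    "start": ("Do you want balances or transfers?",
              {"balances": "which_account", "transfers": "transfer_amount"}),
    "which_account": ("Checking or savings?",
                      {"checking": "done", "savings": "done"}),
    "transfer_amount": ("How much would you like to transfer?", {}),
    "done": ("Thank you, goodbye.", {}),
}

def run(answers):
    """Walk the tree using a scripted list of user answers."""
    node = "start"
    for answer in answers:
        prompt, branches = dialog_tree[node]
        print(f"SYSTEM: {prompt}")
        print(f"USER:   {answer}")
        node = branches.get(answer, node)   # unrecognised answers re-prompt
    print(f"SYSTEM: {dialog_tree[node][0]}")

run(["balances", "checking"])
```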
Some examples of systems include:
Pionieers in dialogue systems are companies likeAT&T(with its speech recognizer system in the Seventies) andCSELTlaboratories, that led some European research projects during the Eighties (e.g. SUNDIAL) after the end of the DARPA project in the US.
The field of spoken dialog systems is quite large and includes research (featured at scientific conferences such asSIGdialandInterspeech) and a large industrial sector (with its own meetings such asSpeechTekandAVIOS).
The following might provide good technical introductions:
|
https://en.wikipedia.org/wiki/Spoken_dialogue_system
|
Transderivational search (often abbreviated to TDS) is a psychological and cybernetics term for a search conducted for a fuzzy match across a broad field. In computing, the equivalent function can be performed using content-addressable memory.
Unlike usual searches, which look for literal (i.e. exact,logical, orregular expression) matches, a transderivational search is a search for a possible meaning or possible match as part of communication, and without which an incoming communication cannot be made any sense of whatsoever. It is thus an integral part of processinglanguage, and of attachingmeaningtocommunication.
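As a rough computing analogue of such fuzzy matching, approximate string matching retrieves the closest stored item rather than requiring an exact key. The sketch below uses Python's standard difflib module over a set of invented stored phrases.

```python
# Rough computing analogue of the fuzzy matching described above: instead of an
# exact lookup, find the stored item that best approximates the input.
# The stored phrases are invented examples.
from difflib import get_close_matches

stored_memories = ["a warm summer evening", "the first day of school",
                   "an argument with a friend"]

# No exact match exists, but the closest stored phrase is still retrieved.
print(get_close_matches("first school day", stored_memories, n=1, cutoff=0.3))
# ['the first day of school']
```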
In NLP (neuro-linguistic programming), a transderivational search (Bandler and Grinder, 1976) is essentially the process of searching back through one's stored memories and mental representations to find the personal reference experiences from which a current understanding or mental map has been derived.[1]
By the end of 1976, Grinder and Bandler had combined Satir’s and Perls’ language patterns and Erickson’s hypnotic language and use of metaphor with anchoring to create new processes that they called collapsing anchors, trans-derivational search, changing personal history, and reframing.[2]
A psychological example of TDS is inEricksonianhypnotherapy, where vague suggestions are used that the patient must process intensely in order to find their own meanings, thus ensuring that the practitioner does not intrude his own beliefs into the subject's inner world.
Because TDS is a compelling, automatic and unconscious state of internal focus and processing (i.e. a type of everyday trance state), and often a state of internal lack of certainty, or openness to finding an answer (since something is being checked out at that moment), it can be utilized or interrupted, in order to create, or deepen, trance.
TDS is a fundamental part of human language and cognitive processing. Arguably, every word or utterance a person hears, for example, and everything they see or feel and take note of, results in a very brief trance while TDS is carried out to establish a contextual meaning for it.
Leading statements:
Textual ambiguity:
Although TDS is often associated with spoken language, it can be induced in any perceptual system. Thus Milton Erickson's "hypnotic handshake" is a technique that leaves the other person performing TDS in search of meaning to a deliberately ambiguous use of touch.
|
https://en.wikipedia.org/wiki/Transderivational_search
|
Asearch engineis asoftware systemthat provideshyperlinkstoweb pagesand other relevant information onthe Webin response to a user'squery. The userinputsa query within aweb browseror amobile app, and thesearch resultsare often a list of hyperlinks, accompanied by textual summaries and images. Users also have the option of limiting the search to a specific type of results, such as images, videos, or news.
For a search provider, itsengineis part of adistributed computingsystem that can encompass manydata centersthroughout the world. The speed and accuracy of an engine's response to a query is based on a complex system ofindexingthat is continuously updated by automatedweb crawlers. This can includedata miningthefilesanddatabasesstored onweb servers, but some content isnot accessibleto crawlers.
There have been many search engines since the dawn of the Web in the 1990s, butGoogle Searchbecame the dominant one in the 2000s and has remained so. It currently has a 90% global market share.[1][2]The business ofwebsitesimproving their visibility insearch results, known asmarketingandoptimization, has thus largely focused on Google.
In 1945,Vannevar Bushdescribed an information retrieval system that would allow a user to access a great expanse of information, all at a single desk.[3]He called it amemex. He described the system in an article titled "As We May Think" that was published inThe Atlantic Monthly.[4]The memex was intended to give a user the capability to overcome the ever-increasing difficulty of locating information in ever-growing centralized indices of scientific work. Vannevar Bush envisioned libraries of research with connected annotations, which are similar to modernhyperlinks.[5]
Link analysiseventually became a crucial component of search engines through algorithms such asHyper SearchandPageRank.[6][7]
The first internet search engines predate the debut of the Web in December 1990:WHOISuser search dates back to 1982,[8]and theKnowbot Information Servicemulti-network user search was first implemented in 1989.[9]The first well documented search engine that searched content files, namelyFTPfiles, wasArchie, which debuted on 10 September 1990.[10]
Prior to September 1993, theWorld Wide Webwas entirely indexed by hand. There was a list ofwebserversedited byTim Berners-Leeand hosted on theCERNwebserver. One snapshot of the list in 1992 remains,[11]but as more and more web servers went online the central list could no longer keep up. On theNCSAsite, new servers were announced under the title "What's New!".[12]
The first tool used for searching content (as opposed to users) on the Internet was Archie.[13]The name stands for "archive" without the "v".[14]It was created by Alan Emtage,[14][15][16][17]a computer science student at McGill University in Montreal, Quebec, Canada. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie Search Engine did not index the contents of these sites since the amount of data was so limited it could be readily searched manually.
The rise ofGopher(created in 1991 byMark McCahillat theUniversity of Minnesota) led to two new search programs,VeronicaandJughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name of the search engine "Archie Search Engine" was not a reference to theArchie comic bookseries, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor.
In the summer of 1993, no search engine existed for the web, though numerous specialized catalogs were maintained by hand.Oscar Nierstraszat theUniversity of Genevawrote a series ofPerlscripts that periodically mirrored these pages and rewrote them into a standard format. This formed the basis forW3Catalog, the web's first primitive search engine, released on September 2, 1993.[18]
In June 1993, Matthew Gray, then atMIT, produced what was probably the firstweb robot, thePerl-basedWorld Wide Web Wanderer, and used it to generate an index called "Wandex". The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engineAliwebappeared in November 1993. Aliweb did not use aweb robot, but instead depended on being notified bywebsite administratorsof the existence at each site of an index file in a particular format.
JumpStation(created in December 1993[19]byJonathon Fletcher) used aweb robotto find web pages and to build its index, and used aweb formas the interface to its query program. It was thus the firstWWWresource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform it ran on, its indexing and hence searching were limited to the titles and headings found in theweb pagesthe crawler encountered.
One of the first "all text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it allowed users to search for any word in any web page, which has become the standard for all major search engines since. It was also the first search engine to be widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University) was launched and became a major commercial endeavor.
The first popular search engine on the Web wasYahoo! Search.[20]The first product fromYahoo!, founded byJerry YangandDavid Filoin January 1994, was aWeb directorycalledYahoo! Directory. In 1995, a search function was added, allowing users to search Yahoo! Directory.[21][22]It became one of the most popular ways for people to find web pages of interest, but its search function operated on its web directory, rather than its full-text copies of web pages.
Soon after, a number of search engines appeared and vied for popularity. These includedMagellan,Excite,Infoseek,Inktomi,Northern Light, andAltaVista. Information seekers could also browse the directory instead of doing a keyword-based search.
In 1996,Robin Lideveloped theRankDexsite-scoringalgorithmfor search engines results page ranking[23][24][25]and received a US patent for the technology.[26]It was the first search engine that usedhyperlinksto measure the quality of websites it was indexing,[27]predating the very similar algorithm patent filed byGoogletwo years later in 1998.[28]Larry Pagereferenced Li's work in some of his U.S. patents for PageRank.[29]Li later used his Rankdex technology for theBaidusearch engine, which was founded by him in China and launched in 2000.
In 1996,Netscapewas looking to give a single search engine an exclusive deal as the featured search engine on Netscape's web browser. There was so much interest that instead, Netscape struck deals with five of the major search engines: for $5 million a year, each search engine would be in rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite.[30][31]
Google adopted the idea of selling search terms in 1998 from a small search engine company named goto.com. This move had a significant effect on the search engine business, which went from struggling to being one of the most profitable businesses on the Internet.[32][33]
Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s.[34]Several companies entered the market spectacularly, receiving record gains during theirinitial public offerings. Some have taken down their public search engine and are marketing enterprise-only editions, such as Northern Light. Many search engine companies were caught up in thedot-com bubble, a speculation-driven market boom that peaked in March 2000.
Around 2000,Google's search enginerose to prominence.[35]The company achieved better results for many searches with an algorithm calledPageRank, as was explained in the paperAnatomy of a Search Enginewritten bySergey BrinandLarry Page, the later founders of Google.[7]Thisiterative algorithmranks web pages based on the number and PageRank of other web sites and pages that link there, on the premise that good or desirable pages are linked to more than others. Larry Page's patent for PageRank citesRobin Li's earlierRankDexpatent as an influence.[29][25]Google also maintained a minimalist interface to its search engine. In contrast, many of its competitors embedded a search engine in aweb portal. In fact, the Google search engine became so popular that spoof engines emerged such asMystery Seeker.
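A minimal power-iteration sketch of the PageRank idea on a tiny invented link graph follows; real engines operate on vastly larger graphs and combine this score with many other signals.

```python
# Minimal power-iteration sketch of the PageRank idea on a tiny invented link graph;
# real engines use far larger graphs and combine the score with many other signals.
damping = 0.85
links = {            # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

rank = {page: 1 / len(links) for page in links}
for _ in range(50):  # iterate until the scores settle
    rank = {
        page: (1 - damping) / len(links)
              + damping * sum(rank[src] / len(out)
                              for src, out in links.items() if page in out)
        for page in links
    }

print(sorted(rank.items(), key=lambda item: item[1], reverse=True))
# C gathers links from both A and B, so it ends up with the highest score.
```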
By 2000,Yahoo!was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, andOverture(which ownedAlltheWeband AltaVista) in 2003. Yahoo! switched to Google's search engine until 2004, when it launched its own search engine based on the combined technologies of its acquisitions.
Microsoftfirst launched MSN Search in the fall of 1998 using search results from Inktomi. In early 1999, the site began to display listings fromLooksmart, blended with results from Inktomi. For a short time in 1999, MSN Search used results from AltaVista instead. In 2004,Microsoftbegan a transition to its own search technology, powered by its ownweb crawler(calledmsnbot).
Microsoft's rebranded search engine,Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in whichYahoo! Searchwould be powered by Microsoft Bing technology.
As of 2019,[update]active search engine crawlers include those of Google,Sogou, Baidu, Bing,Gigablast,Mojeek,DuckDuckGoandYandex.
A search engine maintains the following processes in near real time:[36]
Web search engines get their information byweb crawlingfrom site to site. The "spider" checks for the standard filenamerobots.txt, addressed to it. The robots.txt file contains directives for search spiders, telling it which pages to crawl and which pages not to crawl. After checking for robots.txt and either finding it or not, the spider sends certain information back to beindexeddepending on many factors, such as the titles, page content,JavaScript,Cascading Style Sheets(CSS), headings, or itsmetadatain HTMLmeta tags. After a certain number of pages crawled, amount of data indexed, or time spent on the website, the spider stops crawling and moves on. "[N]o web crawler may actually crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site should be deemed sufficient. Some websites are crawled exhaustively, while others are crawled only partially".[38]
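The robots.txt check described above can be sketched with Python's standard library; the URLs below are placeholders, and a real crawler would also honour crawl delays and other politeness policies.

```python
# Sketch of the robots.txt check described above using Python's standard library.
# The URLs are placeholders; a real crawler also throttles requests and respects
# crawl-delay and sitemap hints where present.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's directives

for page in ["https://example.com/", "https://example.com/private/report"]:
    if rp.can_fetch("MyCrawler", page):
        print("allowed to crawl:", page)      # this page would be fetched and indexed
    else:
        print("disallowed by robots.txt:", page)
```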
Indexing means associating words and other definable tokens found on web pages to their domain names andHTML-based fields. The associations are stored in a public database and accessible through web search queries. A query from a user can be a single word, multiple words or a sentence. The index helps find information relating to the query as quickly as possible.[37]Some of the techniques for indexing, andcachingare trade secrets, whereas web crawling is a straightforward process of visiting all sites on a systematic basis.
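Indexing of this kind can be sketched as an inverted index mapping each token to the documents that contain it; the three "pages" below are invented stand-ins for crawled documents.

```python
# Minimal sketch of indexing: map each token to the documents that contain it.
# The three "pages" are invented stand-ins for crawled documents.
from collections import defaultdict

pages = {
    "page1.html": "watson wins jeopardy match",
    "page2.html": "jeopardy contestants answer trivia",
    "page3.html": "ibm watson answers questions",
}

inverted_index = defaultdict(set)
for url, text in pages.items():
    for token in text.split():
        inverted_index[token].add(url)

print(sorted(inverted_index["watson"]))    # ['page1.html', 'page3.html']
print(sorted(inverted_index["jeopardy"]))  # ['page1.html', 'page2.html']
```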
Between visits by thespider, thecachedversion of the page (some or all the content needed to render it) stored in the search engine working memory is quickly sent to an inquirer. If a visit is overdue, the search engine can just act as aweb proxyinstead. In this case, the page may differ from the search terms indexed.[37]The cached page holds the appearance of the version whose words were previously indexed, so a cached version of a page can be useful to the website when the actual page has been lost, but this problem is also considered a mild form oflinkrot.
Typically when a user enters aqueryinto a search engine it is a fewkeywords.[39]Theindexalready has the names of the sites containing the keywords, and these are instantly obtained from the index. The real processing load is in generating the web pages that are the search results list: Every page in the entire list must beweightedaccording to information in the indexes.[37]Then the top search result item requires the lookup, reconstruction, and markup of thesnippetsshowing the context of the keywords matched. These are only part of the processing each search results web page requires, and further pages (next to the top) require more of this post-processing.
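Using the same kind of invented pages, a minimal sketch of this query-time processing weights each page by how many query terms it contains and builds a crude snippet marking the matched words; the scoring rule is illustrative only.

```python
# Toy sketch of query-time processing: score each page by how many query terms it
# contains, then build a crude snippet. The pages and scoring rule are invented.
pages = {
    "page1.html": "watson wins jeopardy match",
    "page2.html": "jeopardy contestants answer trivia",
    "page3.html": "ibm watson answers questions",
}

query = {"watson", "jeopardy"}
scored = sorted(pages.items(),
                key=lambda item: len(query & set(item[1].split())),
                reverse=True)

for url, text in scored:
    hits = query & set(text.split())
    snippet = " ".join(f"*{w}*" if w in hits else w for w in text.split())
    print(f"{url}  ({len(hits)} terms matched)  {snippet}")
```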
Beyond simple keyword lookups, search engines offer their ownGUI- or command-driven operators and search parameters to refine the search results. These provide the necessary controls for the user engaged in the feedback loop users create byfilteringandweightingwhile refining the search results, given the initial pages of the first search results.
For example, from 2007 the Google.com search engine has allowed one tofilterby date by clicking "Show search tools" in the leftmost column of the initial search results page, and then selecting the desired date range.[40]It is also possible toweightby date because each page has a modification time. Most search engines support the use of theBoolean operatorsAND, OR and NOT to help end users refine thesearch query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search. The engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature calledproximity search, which allows users to define the distance between keywords.[37]There is alsoconcept-based searchingwhere the research involves using statistical analysis on pages containing the words or phrases you search for.
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another.[37]The methods also change over time as Internet usage changes and new techniques evolve. Two main types of search engine have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively; the other is a system that generates an "inverted index" by analyzing the texts it locates. This second form relies much more heavily on the computer itself to do the bulk of the work.
Most Web search engines are commercial ventures supported byadvertisingrevenue and thus some of them allow advertisers tohave their listings ranked higherin search results for a fee. Search engines that do not accept money for their search results make money by runningsearch related adsalongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.[41]
Local searchis the process that optimizes the efforts of local businesses. They focus on change to make sure all searches are consistent. It is important because many people determine where they plan to go and what to buy based on their searches.[42]
As of January 2022,[update]Googleis by far the world's most used search engine, with a market share of 90%, and the world's other most used search engines wereBingat 4%,Yandexat 2%,Yahoo!at 1%. Other search engines not listed have less than a 3% market share.[2]In 2024, Google's dominance was ruled an illegal monopoly in a case brought by the US Department of Justice.[43]
In Russia,Yandexhas a market share of 62.6%, compared to Google's 28.3%. Yandex is the second most used search engine on smartphones in Asia and Europe.[44]In China, Baidu is the most popular search engine.[45]South Korea-based search portalNaveris used for 62.8% of online searches in the country.[46]Yahoo! JapanandYahoo! Taiwanare the most popular choices for Internet searches in Japan and Taiwan, respectively.[47]China is one of few countries where Google is not in the top three web search engines for market share. Google was previously more popular in China, but withdrew significantly after a disagreement with the government over censorship and a cyberattack. Bing, however, is in the top three web search engines with a market share of 14.95%. Baidu is top with 49.1% of the market share.[48][failed verification]
Most countries' markets in the European Union are dominated by Google, except for theCzech Republic, whereSeznamis a strong competitor.[49]
The search engine Qwant is based in Paris, France, and attracts most of its 50 million monthly registered users from there.
Although search engines are programmed to rank websites based on some combination of their popularity and relevancy, empirical studies indicate various political, economic, and social biases in the information they provide[50][51] and the underlying assumptions about the technology.[52] These biases can be a direct result of economic and commercial processes (e.g., companies that advertise with a search engine can also become more popular in its organic search results), and of political processes (e.g., the removal of search results to comply with local laws).[53] For example, Google will not surface certain neo-Nazi websites in France and Germany, where Holocaust denial is illegal.
Biases can also be a result of social processes, as search engine algorithms are frequently designed to exclude non-normative viewpoints in favor of more "popular" results.[54]Indexing algorithms of major search engines skew towards coverage of U.S.-based sites, rather than websites from non-U.S. countries.[51]
Google Bombingis one example of an attempt to manipulate search results for political, social or commercial reasons.
Several scholars have studied the cultural changes triggered by search engines,[55]and the representation of certain controversial topics in their results, such asterrorism in Ireland,[56]climate change denial,[57]andconspiracy theories.[58]
There has been concern raised that search engines such as Google and Bing provide customized results based on the user's activity history, leading to what has been termed echo chambers or filter bubbles by Eli Pariser in 2011.[59] The argument is that search engines and social media platforms use algorithms to selectively guess what information a user would like to see, based on information about the user (such as location, past click behaviour and search history). As a result, websites tend to show only information that agrees with the user's past viewpoint. According to Eli Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble. Since this problem has been identified, competing search engines have emerged that seek to avoid it by not tracking or "bubbling" users, such as DuckDuckGo. However, many scholars have questioned Pariser's view, finding that there is little evidence for the filter bubble.[60][61][62] On the contrary, a number of studies trying to verify the existence of filter bubbles have found only minor levels of personalisation in search,[62] that most people encounter a range of views when browsing online, and that Google News tends to promote mainstream established news outlets.[63][61]
The global growth of the Internet and electronic media in the Arab and Muslim world during the last decade has encouraged Islamic adherents in the Middle East and the Asian sub-continent to attempt their own search engines, their own filtered search portals that would enable users to perform safe searches. Going beyond the usual safe search filters, these Islamic web portals categorize websites as either "halal" or "haram", based on interpretation of Sharia law. ImHalal came online in September 2011. Halalgoogling came online in July 2013. These apply haram filters to the collections from Google and Bing (and others).[64]
While a lack of investment and the slow pace of technology in the Muslim world have hindered progress and thwarted the success of an Islamic search engine aimed at Islamic adherents as its main consumers, projects like Muxlim (a Muslim lifestyle site) received millions of dollars from investors such as Rite Internet Ventures, and it also faltered. Other religion-oriented search engines are Jewogle, the Jewish version of Google,[65] and the Christian search engine SeekFind.org. SeekFind filters out sites that attack or degrade their faith.[66]
Web search engine submission is a process in which a webmaster submits a website directly to a search engine. While search engine submission is sometimes presented as a way to promote a website, it generally is not necessary because the major search engines use web crawlers that will eventually find most web sites on the Internet without assistance. They can either submit one web page at a time, or they can submit the entire site using asitemap, but it is normally only necessary to submit thehome pageof a web site as search engines are able to crawl a well designed website. There are two remaining reasons to submit a web site or web page to a search engine: to add an entirely new web site without waiting for a search engine to discover it, and to have a web site's record updated after a substantial redesign.
Some search engine submission software not only submits websites to multiple search engines, but also adds links to websites from their own pages. This could appear helpful in increasing a website'sranking, because external links are one of the most important factors determining a website's ranking. However, John Mueller ofGooglehas stated that this "can lead to a tremendous number of unnatural links for your site" with a negative impact on site ranking.[67]
In comparison to search engines, a social bookmarking system has several advantages over traditional automated resource location and classification software, such as search engine spiders. All tag-based classification of Internet resources (such as web sites) is done by human beings, who understand the content of the resource, as opposed to software, which algorithmically attempts to determine the meaning and quality of a resource. Also, people can find and bookmark web pages that have not yet been noticed or indexed by web spiders.[68] Additionally, a social bookmarking system can rank a resource based on how many times it has been bookmarked by users, which may be a more useful metric for end-users than systems that rank resources based on the number of external links pointing to it. However, both types of ranking are vulnerable to fraud (see Gaming the system), and both need technical countermeasures to try to deal with this.
The first web search engine wasArchie, created in 1990[69]byAlan Emtage, a student atMcGill Universityin Montreal. The author originally wanted to call the program "archives", but had to shorten it to comply with the Unix world standard of assigning programs and files short, cryptic names such as grep, cat, troff, sed, awk, perl, and so on.
The primary method of storing and retrieving files was via theFile Transfer Protocol(FTP). This was (and still is) a system that specified a common way for computers to exchange files over the Internet. It works like this: Some administrator decides that he wants to make files available from his computer. He sets up a program on his computer, called an FTP server. When someone on the Internet wants to retrieve a file from this computer, he or she connects to it via another program called an FTP client. Any FTP client program can connect with any FTP server program as long as the client and server programs both fully follow the specifications set forth in the FTP protocol.
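As a rough sketch of the client side of that exchange, Python's standard ftplib module can drive an anonymous FTP session; the host name, directory, and file name below are placeholders, not a real server.

from ftplib import FTP

# Connect to a (hypothetical) anonymous FTP server and fetch a file.
with FTP("ftp.example.org") as ftp:
    ftp.login()                       # anonymous login, no credentials needed
    ftp.cwd("pub")                    # move into the public directory
    ftp.retrlines("LIST")             # print the server's file listing
    with open("readme.txt", "wb") as f:
        ftp.retrbinary("RETR readme.txt", f.write)   # download one file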
Initially, anyone who wanted to share a file had to set up an FTP server in order to make the file available to others. Later, "anonymous" FTP sites became repositories for files, allowing all users to post and retrieve them.
Even with archive sites, many important files were still scattered on small FTP servers. These files could be located only by the Internet equivalent of word of mouth: Somebody would post an e-mail to a message list or a discussion forum announcing the availability of a file.
Archie changed all that. It combined a script-based data gatherer, which fetched site listings of anonymous FTP files, with a regular expression matcher for retrieving file names matching a user query. In other words, Archie's gatherer scoured FTP sites across the Internet and indexed all of the files it found. Its regular expression matcher provided users with access to its database.[70]
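A minimal sketch of that gatherer-plus-matcher design, in Python; the site names, file paths, and the archie_search function are invented for illustration and do not reflect Archie's actual implementation.

import re

# Hypothetical site listings, as a gatherer might have collected them
# from anonymous FTP servers.
ftp_listings = {
    "ftp.example.edu": ["pub/gnu/grep-1.2.tar.gz", "pub/tools/sed.tar"],
    "ftp.example.org": ["mirrors/perl-4.036.tar.gz", "docs/readme.txt"],
}

def archie_search(pattern):
    """Return (site, path) pairs whose file paths match the regular expression."""
    regex = re.compile(pattern)
    return [(site, path)
            for site, paths in ftp_listings.items()
            for path in paths
            if regex.search(path)]

print(archie_search(r"grep"))          # the one grep tarball in the index
print(archie_search(r"\.tar\.gz$"))    # every gzipped tarball in the index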
In 1993, the University of Nevada System Computing Services group developedVeronica.[69]It was created as a type of searching device similar to Archie but for Gopher files. Another Gopher search service, called Jughead, appeared a little later, probably for the sole purpose of rounding out the comic-strip triumvirate. Jughead is an acronym for Jonzy's Universal Gopher Hierarchy Excavation and Display, although, like Veronica, it is probably safe to assume that the creator backed into the acronym. Jughead's functionality was pretty much identical to Veronica's, although it appears to be a little rougher around the edges.[70]
TheWorld Wide Web Wanderer, developed by Matthew Gray in 1993[71]was the first robot on the Web and was designed to track the Web's growth. Initially, the Wanderer counted only Web servers, but shortly after its introduction, it started to capture URLs as it went along. The database of captured URLs became the Wandex, the first web database.
Matthew Gray's Wanderer created quite a controversy at the time, partially because early versions of the software ran rampant through the Net and caused a noticeable netwide performance degradation. This degradation occurred because the Wanderer would access the same page hundreds of times a day. The Wanderer soon amended its ways, but the controversy over whether robots were good or bad for the Internet remained.
In response to the Wanderer, Martijn Koster created Archie-Like Indexing of the Web, or ALIWEB, in October 1993. As the name implies, ALIWEB was the HTTP equivalent of Archie, and because of this, it is still unique in many ways.
ALIWEB does not have a web-searching robot. Instead, webmasters of participating sites post their own index information for each page they want listed. The advantage of this method is that users get to describe their own site, and a robot does not run about eating up Net bandwidth. The disadvantages of ALIWEB are more of a problem today. The primary disadvantage is that a special indexing file must be submitted. Most users do not understand how to create such a file, and therefore they do not submit their pages. This leads to a relatively small database, which means that users are less likely to search ALIWEB than one of the large bot-based sites. This Catch-22 has been somewhat offset by incorporating other databases into the ALIWEB search, but it still does not have the mass appeal of search engines such as Yahoo! or Lycos.[70]
Excite, initially called Architext, was started by six Stanford undergraduates in February 1993. Their idea was to use statistical analysis of word relationships in order to provide more efficient searches through the large amount of information on the Internet.
Their project was fully funded by mid-1993. Once funding was secured, they released a version of their search software for webmasters to use on their own web sites. At the time, the software was called Architext, but it now goes by the name of Excite for Web Servers.[70]
Excite, which launched in 1995, was the first serious commercial search engine.[72] It was developed at Stanford and was purchased for $6.5 billion by @Home. In 2001, Excite and @Home went bankrupt, and InfoSpace bought Excite for $10 million.
Some of the first analysis of web searching was conducted on search logs from Excite.[73][39]
In April 1994, two Stanford University Ph.D. candidates,David FiloandJerry Yang, created some pages that became rather popular. They called the collection of pagesYahoo!Their official explanation for the name choice was that they considered themselves to be a pair of yahoos.
As the number of links grew and their pages began to receive thousands of hits a day, the team created ways to better organize the data. In order to aid in data retrieval, Yahoo! (www.yahoo.com) became a searchable directory. The search feature was a simple database search engine. Because Yahoo! entries were entered and categorized manually, Yahoo! was not really classified as a search engine. Instead, it was generally considered to be a searchable directory. Yahoo! has since automated some aspects of the gathering and classification process, blurring the distinction between engine and directory.
The Wanderer captured only URLs, which made it difficult to find things that were not explicitly described by their URL. Because URLs are rather cryptic to begin with, this did not help the average user. Searching Yahoo! or the Galaxy was much more effective because they contained additional descriptive information about the indexed sites.
At Carnegie Mellon University during July 1994, Michael Mauldin, on leave from CMU, developed theLycossearch engine.
Search engines on the web are sites that provide the facility to search content stored on other sites. Various search engines work differently, but they all perform three basic tasks: they find content on the web (or in a collection of submitted pages), they keep an index of the words they find and where they find them, and they allow users to look for words or combinations of words in that index.[74]
The process begins when a user enters a query statement into the system through the interface provided.
There are basically three types of search engines: those that are powered by robots (called crawlers, ants, or spiders); those that are powered by human submissions; and those that are a hybrid of the two.
Crawler-based search engines are those that use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags and also follow the links that the site connects to performing indexing on all linked Web sites as well. The crawler returns all that information back to a central depository, where the data is indexed. The crawler will periodically return to the sites to check for any information that has changed. The frequency with which this happens is determined by the administrators of the search engine.
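The following Python sketch shows the crawl-and-index loop in miniature; the in-memory "web", the PageParser class, and the crawl function are made up for illustration, and a real crawler would fetch pages over HTTP, obey robots.txt, and schedule revisits.

from html.parser import HTMLParser

# A hypothetical, in-memory "web": URL -> HTML.
web = {
    "http://a.example/": '<html><head><meta name="description" content="start page">'
                         '</head><body><a href="http://b.example/">next</a> hello</body></html>',
    "http://b.example/": '<html><body><a href="http://a.example/">back</a> world</body></html>',
}

class PageParser(HTMLParser):
    """Collect outgoing links, visible text, and meta tag content."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "href" in a:
            self.links.append(a["href"])
        elif tag == "meta" and "content" in a:
            self.text.append(a["content"])      # index meta descriptions too
    def handle_data(self, data):
        self.text.append(data)

def crawl(seed):
    """Breadth-first crawl: visit a page, index its words, follow its links."""
    index, frontier, seen = {}, [seed], set()
    while frontier:
        url = frontier.pop(0)
        if url in seen or url not in web:
            continue
        seen.add(url)
        parser = PageParser()
        parser.feed(web[url])
        for word in " ".join(parser.text).lower().split():
            index.setdefault(word, set()).add(url)
        frontier.extend(parser.links)
    return index

print(crawl("http://a.example/").get("world"))   # {'http://b.example/'}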
Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index.
In both cases, when you query a search engine to locate information, you're actually searching through the index that the search engine has created —you are not actually searching the Web. These indices are giant databases of information that is collected and stored and subsequently searched. This explains why sometimes a search on a commercial search engine, such as Yahoo! or Google, will return results that are, in fact, dead links. Since the search results are based on the index, if the index has not been updated since a Web page became invalid the search engine treats the page as still an active link even though it no longer is. It will remain that way until the index is updated.
So why will the same search on different search engines produce different results? Part of the answer to that question is because not all indices are going to be exactly the same. It depends on what the spiders find or what the humans submitted. But more important, not every search engine uses the same algorithm to search through the indices. The algorithm is what the search engines use to determine therelevanceof the information in the index to what the user is searching for.
One of the elements that a search engine algorithm scans for is the frequency and location of keywords on a Web page. Those with higher frequency are typically considered more relevant. But search engine technology is becoming sophisticated in its attempt to discourage what is known as keyword stuffing, or spamdexing.
Another common element that algorithms analyze is the way that pages link to other pages in the Web. By analyzing how pages link to each other, an engine can both determine what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking. Just as the technology is becoming increasingly sophisticated to ignore keyword stuffing, it is also becoming more savvy to Web masters who build artificial links into their sites in order to build an artificial ranking.
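One well-known form of this link analysis is a PageRank-style iteration; the sketch below uses a made-up four-page link graph and a simplified update rule, so it illustrates the idea rather than any engine's actual ranking code.

# Toy link graph: page -> pages it links to (invented data).
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}

def pagerank(links, damping=0.85, iterations=50):
    """Repeatedly redistribute rank along links (simplified PageRank)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for p, outgoing in links.items():
            targets = outgoing or pages          # dangling pages spread rank evenly
            for q in targets:
                new_rank[q] += damping * rank[p] / len(targets)
        rank = new_rank
    return rank

# Page C, which most pages link to, ends up with the highest rank.
print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))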
Modern web search engines are highly intricate software systems that employ technology that has evolved over the years. There are a number of sub-categories of search engine software that are separately applicable to specific 'browsing' needs. These include web search engines (e.g. Google), database or structured data search engines (e.g. Dieselpoint), and mixed search engines or enterprise search. The more prevalent search engines, such as Google and Yahoo!, utilize hundreds of thousands of computers to process trillions of web pages in order to return reasonably relevant results. Due to this high volume of queries and text processing, the software is required to run in a highly distributed environment with a high degree of redundancy.
Another category of search engines is scientific search engines. These are search engines which search scientific literature. The best known example is Google Scholar. Researchers are working on improving search engine technology by making them understand the content element of the articles, such as extracting theoretical constructs or key research findings.[75]
|
https://en.wikipedia.org/wiki/Search_engine
|
Inlinguistic morphologyand information retrieval,stemmingis the process of reducing inflected (or sometimes derived) words to theirword stem, base orrootform—generally a written word form. The stem need not be identical to themorphological rootof the word; it is usually sufficient that related words map to the same stem, even if this stem is not in itself a valid root.Algorithmsfor stemming have been studied incomputer sciencesince the 1960s. Manysearch enginestreat words with the same stem assynonymsas a kind ofquery expansion, a process called conflation.
A computer program or subroutine that stems words may be called a stemming program, stemming algorithm, or stemmer.
A stemmer for English operating on the stemcatshould identify suchstringsascats,catlike, andcatty. A stemming algorithm might also reduce the wordsfishing,fished, andfisherto the stemfish. The stem need not be a word, for example the Porter algorithm reducesargue,argued,argues,arguing, andargusto the stemargu.
The first published stemmer was written byJulie Beth Lovinsin 1968.[1]This paper was remarkable for its early date and had great influence on later work in this area.[citation needed]Her paper refers to three earlier major attempts at stemming algorithms, by ProfessorJohn W. TukeyofPrinceton University, the algorithm developed atHarvard UniversitybyMichael Lesk, under the direction of ProfessorGerard Salton, and a third algorithm developed by James L. Dolby of R and D Consultants, Los Altos, California.
A later stemmer was written byMartin Porterand was published in the July 1980 issue of the journalProgram. This stemmer was very widely used and became the de facto standard algorithm used for English stemming. Dr. Porter received theTony Kent Strix awardin 2000 for his work on stemming and information retrieval.
Many implementations of the Porter stemming algorithm were written and freely distributed; however, many of these implementations contained subtle flaws. As a result, these stemmers did not match their potential. To eliminate this source of error, Martin Porter released an officialfree software(mostlyBSD-licensed) implementation[2]of the algorithm around the year 2000. He extended this work over the next few years by buildingSnowball, a framework for writing stemming algorithms, and implemented an improved English stemmer together with stemmers for several other languages.
The Paice-Husk Stemmer was developed by Chris D. Paice at Lancaster University in the late 1980s; it is an iterative stemmer and features an externally stored set of stemming rules. The standard set of rules provides a 'strong' stemmer and may specify the removal or replacement of an ending. The replacement technique avoids the need for a separate stage in the process to recode or provide partial matching. Paice also developed a direct measurement for comparing stemmers based on counting the over-stemming and under-stemming errors.
There are several types of stemming algorithms which differ in respect to performance and accuracy and how certain stemming obstacles are overcome.
A simple stemmer looks up the inflected form in alookup table. The advantages of this approach are that it is simple, fast, and easily handles exceptions. The disadvantages are that all inflected forms must be explicitly listed in the table: new or unfamiliar words are not handled, even if they are perfectly regular (e.g. cats ~ cat), and the table may be large. For languages with simple morphology, like English, table sizes are modest, but highly inflected languages like Turkish may have hundreds of potential inflected forms for each root.
A lookup approach may use preliminarypart-of-speech taggingto avoid overstemming.[3]
The lookup table used by a stemmer is generally produced semi-automatically. For example, if the word is "run", then the inverted algorithm might automatically generate the forms "running", "runs", "runned", and "runly". The last two forms are valid constructions, but they are unlikely.[citation needed].
Suffix stripping algorithms do not rely on a lookup table that consists of inflected forms and root form relations. Instead, a typically smaller list of "rules" is stored which provides a path for the algorithm, given an input word form, to find its root form. Some examples of the rules include: if the word ends in 'ed', remove the 'ed'; if the word ends in 'ing', remove the 'ing'; if the word ends in 'ly', remove the 'ly'. A minimal sketch of such rules is shown below.
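Assuming nothing beyond the three example rules just listed, a toy suffix-stripping stemmer in Python could look like this; the rule ordering and the word list are illustrative only.

# Apply the first matching stripping rule; this toy stemmer knows nothing
# about exceptions, so irregular forms like 'ran' are left untouched.
RULES = [("ing", ""), ("ed", ""), ("ly", "")]

def strip_suffix(word):
    for suffix, replacement in RULES:
        if word.endswith(suffix):
            return word[: len(word) - len(suffix)] + replacement
    return word

for w in ["walked", "walking", "quickly", "ran"]:
    print(w, "->", strip_suffix(w))
# walked -> walk, walking -> walk, quickly -> quick, ran -> ran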
Suffix stripping approaches enjoy the benefit of being much simpler to maintain than brute force algorithms, assuming the maintainer is sufficiently knowledgeable in the challenges of linguistics and morphology and encoding suffix stripping rules. Suffix stripping algorithms are sometimes regarded as crude given the poor performance when dealing with exceptional relations (like 'ran' and 'run'). The solutions produced by suffix stripping algorithms are limited to thoselexical categorieswhich have well known suffixes with few exceptions. This, however, is a problem, as not all parts of speech have such a well formulated set of rules.Lemmatisationattempts to improve upon this challenge.
Prefix stripping may also be implemented. Of course, not all languages use prefixing or suffixing.
Suffix stripping algorithms may differ in results for a variety of reasons. One such reason is whether the algorithm constrains whether the output word must be a real word in the given language. Some approaches do not require the word to actually exist in the language lexicon (the set of all words in the language). Alternatively, some suffix stripping approaches maintain a database (a large list) of all known morphological word roots that exist as real words. These approaches check the list for the existence of the term prior to making a decision. Typically, if the term does not exist, alternate action is taken. This alternate action may involve several other criteria. The non-existence of an output term may serve to cause the algorithm to try alternate suffix stripping rules.
It can be the case that two or more suffix stripping rules apply to the same input term, which creates an ambiguity as to which rule to apply. The algorithm may assign (by human hand or stochastically) a priority to one rule or another. Or the algorithm may reject one rule application because it results in a non-existent term whereas the other overlapping rule does not. For example, given the English termfriendlies, the algorithm may identify theiessuffix and apply the appropriate rule and achieve the result offriendl.Friendlis likely not found in the lexicon, and therefore the rule is rejected.
One improvement upon basic suffix stripping is the use of suffix substitution. Similar to a stripping rule, a substitution rule replaces a suffix with an alternate suffix. For example, there could exist a rule that replacesieswithy. How this affects the algorithm varies on the algorithm's design. To illustrate, the algorithm may identify that both theiessuffix stripping rule as well as the suffix substitution rule apply. Since the stripping rule results in a non-existent term in the lexicon, but the substitution rule does not, the substitution rule is applied instead. In this example,friendliesbecomesfriendlyinstead offriendl'.
Diving further into the details, a common technique is to apply rules in a cyclical fashion (recursively, as computer scientists would say). After applying the suffix substitution rule in this example scenario, a second pass is made to identify matching rules on the termfriendly, where thelystripping rule is likely identified and accepted. In summary,friendliesbecomes (via substitution)friendlywhich becomes (via stripping)friend.
This example also helps illustrate the difference between a rule-based approach and a brute force approach. In a brute force approach, the algorithm would search forfriendliesin the set of hundreds of thousands of inflected word forms and ideally find the corresponding root formfriend. In the rule-based approach, the three rules mentioned above would be applied in succession to converge on the same solution. Chances are that the brute force approach would be slower, as lookup algorithms have a direct access to the solution, while rule-based should try several options, and combinations of them, and then choose which result seems to be the best.
A more complex approach to the problem of determining a stem of a word islemmatisation. This process involves first determining thepart of speechof a word, and applying different normalization rules for each part of speech. The part of speech is first detected prior to attempting to find the root since for some languages, the stemming rules change depending on a word's part of speech.
This approach is highly conditional upon obtaining the correct lexical category (part of speech). While there is overlap between the normalization rules for certain categories, identifying the wrong category or being unable to produce the right category limits the added benefit of this approach over suffix stripping algorithms. The basic idea is that, if the stemmer is able to grasp more information about the word being stemmed, then it can apply more accurate normalization rules (which unlike suffix stripping rules can also modify the stem).
Stochastic algorithms involve using probability to identify the root form of a word. Stochastic algorithms are trained (they "learn") on a table of root form to inflected form relations to develop a probabilistic model. This model is typically expressed in the form of complex linguistic rules, similar in nature to those in suffix stripping or lemmatisation. Stemming is performed by inputting an inflected form to the trained model and having the model produce the root form according to its internal ruleset. This is again similar to suffix stripping and lemmatisation, except that the decisions involved (which rule is most appropriate to apply, whether to stem the word at all or simply return it unchanged, or whether to apply two different rules in sequence) are made on the grounds that the output word will have the highest probability of being correct (which is to say, the smallest probability of being incorrect, which is how correctness is typically measured).
Some lemmatisation algorithms are stochastic in that, given a word which may belong to multiple parts of speech, a probability is assigned to each possible part. This may take into account the surrounding words, called the context, or not. Context-free grammars do not take into account any additional information. In either case, after assigning the probabilities to each possible part of speech, the most likely part of speech is chosen, and from there the appropriate normalization rules are applied to the input word to produce the normalized (root) form.
Some stemming techniques use then-gramcontext of a word to choose the correct stem for a word.[4]
Hybrid approaches use two or more of the approaches described above in unison. A simple example is a suffix tree algorithm which first consults a lookup table using brute force. However, instead of trying to store the entire set of relations between words in a given language, the lookup table is kept small and is only used to store a minute amount of "frequent exceptions" like "ran => run". If the word is not in the exception list, apply suffix stripping or lemmatisation and output the result.
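A small Python sketch of such a hybrid, assuming a made-up exception table and a handful of invented suffix rules rather than any published stemmer:

# Hypothetical table of "frequent exceptions" plus fallback suffix rules.
EXCEPTIONS = {"ran": "run", "feet": "foot", "geese": "goose"}
SUFFIX_RULES = [("ies", "y"), ("ing", ""), ("ed", ""), ("s", "")]

def hybrid_stem(word):
    """Consult the exception table first; fall back to suffix rules on a miss."""
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    for suffix, replacement in SUFFIX_RULES:
        if word.endswith(suffix):
            return word[: len(word) - len(suffix)] + replacement
    return word

for w in ["ran", "cats", "friendlies", "geese"]:
    print(w, "->", hybrid_stem(w))
# ran -> run, cats -> cat, friendlies -> friendly, geese -> goose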
Inlinguistics, the termaffixrefers to either aprefixor asuffix. In addition to dealing with suffixes, several approaches also attempt to remove common prefixes. For example, given the wordindefinitely, identify that the leading "in" is a prefix that can be removed. Many of the same approaches mentioned earlier apply, but go by the nameaffix stripping. A study of affix stemming for several European languages can be found here.[5]
Such algorithms use a stem database (for example a set of documents that contain stem words). These stems, as mentioned above, are not necessarily valid words themselves (but rather common sub-strings, as the "brows" in "browse" and in "browsing"). In order to stem a word the algorithm tries to match it with stems from the database, applying various constraints, such as on the relative length of the candidate stem within the word (so that, for example, the short prefix "be", which is the stem of such words as "be", "been" and "being", would not be considered as the stem of the word "beside").[citation needed].
While much of the early academic work in this area was focused on the English language (with significant use of the Porter Stemmer algorithm), many other languages have been investigated.[6][7][8][9][10]
Hebrew and Arabic are still considered difficult research languages for stemming. English stemmers are fairly trivial (with only occasional problems, such as "dries" being the third-person singular present form of the verb "dry", "axes" being the plural of "axe" as well as "axis"); but stemmers become harder to design as the morphology, orthography, and character encoding of the target language becomes more complex. For example, an Italian stemmer is more complex than an English one (because of a greater number of verb inflections), a Russian one is more complex (more noundeclensions), a Hebrew one is even more complex (due tononconcatenative morphology, a writing system without vowels, and the requirement of prefix stripping: Hebrew stems can be two, three or four characters, but not more), and so on.[11]
Multilingual stemming applies morphological rules of two or more languages simultaneously instead of rules for only a single language when interpreting a search query. Commercial systems using multilingual stemming exist.[citation needed]
There are two error measurements in stemming algorithms, overstemming and understemming. Overstemming is an error where two separate inflected words are stemmed to the same root, but should not have been—afalse positive. Understemming is an error where two separate inflected words should be stemmed to the same root, but are not—afalse negative. Stemming algorithms attempt to minimize each type of error, although reducing one type can lead to increasing the other.
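A much-simplified way to count the two error types, given a hand-labelled grouping of words that ought to share a stem; the gold groups and the deliberately crude stemmer below are invented, and Paice's actual over- and under-stemming indices are more elaborate than this pair count.

from itertools import combinations

def count_errors(groups, stem):
    """Count understemmed and overstemmed word pairs against gold groups."""
    under = sum(1 for g in groups
                for a, b in combinations(g, 2) if stem(a) != stem(b))
    over = sum(1 for g1, g2 in combinations(groups, 2)
               for a in g1 for b in g2 if stem(a) == stem(b))
    return under, over

def crude_stem(word):
    for suffix in ("ning", "ity", "al", "e"):    # deliberately crude rules
        if word.endswith(suffix):
            return word[: -len(suffix)]
    return word

# Gold groups: words within a group should share a stem; words across groups should not.
groups = [["run", "running", "ran"], ["universe", "universal"], ["university"]]
print(count_errors(groups, crude_stem))   # (2, 2): 2 understemmed pairs, 2 overstemmed pairs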
For example, the widely used Porter stemmer stems "universal", "university", and "universe" to "univers". This is a case of overstemming: though these three words areetymologicallyrelated, their modern meanings are in widely different domains, so treating them as synonyms in a search engine will likely reduce the relevance of the search results.
An example of understemming in the Porter stemmer is "alumnus" → "alumnu", "alumni" → "alumni", "alumna"/"alumnae" → "alumna". This English word keeps Latin morphology, and so these near-synonyms are not conflated.
Stemming is used as an approximate method for grouping words with a similar basic meaning together. For example, a text mentioning "daffodils" is probably closely related to a text mentioning "daffodil" (without the s). But in some cases, words with the same morphological stem haveidiomaticmeanings which are not closely related: a user searching for "marketing" will not be satisfied by most documents mentioning "markets" but not "marketing".
Stemmers can be used as elements in query systems such as Web search engines. The effectiveness of stemming for English query systems was soon found to be rather limited, however, and this led early information retrieval researchers to deem stemming irrelevant in general.[12] An alternative approach, based on searching for n-grams rather than stems, may be used instead. Also, stemmers may provide greater benefits in languages other than English.[13][14]
Stemming is used to determine domain vocabularies indomain analysis.[15]
Many commercial companies have been using stemming since at least the 1980s and have produced algorithmic and lexical stemmers in many languages.[16][17]
TheSnowballstemmers have been compared with commercial lexical stemmers with varying results.[18][19]
Google Searchadopted word stemming in 2003.[20]Previously a search for "fish" would not have returned "fishing". Other software search algorithms vary in their use of word stemming. Programs that simply search for substrings will obviously find "fish" in "fishing" but when searching for "fishes" will not find occurrences of the word "fish".
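The difference between plain substring matching and stem-based matching can be seen in a short Python sketch; the three example documents and the toy stemmer are invented for illustration and have nothing to do with Google's actual implementation.

docs = ["I went fishing yesterday", "He caught two fishes", "The fish were small"]

def stem(word):                        # toy stemmer for this illustration only
    for suffix in ("es", "ing", "s"):
        if word.endswith(suffix):
            return word[: -len(suffix)]
    return word

def substring_hits(term):
    return [d for d in docs if term in d.lower()]

def stemmed_hits(term):
    target = stem(term.lower())
    return [d for d in docs if target in {stem(w) for w in d.lower().split()}]

print(substring_hits("fish"))     # finds "fishing", "fishes" and "fish"
print(substring_hits("fishes"))   # misses the documents that only say "fish"
print(stemmed_hits("fishes"))     # stemming conflates fish, fishes and fishing again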
Stemming is used as a task in pre-processing texts before performing text mining analyses on it.
|
https://en.wikipedia.org/wiki/Stemming
|
Thedrinker paradox(also known as thedrinker's theorem, thedrinker's principle, or thedrinking principle) is atheoremofclassicalpredicate logicthat can be stated as "There is someone in the pub such that, if he or she is drinking, then everyone in the pub is drinking." It was popularised by themathematical logicianRaymond Smullyan, who called it the "drinking principle" in his 1978 bookWhat Is the Name of this Book?[1]
The apparently paradoxical nature of the statement comes from the way it is usually stated in natural language. It seems counterintuitive that there could be a person who is causing the others to drink, or that there could be a person such that all through the night that one person is always the last to drink. The first objection comes from confusing formal "if then" statements with causation (see Correlation does not imply causation or Relevance logic for logics that demand relevant relationships between premise and consequent, unlike classical logic assumed here). The formal statement of the theorem is timeless, eliminating the second objection because the person the statement holds true for at one instant is not necessarily the same person it holds true for at any other instant.[citation needed]
The formal statement of the theorem is ∃x∈P.[D(x)→∀y∈P.D(y)]{\displaystyle \exists x\in P.\ [D(x)\rightarrow \forall y\in P.\ D(y)]}
where D is an arbitrarypredicateand P is an arbitrary nonempty set.
The proof begins by recognizing it is true that either everyone in the pub is drinking, or at least one person in the pub is not drinking. Consequently, there are two cases to consider:[1][2] if everyone is drinking, then any particular person can be picked, since "if that person is drinking, then everyone is drinking" holds trivially; if instead at least one person is not drinking, then for that non-drinker the statement "if that person is drinking, then everyone is drinking" is vacuously true, because its antecedent is false.
A slightly more formal way of expressing the above is to say that, if everybody drinks, then anyone can be thewitnessfor the validity of the theorem. And if someone does not drink, then that particular non-drinking individual can be the witness to the theorem's validity.[3]
The paradox is ultimately based on the principle of formal logic that the statementA→B{\displaystyle A\rightarrow B}is true wheneverA{\displaystyle A}is false, i.e., any statement follows from a false statement[1](ex falso quodlibet).
What is important to the paradox is that the conditional in classical (and intuitionistic) logic is thematerial conditional. It has the property thatA→B{\displaystyle A\rightarrow B}is true wheneverB{\displaystyle B}is true orA{\displaystyle A}is false. In classical logic (butnotintuitionistic logic), this is also a necessary condition: ifA→B{\displaystyle A\rightarrow B}is true, thenB{\displaystyle B}is true orA{\displaystyle A}is false.
So as it was applied here, the statement "if they are drinking, everyone is drinking" was taken to be correct in one case, if everyone was drinking, and in the other case, if they were not drinking—even though their drinking may not have had anything to do with anyone else's drinking.
Smullyan in his 1978 book attributes the naming of "The Drinking Principle" to his graduate students.[1]He also discusses variants (obtained by replacing D with other, more dramatic predicates):
As "Smullyan's ‘Drinkers’ principle" or just "Drinkers' principle" it appears inH.P. Barendregt's "The quest for correctness" (1996), accompanied by some machine proofs.[2]Since then it has made regular appearance as an example in publications aboutautomated reasoning; it is sometimes used to contrast the expressiveness ofproof assistants.[4]
In the setting with empty domains allowed, the drinker paradox must be formulated as follows:[5]
A set P satisfies ∃x∈P.[D(x)→∀y∈P.D(y)]{\displaystyle \exists x\in P.\ [D(x)\rightarrow \forall y\in P.\ D(y)]}
if and only if it is non-empty.
Or in words: there is someone in the pub such that, if they are drinking, then everyone in the pub is drinking, if and only if there is anyone in the pub at all.
|
https://en.wikipedia.org/wiki/Drinker_paradox
|
Informal logic,nonfirstorderizabilityis the inability of a natural-language statement to be adequately captured by a formula offirst-order logic. Specifically, a statement isnonfirstorderizableif there is no formula of first-order logic which is true in amodelif and only if the statement holds in that model. Nonfirstorderizable statements are sometimes presented as evidence that first-order logic is not adequate to capture the nuances of meaning in natural language.
The term was coined by George Boolos in his paper "To Be is to Be a Value of a Variable (or to Be Some Values of Some Variables)".[1] Boolos argued that such sentences call for second-order symbolization, which can be interpreted as plural quantification over the same domain as first-order quantifiers use, without postulation of distinct "second-order objects" (properties, sets, etc.).
A standard example is theGeach–Kaplansentence: "Some critics admire only one another."
IfAxyis understood to mean "xadmiresy," and theuniverse of discourseis the set of all critics, then a reasonabletranslation of the sentenceinto second order logic is:∃X((∃x¬Xx)∧∃x,y(Xx∧Xy∧Axy)∧∀x∀y(Xx∧Axy→Xy)){\displaystyle \exists X{\big (}(\exists x\neg Xx)\land \exists x,y(Xx\land Xy\land Axy)\land \forall x\,\forall y(Xx\land Axy\rightarrow Xy){\big )}}In words, this states that there exists a collection of critics with the following properties: The collection forms a proper subclass of all the critics; it is inhabited (and thus non-empty) by a member that admires a critic that is also a member; and it is such that if any of its members admires anyone, then the latter is necessarily also a member.
That this formula has no first-order equivalent can be seen by turning it into a formula in the language of arithmetic. To this end, substitute the formula(y=x+1∨x=y+1){\textstyle (y=x+1\lor x=y+1)}forAxy. This expresses that the two terms are successors of one another, in some way. The resulting proposition,∃X((∃x¬Xx)∧∃x,y(Xx∧Xy∧(y=x+1∨x=y+1))∧∀x∀y(Xx∧(y=x+1∨x=y+1)→Xy)){\displaystyle \exists X{\big (}(\exists x\neg Xx)\land \exists x,y(Xx\land Xy\land (y=x+1\lor x=y+1))\land \forall x\,\forall y(Xx\land (y=x+1\lor x=y+1)\rightarrow Xy){\big )}}states that there is a setXwith the following three properties: at least one natural number is not in X; X contains at least two numbers that are successors of one another; and X is closed under adjacency, that is, if a number x is in X and a number y is a successor of x or has x as its successor, then y is also in X.
Recall a model of a formal theory of arithmetic, such asfirst-order Peano arithmetic, is calledstandardif itonlycontains the familiar natural numbers as elements (i.e.,0, 1, 2, ...). The model is callednon-standardotherwise. The formula above is true only in non-standard models: In the standard modelXwould be a proper subset of all numbers that also would have to contain all available numbers (0, 1, 2, ...), and so it fails. And then on the other hand, in every non-standard model there is a subsetXsatisfying the formula.
Let us now assume that there is a first-order rendering of the above formula calledE. If¬E{\displaystyle \neg E}were added to the Peano axioms, it would mean that there were no non-standard models of the augmented axioms. However, the usual argument for theexistence of non-standard modelswould still go through, proving that there are non-standard models after all. This is a contradiction, so we can conclude that no such formulaEexists in first-order logic.
There is no formulaAinfirst-order logic with equalitywhich is true of all and only models with finite domains. In other words, there is no first-order formula which can express "there is only a finite number of things".
This is implied by thecompactness theoremas follows.[2]Suppose there is a formulaAwhich is true in all and only models with finite domains. We can express, for any positive integern, the sentence "there are at leastnelements in the domain". For a givenn, call the formula expressing that there are at leastnelementsBn. For example, the formulaB3is:∃x∃y∃z(x≠y∧x≠z∧y≠z){\displaystyle \exists x\exists y\exists z(x\neq y\wedge x\neq z\wedge y\neq z)}which expresses that there are at least three distinct elements in the domain. Consider the infinite set of formulaeA,B2,B3,B4,…{\displaystyle A,B_{2},B_{3},B_{4},\ldots }Every finite subset of these formulae has a model: given a subset, find the greatestnfor which the formulaBnis in the subset. Then a model with a domain containingnelements will satisfyA(because the domain is finite) and all theBformulae in the subset. Applying the compactness theorem, the entire infinite set must also have a model. Because of what we assumed aboutA, the model must be finite. However, this model cannot be finite, because if the model has onlymelements, it does not satisfy the formulaBm+1. This contradiction shows that there can be no formulaAwith the property we assumed.
|
https://en.wikipedia.org/wiki/Nonfirstorderizability
|
Reification(also known asconcretism,hypostatization, orthe fallacy of misplaced concreteness) is afallacyofambiguity, when anabstraction(abstractbeliefor hypotheticalconstruct) is treated as if it were a concrete real event or physical entity.[1][2]In other words, it is the error of treating something that is not concrete, such as an idea, as a concrete thing. A common case of reification is the confusion of a model with reality: "the map is not the territory".
Reification is part of normal usage ofnatural language, as well as ofliterature, where a reified abstraction is intended as afigure of speech, and actually understood as such. But the use of reification in logicalreasoningorrhetoricis misleading and usually regarded as a fallacy.[3]
A potential consequence of reification is exemplified byGoodhart's law, where changes in the measurement of a phenomenon are mistaken for changes to the phenomenon itself.
The term "reification" originates from the combination of theLatintermsres("thing") and -fication, a suffix related tofacere("to make").[4]Thusreificationcan be loosely translated as "thing-making"; the turning of something abstract into a concrete thing or object.
Reification takes place when natural or social processes are misunderstood or simplified; for example, when human creations are described as "facts of nature, results of cosmic laws, or manifestations of divine will".[5]
Reification may derive from an innate tendency to simplify experience by assuming constancy as much as possible.[6]
According toAlfred North Whitehead, one commits thefallacy of misplaced concretenesswhen one mistakes an abstractbelief,opinion, orconceptabout the way things are for a physical or "concrete" reality: "There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what might be called the 'Fallacy of Misplaced Concreteness.'"[7]Whitehead proposed the fallacy in a discussion of the relation of spatial and temporal location of objects. He rejects the notion that a concrete physical object in theuniversecan be ascribed a simple spatial or temporalextension, that is, without reference to its relations to other spatial or temporal extensions.
[...] apart from any essential reference of the relations of [a] bit of matter to other regions of space [...] there is no element whatever which possesses this character of simple location. [... Instead,] I hold that by a process of constructiveabstractionwe can arrive at abstractions which are the simply located bits of material, and at other abstractions which are the minds included in the scientific scheme. Accordingly, the real error is an example of what I have termed: The Fallacy of Misplaced Concreteness.[8]
William Jamesused the notion of "vicious abstractionism" and "vicious intellectualism" in various places, especially to criticizeImmanuel Kant's andGeorg Wilhelm Friedrich Hegel's idealistic philosophies. InThe Meaning of Truth, James wrote:
Let me give the name of "vicious abstractionism" to a way of using concepts which may be thus described: We conceive a concrete situation by singling out some salient or important feature in it, and classing it under that; then, instead of adding to its previous characters all the positive consequences which the new way of conceiving it may bring, we proceed to use our concept privatively; reducing the originally rich phenomenon to the naked suggestions of that name abstractly taken, treating it as a case of "nothing but" that concept, and acting as if all the other characters from out of which the concept is abstracted were expunged. Abstraction, functioning in this way, becomes a means of arrest far more than a means of advance in thought. ...The viciously privative employment of abstract characters and class namesis, I am persuaded, one of the great original sins of the rationalistic mind.[9]
In a chapter on "The Methods and Snares of Psychology" inThe Principles of Psychology, James describes a related fallacy,thepsychologist's fallacy,thus: "Thegreatsnare of the psychologist is theconfusion of his own standpoint with that of the mental factabout which he is making his report. I shall hereafter call this the "’psychologist's fallacy’par excellence" (volume 1, p. 196).John Deweyfollowed James in describing a variety of fallacies, including "the philosophic fallacy", "the analytic fallacy", and "the fallacy of definition".[10]
The concept of a "construct" has a long history in science; it is used in many, if not most, areas of science. A construct is a hypothetical explanatory variable that is not directly observable. For example, the concepts ofmotivationin psychology,utilityin economics, andgravitational fieldin physics are constructs; they are not directly observable, but instead are tools to describe natural phenomena.
The degree to which a construct is useful and accepted as part of the currentparadigmin a scientific community depends on empirical research that has demonstrated that a scientific construct hasconstruct validity(especially,predictive validity).[11]
Stephen Jay Goulddraws heavily on the idea of fallacy of reification in his bookThe Mismeasure of Man. He argues that the error in usingintelligence quotientscores to judge people's intelligence is that, just because a quantity called "intelligence" or "intelligence quotient" is defined as a measurable thing does not mean that intelligence is real; thus denying the validity of the construct "intelligence."[12]
Pathetic fallacy(also known as anthropomorphic fallacy oranthropomorphization) is a specific type[dubious–discuss]of reification. Just as reification is the attribution of concrete characteristics to an abstract idea, a pathetic fallacy is committed when those characteristics are specifically human characteristics, especially thoughts or feelings.[13]Pathetic fallacy is also related topersonification, which is a direct and explicit ascription of life and sentience to the thing in question, whereas the pathetic fallacy is much broader and more allusive.
Theanimistic fallacyinvolves attributing personal intention to an event or situation.
Reification fallacy should not be confused with other fallacies of ambiguity:
Therhetoricaldevices ofmetaphorandpersonificationexpress a form of reification, but short of a fallacy. These devices, by definition, do not apply literally and thus exclude any fallacious conclusion that the formal reification is real. For example, the metaphor known as thepathetic fallacy, "the sea was angry" reifies anger, but does not imply that anger is a concrete substance, or that water is sentient. The distinction is that a fallacy inhabits faulty reasoning, and not the mere illustration or poetry of rhetoric.[2]
Reification, while usually fallacious, is sometimes considered a valid argument.Thomas Schelling, a game theorist during the Cold War, argued that for many purposes an abstraction shared between disparate people caused itself to become real. Some examples include the effect of round numbers in stock prices, the importance placed on the Dow Jones Industrial index, national borders,preferred numbers, and many others.[14](Compare the theory ofsocial constructionism.)
|
https://en.wikipedia.org/wiki/Reification_(fallacy)
|
Reificationinknowledge representationis the process of turning apredicate[1]or statement[2]into an addressable object. Reification allows the representation of assertions so that they can be referred to or qualified byotherassertions, i.e., meta-knowledge.[3]
The message "John is six feet tall" is an assertion involving truth that commits the speaker to its factuality, whereas the reified statement "Mary reports that John is six feet tall" defers such commitment to Mary. In this way, the statements can be incompatible without creating contradictions inreasoning. For example, the statements "John is six feet tall" and "John is five feet tall" are mutually exclusive (and thus incompatible), but the statements "Mary reports that John is six feet tall" and "Paul reports that John is five feet tall" are not incompatible, as they are both governed by a conclusive rationale that either Mary or Paul is (or both are), in fact, incorrect.
Inlinguistics, reporting, telling, and saying are recognized asverbal processes that project a wording (or locution). If a person says that "Paul told x" and "Mary told y", this person stated only that the telling took place. In this case, the person who made these two statements did not represent a person inconsistently. In addition, if two people are talking to each other, let's say Paul and Mary, and Paul tells Mary "John is five feet tall" and Mary rejects Paul's statement by saying "No, he is actually six feet tall", the socially constructed model of John does not become inconsistent. The reason for that is that statements are to be understood as an attempt to convince the addressee of something (Austin's How to do things with words), alternatively as a request to add some attribute to the model of Paul. The response to a statement can be an acknowledgement, in which case the model is changed, or it can be a statement rejection, in which case the model does not get changed. Finally, the example above for which John is said to be "five feet tall" or "six feet tall" is only incompatible because John can only be a single number of feet tall. If the attribute were a possession as in "he has a dog" or "he also has a cat", a model inconsistency would not happen. In other words, the issue of model inconsistency has to do with our model of the domain element (John) and not with the ascription of different range elements (measurements such as "five feet tall" or "six feet tall").
|
https://en.wikipedia.org/wiki/Reification_(knowledge_representation)
|
Computational audiologyis a branch ofaudiologythat employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. Computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment.[1]
In contrast to traditional methods in audiology and hearing science research, computational audiology emphasizes predictive modeling and large-scale analytics ("big data") rather than inferential statistics and small-cohort hypothesis testing. The aim of computational audiology is to translate advances in hearing science, data science, information technology, and machine learning to clinical audiological care. Research to understand hearing function and auditory processing in humans as well as relevant animal species represents translatable work that supports this aim. Research and development to implement more effective diagnostics and treatments representtranslationalwork that supports this aim.[2]
For people with hearing difficulties,tinnitus,hyperacusis, or balance problems, these advances might lead to more precise diagnoses, novel therapies, and advanced rehabilitation options including smart prostheses ande-Health/mHealthapps. For care providers, it can provide actionable knowledge and tools for automating part of the clinical pathway.[3]
The field is interdisciplinary and includes foundations inaudiology,auditory neuroscience,computer science,data science,machine learning,psychology,signal processing,natural language processing, otology andvestibulology.
In computational audiology,modelsandalgorithmsare used to understand the principles that govern the auditory system, to screen for hearing loss, to diagnose hearing disorders, to provide rehabilitation, and to generate simulations for patient education, among others.
For decades, phenomenological & biophysical (computational) models have been developed to simulate characteristics of the humanauditory system. Examples include models of the mechanical properties of thebasilar membrane,[4]the electrically stimulatedcochlea,[5][6]middle ear mechanics,[7]bone conduction,[8]and the central auditory pathway.[9]Saremi et al. (2016) compared 7 contemporary models including parallel filterbanks, cascaded filterbanks, transmission lines and biophysical models.[10]More recently,convolutional neural networks(CNNs) have been constructed and trained that can replicate human auditory function[11]or complex cochlear mechanics with high accuracy.[12]Although inspired by the interconnectivity of biological neural networks, the architecture of CNNs is distinct from the organization of the natural auditory system.
Online pure-tone threshold audiometry (or screening) tests, electrophysiological measures, for example distortion-product otoacoustic emissions (DPOAEs), and speech-in-noise screening tests are becoming increasingly available as tools to promote awareness and enable accurate early identification of hearing loss across ages, monitor the effects of ototoxicity and/or noise, guide ear and hearing care decisions, and provide support to clinicians.[13][14] Smartphone-based tests have been proposed to detect middle ear fluid using acoustic reflectometry and machine learning.[15] Smartphone attachments have also been designed to perform tympanometry for acoustic evaluation of the eardrum and middle ear.[16][17] Low-cost earphones attached to smartphones have also been prototyped to help detect the faint otoacoustic emissions from the cochlea and perform neonatal hearing screening.[18][19]
Collectinglarge numbersof audiograms (e.g. from databases from theNational Institute for Occupational Safety and Healthor NIOSH[20]orNational Health and Nutrition Examination Surveyor NHANES) provides researchers with opportunities to find patterns of hearing status in the population[21][22]or to trainAIsystems that can classify audiograms.[23]Machine learningcan be used to predict the relationship between multiple factors e.g. predict depression based on self-reported hearing loss[24]or the relationship between genetic profile and self-reported hearing loss.[25]Hearing aids and wearables provide the option to monitor the soundscape of the user or log the usage patterns which can be used to automatically recommend settings that are expected to benefit the user.[26]
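As a sketch of the audiogram-classification idea, the following Python example trains a classifier on synthetic audiograms; the thresholds are randomly generated stand-ins (a real study would draw on datasets such as NIOSH or NHANES), the two classes are invented, and scikit-learn is assumed to be available.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic audiograms: hearing thresholds (dB HL) at six frequencies (0.5-8 kHz).
normal = rng.normal(10, 5, size=(200, 6))                            # near-normal hearing
sloping = rng.normal(15, 5, size=(200, 6)) + np.linspace(0, 45, 6)   # high-frequency loss
X = np.vstack([normal, sloping])
y = np.array([0] * 200 + [1] * 200)      # 0 = normal, 1 = sloping loss

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))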
Methods to improve rehabilitation by auditory implants include improving music perception,[27]models of the electrode-neuron interface,[28]and an AI basedCochlear Implantfitting assistant.[29]
Online surveys processed with ML-based classification have been used to diagnose somatosensory tinnitus.[30] Automated Natural Language Processing (NLP) techniques, including unsupervised and supervised machine learning, have been used to analyze social posts about tinnitus and analyze the heterogeneity of symptoms.[31][32]
Machine learning has been applied to audiometry to create flexible, efficient estimation tools that do not require excessive testing time to determine an individual's auditory profile.[33][34] Similarly, machine learning based versions of other auditory tests, including determining dead regions in the cochlea or equal loudness contours,[35] have been created.
Examples of e-Research tools include the Remote Testing Wiki,[36] the Portable Automated Rapid Testing (PART), Ecological Momentary Assessment (EMA) and the NIOSH sound level meter. A number of tools can be found online.[37]
Software and large datasets are important for the development and adoption of computational audiology. As with many scientific computing fields, much of computational audiology depends critically on open-source software and its continual maintenance, development, and advancement.[38]
Computational biology,computational medicine, andcomputational pathologyare all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such asmathematicsandinformation science.
|
https://en.wikipedia.org/wiki/Computational_audiology
|
Neurocomputational speech processing is the computer simulation of speech production and speech perception with reference to the natural neuronal processes of speech production and speech perception, as they occur in the human nervous system (central nervous system and peripheral nervous system). This topic is based on neuroscience and computational neuroscience.[1]
Neurocomputational models of speech processing are complex. They comprise at least acognitive part, amotor partand asensory part.[2]
The cognitive or linguistic part of a neurocomputational model of speech processing comprises the neural activation or generation of a phonemic representation on the side of speech production (e.g. the neurocomputational and extended version of the Levelt model developed by Ardi Roelofs:[3] WEAVER++[4]), as well as the neural activation or generation of an intention or meaning on the side of speech perception or speech comprehension.
Themotor partof a neurocomputational model of speech processing starts with aphonemic representationof a speech item, activates a motor plan and ends with thearticulationof that particular speech item (see also:articulatory phonetics).
Thesensory partof a neurocomputational model of speech processing starts with an acoustic signal of a speech item (acoustic speech signal), generates anauditory representationfor that signal and activates aphonemic representationsfor that speech item.
Neurocomputational speech processing is speech processing byartificial neural networks. Neural maps, mappings and pathways as described below, are model structures, i.e. important structures within artificial neural networks.
An artificial neural network can be separated into three types of neural maps, also called "layers":
The term "neural map" is favoured here over the term "neural layer", because a cortical neural map should be modeled as a 2D-map of interconnected neurons (e.g. like aself-organizing map; see also Fig. 1). Thus, each "model neuron" or "artificial neuron" within this 2D-map is physiologically represented by acortical columnsince thecerebral cortexanatomically exhibits a layered structure.
A neural representation within anartificial neural networkis a temporarily activated (neural) state within a specific neural map. Each neural state is represented by a specific neural activation pattern. This activation pattern changes during speech processing (e.g. from syllable to syllable).
In the ACT model (see below), it is assumed that an auditory state can be represented by a "neuralspectrogram" (see Fig. 2) within an auditory state map. This auditory state map is assumed to be located in the auditory association cortex (seecerebral cortex).
A somatosensory state can be divided in atactileandproprioceptive stateand can be represented by a specific neural activation pattern within the somatosensory state map. This state map is assumed to be located in the somatosensory association cortex (seecerebral cortex,somatosensory system,somatosensory cortex).
A motor plan state can be assumed for representing a motor plan, i.e. the planning of speech articulation for a specific syllable or for a longer speech item (e.g. word, short phrase). This state map is assumed to be located in thepremotor cortex, while the instantaneous (or lower level) activation of each speech articulator occurs within theprimary motor cortex(seemotor cortex).
The neural representations occurring in the sensory and motor maps (as introduced above) are distributed representations (Hinton et al. 1968[5]): Each neuron within the sensory or motor map is more or less activated, leading to a specific activation pattern.
The neural representation for speech units occurring in the speech sound map (see below: DIVA model) is a punctual or local representation. Each speech item or speech unit is represented here by a specificneuron(model cell, see below).
A neural mapping connects two cortical neural maps. Neural mappings (in contrast to neural pathways) store training information by adjusting their neural link weights (seeartificial neuron,artificial neural networks). Neural mappings are capable of generating or activating a distributed representation (see above) of a sensory or motor state within a sensory or motor map from a punctual or local activation within the other map (see for example the synaptic projection from speech sound map to motor map, to auditory target region map, or to somatosensory target region map in the DIVA model, explained below; or see for example the neural mapping from phonetic map to auditory state map and motor plan state map in the ACT model, explained below and Fig. 3).
Neural mappings between two neural maps are compact or dense: each neuron of one neural map is interconnected with (nearly) every neuron of the other neural map (many-to-many connection, see artificial neural networks). Because of this density criterion, neural maps which are interconnected by a neural mapping are not far apart from each other.
In contrast to neural mappings, neural pathways can connect neural maps which are far apart (e.g. in different cortical lobes, see cerebral cortex). From the functional or modeling viewpoint, neural pathways mainly forward information without processing it. In comparison to a neural mapping, a neural pathway needs far fewer neural connections. A neural pathway can be modelled by a one-to-one connection of the neurons of both neural maps (see topographic mapping and somatotopic arrangement).
Example: In the case of two neural maps, each comprising 1,000 model neurons, a neural mapping needs up to 1,000,000 neural connections (many-to-many-connection), while only 1,000 connections are needed in the case of a neural pathway connection.
Furthermore, the link weights of the connections within a neural mapping are adjusted during training, while the neural connections of a neural pathway need not be trained (each connection is maximally excitatory).
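The difference can be made concrete with a small NumPy sketch (illustrative only): a trainable mapping is a full weight matrix, while a pathway can be modelled as a fixed one-to-one (identity) connection.

import numpy as np

n = 1000                                # model neurons per map
mapping_weights = np.zeros((n, n))      # many-to-many mapping: n * n adjustable link weights
pathway_weights = np.eye(n)             # one-to-one pathway: n fixed, maximally excitatory links

print(mapping_weights.size)             # 1000000 trainable connections
print(int(pathway_weights.sum()))       # 1000 fixed connections

# A punctual (local) activation can spread into a distributed pattern via the mapping,
# whereas the pathway simply copies the activation pattern to the other map.
local_activation = np.zeros(n)
local_activation[42] = 1.0
copied = pathway_weights @ local_activation
print(np.array_equal(copied, local_activation))   # True: the pattern is forwarded unchanged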
The leading approach in neurocomputational modeling of speech production is the DIVA model developed by Frank H. Guenther and his group at Boston University.[6][7][8][9] The model accounts for a wide range of phonetic and neuroimaging data but - like any neurocomputational model - remains speculative to some extent.
The organization or structure of the DIVA model is shown in Fig. 4.
The speech sound map - assumed to be located in the inferior and posterior portion ofBroca's area(left frontal operculum) - represents (phonologically specified) language-specific speech units (sounds, syllables, words, short phrases). Each speech unit (mainly syllables; e.g. the syllable and word "palm" /pam/, the syllables /pa/, /ta/, /ka/, ...) is represented by a specific model cell within the speech sound map (i.e. punctual neural representations, see above). Each model cell (seeartificial neuron) corresponds to a small population of neurons which are located at close range and which fire together.
Each neuron (model cell,artificial neuron) within the speech sound map can be activated and subsequently activates a forward motor command towards the motor map, called articulatory velocity and position map. The activated neural representation on the level of that motor map determines the articulation of a speech unit, i.e. controls all articulators (lips, tongue, velum, glottis) during the time interval for producing that speech unit. Forward control also involves subcortical structures like thecerebellum, not modelled in detail here.
A speech unit represents a set of speech items which can be assigned to the same phonemic category. Thus, each speech unit is represented by one specific neuron within the speech sound map, while the realization of a speech unit may exhibit some articulatory and acoustic variability. This phonetic variability is the motivation for defining sensory target regions in the DIVA model (see Guenther et al. 1998).[10]
The activation pattern within the motor map determines the movement pattern of all model articulators (lips, tongue, velum, glottis) for a speech item. In order not to overload the model, no detailed modeling of theneuromuscular systemis done. TheMaeda articulatory speech synthesizeris used in order to generate articulator movements, which allows the generation of a time-varyingvocal tract formand the generation of theacoustic speech signalfor each particular speech item.
In terms ofartificial intelligencethe articulatory model can be called plant (i.e. the system, which is controlled by the brain); it represents a part of theembodimentof the neuronal speech processing system. The articulatory model generatessensory outputwhich is the basis for generating feedback information for the DIVA model (see below: feedback control).
On the one hand the articulatory model generatessensory information, i.e. an auditory state for each speech unit which is neurally represented within the auditory state map (distributed representation), and a somatosensory state for each speech unit which is neurally represented within the somatosensory state map (distributed representation as well). The auditory state map is assumed to be located in thesuperior temporal cortexwhile the somatosensory state map is assumed to be located in theinferior parietal cortex.
On the other hand, the speech sound map, if activated for a specific speech unit (single neuron activation; punctual activation), activates sensory information by synaptic projections between speech sound map and auditory target region map and between speech sound map and somatosensory target region map. Auditory and somatosensory target regions are assumed to be located inhigher-order auditory cortical regionsand inhigher-order somatosensory cortical regionsrespectively. These target region sensory activation patterns - which exist for each speech unit - are learned duringspeech acquisition(by imitation training; see below: learning).
Consequently, two types of sensory information are available if a speech unit is activated at the level of the speech sound map: (i) learned sensory target regions (i.e. the intended sensory state for a speech unit) and (ii) sensory state activation patterns resulting from a possibly imperfect execution (articulation) of a specific speech unit (i.e. the current sensory state, reflecting the current production and articulation of that particular speech unit). Both types of sensory information are projected to sensory error maps, i.e. to an auditory error map which is assumed to be located in the superior temporal cortex (like the auditory state map) and to a somatosensory error map which is assumed to be located in the inferior parietal cortex (like the somatosensory state map) (see Fig. 4).
If the current sensory state deviates from the intended sensory state, both error maps generate feedback commands which are projected towards the motor map and which are capable of correcting the motor activation pattern and subsequently the articulation of a speech unit under production. Thus, in total, the activation pattern of the motor map is influenced not only by a specific feedforward command learned for a speech unit (and generated by the synaptic projection from the speech sound map) but also by a feedback command generated at the level of the sensory error maps (see Fig. 4).
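In control terms, the resulting motor map activity can be sketched as the sum of the learned feedforward command and a feedback correction derived from the sensory error; the vector dimensions, the scalar gain, and the collapsing of the error-to-motor projection into an identity are illustrative simplifications, not DIVA's actual parameters.

import numpy as np

def motor_command(feedforward, sensory_target, sensory_current, feedback_gain=0.5):
    # The sensory error corresponds to the activity of the error maps; the projection
    # from sensory error to motor correction (tuned during babbling) is collapsed
    # to an identity here for simplicity.
    sensory_error = sensory_target - sensory_current
    feedback = feedback_gain * sensory_error
    return feedforward + feedback

u = motor_command(feedforward=np.array([0.2, 0.5, 0.1]),
                  sensory_target=np.array([1.0, 0.0, 0.5]),
                  sensory_current=np.array([0.8, 0.1, 0.4]))
print(u)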
While the structure of a neuroscientific model of speech processing (given in Fig. 4 for the DIVA model) is mainly determined by evolutionary processes, the (language-specific) knowledge as well as the (language-specific) speaking skills are learned and trained during speech acquisition. In the case of the DIVA model it is assumed that the newborn does not yet have a structured (language-specific) speech sound map; i.e. no neuron within the speech sound map is related to any speech unit. Rather, the organization of the speech sound map as well as the tuning of the projections to the motor map and to the sensory target region maps are learned or trained during speech acquisition. Two important phases of early speech acquisition are modeled in the DIVA approach: learning by babbling and by imitation.
During babbling the synaptic projections between sensory error maps and motor map are tuned. This training is done by generating a number of semi-random feedforward commands, i.e. the DIVA model "babbles". Each of these babbling commands leads to the production of an "articulatory item", also labeled a "pre-linguistic (i.e. non-language-specific) speech item" (i.e. the articulatory model generates an articulatory movement pattern on the basis of the babbling motor command). Subsequently, an acoustic signal is generated.
On the basis of the articulatory and acoustic signal, a specific auditory and somatosensory state pattern is activated at the level of the sensory state maps (see Fig. 4) for each (pre-linguistic) speech item. At this point the DIVA model has available the sensory and associated motor activation pattern for different speech items, which enables the model to tune the synaptic projections between sensory error maps and motor map. Thus, during babbling the DIVA model learns feedback commands (i.e. how to produce a proper (feedback) motor command for a specific sensory input).
During imitation the DIVA model organizes its speech sound map and tunes the synaptic projections between speech sound map and motor map - i.e. tuning of forward motor commands - as well as the synaptic projections between speech sound map and sensory target regions (see Fig. 4). Imitation training is done by exposing the model to a number of acoustic speech signals representing realizations of language-specific speech units (e.g. isolated speech sounds, syllables, words, short phrases).
The tuning of the synaptic projections between speech sound map and auditory target region map is accomplished by assigning one neuron of the speech sound map to the phonemic representation of that speech item and by associating it with the auditory representation of that speech item, which is activated at the auditory target region map. Auditory regions (i.e. a specification of the auditory variability of a speech unit) occur because one specific speech unit (i.e. one specific phonemic representation) can be realized by several (slightly) different acoustic (auditory) realizations (for the difference between speech item and speech unit see above: feedforward control).
The tuning of the synaptic projections between speech sound map and motor map (i.e. tuning of forward motor commands) is accomplished with the aid of feedback commands, since the projections between sensory error maps and motor map were already tuned during babbling training (see above). Thus the DIVA model tries to "imitate" an auditory speech item by attempting to find a proper feedforward motor command. Subsequently, the model compares the resulting sensory output (currentsensory state following the articulation of that attempt) with the already learned auditory target region (intendedsensory state) for that speech item. Then the model updates the current feedforward motor command by the current feedback motor command generated from the auditory error map of the auditory feedback system. This process may be repeated several times (several attempts). The DIVA model is capable of producing the speech item with a decreasing auditory difference between current and intended auditory state from attempt to attempt.
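The repeated attempts can be sketched as an iterative update in which the feedback command from one attempt is folded into the feedforward command for the next. The linear "plant" standing in for the articulatory-to-auditory transformation, the learning gain, and the three-dimensional toy spaces below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
plant = rng.normal(size=(3, 3))                 # toy articulatory-to-auditory transformation
plant /= np.linalg.norm(plant, 2)               # normalise so the toy loop converges
target = np.array([1.0, -0.5, 0.25])            # learned auditory target for one speech unit

feedforward = np.zeros(3)                       # initial (untrained) feedforward motor command
for attempt in range(10):
    auditory_state = plant @ feedforward        # articulate the attempt and listen to it
    error = target - auditory_state             # auditory error map activity
    feedback = 0.3 * plant.T @ error            # feedback command, as learned during babbling
    feedforward = feedforward + feedback        # fold the feedback into the feedforward command
    print(attempt, np.linalg.norm(error))       # the auditory error typically shrinks per attempt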
During imitation the DIVA model is also capable of tuning the synaptic projections from speech sound map to somatosensory target region map, since each new imitation attempt produces a new articulation of the speech item and thus produces asomatosensorystate pattern which is associated with the phonemic representation of that speech item.
While auditory feedback is most important during speech acquisition, it may be activated less if the model has learned a proper feedforward motor command for each speech unit. But it has been shown that auditory feedback needs to be strongly coactivated in the case of auditory perturbation (e.g. shifting a formant frequency, Tourville et al. 2005).[11]This is comparable to the strong influence of visual feedback on reaching movements during visual perturbation (e.g. shifting the location of objects by viewing through aprism).
In a comparable way to auditory feedback, also somatosensory feedback can be strongly coactivated during speech production, e.g. in the case of unexpected blocking of the jaw (Tourville et al. 2005).
A further approach in neurocomputational modeling of speech processing is the ACT model developed byBernd J. Krögerand his group[12]atRWTH Aachen University, Germany (Kröger et al. 2014,[13]Kröger et al. 2009,[14]Kröger et al. 2011[15]). The ACT model is in accord with the DIVA model in large parts. The ACT model focuses on the "actionrepository" (i.e.repositoryforsensorimotor speaking skills, comparable to the mental syllabary, see Levelt and Wheeldon 1994[16]), which is not spelled out in detail in the DIVA model. Moreover, the ACT model explicitly introduces a level ofmotor plans, i.e. a high-level motor description for the production of speech items (seemotor goals,motor cortex). The ACT model - like any neurocomputational model - remains speculative to some extent.
The organization or structure of the ACT model is given in Fig. 5.
For speech production, the ACT model starts with the activation of a phonemic representation of a speech item (phonemic map). In the case of a frequent syllable, a co-activation occurs at the level of the phonetic map, leading to a further co-activation of the intended sensory state at the level of the sensory state maps and to a co-activation of a motor plan state at the level of the motor plan map. In the case of an infrequent syllable, an attempt at a motor plan is generated by the motor planning module for that speech item by activating motor plans for phonetically similar speech items via the phonetic map (see Kröger et al. 2011[17]). The motor plan or vocal tract action score comprises temporally overlapping vocal tract actions, which are programmed and subsequently executed by the motor programming, execution, and control module. This module receives real-time somatosensory feedback information for controlling the correct execution of the (intended) motor plan. Motor programming leads to an activation pattern at the level of the primary motor map and subsequently activates neuromuscular processing. Motoneuron activation patterns generate muscle forces and subsequently movement patterns of all model articulators (lips, tongue, velum, glottis). The Birkholz 3D articulatory synthesizer is used to generate the acoustic speech signal.
Articulatory and acoustic feedback signals are used for generating somatosensory and auditory feedback information via the sensory preprocessing modules, which is forwarded towards the auditory and somatosensory map. At the level of the sensory-phonetic processing modules, auditory and somatosensory information is stored in short-term memory, and the external sensory signals (ES, Fig. 5, which are activated via the sensory feedback loop) can be compared with the already trained sensory signals (TS, Fig. 5, which are activated via the phonetic map). Auditory and somatosensory error signals can be generated if external and intended (trained) sensory signals are noticeably different (cf. DIVA model).
The light green area in Fig. 5 indicates those neural maps and processing modules, which process asyllableas a whole unit (specific processing time window around 100 ms and more). This processing comprises the phonetic map and the directly connected sensory state maps within the sensory-phonetic processing modules and the directly connected motor plan state map, while the primary motor map as well as the (primary) auditory and (primary) somatosensory map process smaller time windows (around 10 ms in the ACT model).
The hypothetical cortical locations of the neural maps within the ACT model are shown in Fig. 6. The hypothetical locations of primary motor and primary sensory maps are given in magenta, the hypothetical locations of the motor plan state map and sensory state maps (within the sensory-phonetic processing module, comparable to the error maps in DIVA) are given in orange, and the hypothetical locations of the mirrored phonetic map are given in red. Double arrows indicate neuronal mappings. Neural mappings connect neural maps, which are not far apart from each other (see above). The two mirrored locations of the phonetic map are connected via a neural pathway (see above), leading to a (simple) one-to-one mirroring of the current activation pattern for both realizations of the phonetic map. This neural pathway between the two locations of the phonetic map is assumed to be a part of the fasciculus arcuatus (AF, see Fig. 5 and Fig. 6).
Forspeech perception, the model starts with an external acoustic signal (e.g. produced by an external speaker). This signal is preprocessed, passes the auditory map, and leads to an activation pattern for each syllable or word on the level of the auditory-phonetic processing module (ES: external signal, see Fig. 5). The ventral path of speech perception (see Hickok and Poeppel 2007[18]) would directly activate a lexical item, but is not implemented in ACT. Rather, in ACT the activation of a phonemic state occurs via the phonemic map and thus may lead to a coactivation of motor representations for that speech item (i.e. dorsal pathway of speech perception; ibid.).
The phonetic map together with the motor plan state map, sensory state maps (occurring within the sensory-phonetic processing modules), and phonemic (state) map form the action repository. The phonetic map is implemented in ACT as aself-organizing neural mapand different speech items are represented by different neurons within this map (punctual or local representation, see above: neural representations). The phonetic map exhibits three major characteristics:
The phonetic map implements theaction-perception-linkwithin the ACT model (see also Fig. 5 and Fig. 6: the dual neural representation of the phonetic map in thefrontal lobeand at the intersection oftemporal lobeandparietal lobe).
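Because the phonetic map is implemented as a self-organizing map, its training can be sketched with the standard SOM update rule; the map size, input dimensionality and learning parameters below are illustrative assumptions rather than those of the ACT model.

import numpy as np

rng = np.random.default_rng(0)
map_side, input_dim = 10, 4                     # 10 x 10 phonetic map, 4-dimensional toy features
weights = rng.random((map_side, map_side, input_dim))
coords = np.stack(np.meshgrid(np.arange(map_side), np.arange(map_side), indexing="ij"), axis=-1)

def som_step(x, weights, lr=0.1, radius=2.0):
    # One self-organizing-map update: find the best-matching unit (the "winning" neuron,
    # i.e. a punctual representation) and pull it and its neighbours towards the input x.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    neighbourhood = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
    weights += lr * neighbourhood[..., None] * (x - weights)
    return bmu

for _ in range(500):                            # train on random stand-ins for speech item features
    som_step(rng.random(input_dim), weights)

After training, nearby neurons respond to similar inputs, which corresponds to the ordering of phonetic items with respect to phonetic features reported for the trained phonetic map.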
A motor plan is a high-level motor description for the production and articulation of a speech item (see motor goals, motor skills, articulatory phonetics, articulatory phonology). In the ACT model a motor plan is quantified as a vocal tract action score. Vocal tract action scores quantitatively determine the number of vocal tract actions (also called articulatory gestures) which need to be activated in order to produce a speech item, their degree of realization and duration, and the temporal organization of all vocal tract actions building up a speech item (for a detailed description of vocal tract action scores see e.g. Kröger & Birkholz 2007).[19] The detailed realization of each vocal tract action (articulatory gesture) depends on the temporal organization of all vocal tract actions building up a speech item and especially on their temporal overlap. Thus the detailed realization of each vocal tract action within a speech item is specified below the motor plan level in the ACT model (see Kröger et al. 2011).[20]
A severe problem of phonetic or sensorimotor models of speech processing (like DIVA or ACT) is that the development of thephonemic mapduring speech acquisition is not modeled. A possible solution of this problem could be a direct coupling of action repository and mental lexicon without explicitly introducing a phonemic map at the beginning of speech acquisition (even at the beginning of imitation training; see Kröger et al. 2011 PALADYN Journal of Behavioral Robotics).
A very important issue for all neuroscientific or neurocomputational approaches is to separate structure and knowledge. While the structure of the model (i.e. of the human neuronal network, which is needed for processing speech) is mainly determined by evolutionary processes, the knowledge is gathered mainly during speech acquisition by processes of learning. Different learning experiments were carried out with the ACT model in order to learn (i) a five-vowel system /i, e, a, o, u/ (see Kröger et al. 2009), (ii) a small consonant system (voiced plosives /b, d, g/) in combination with all five vowels acquired earlier, as CV syllables (ibid.), (iii) a small model language comprising the five-vowel system, voiced and unvoiced plosives /b, d, g, p, t, k/, nasals /m, n/ and the lateral /l/, and three syllable types (V, CV, and CCV) (see Kröger et al. 2011)[21] and (iv) the 200 most frequent syllables of Standard German for a 6-year-old child (see Kröger et al. 2011).[22] In all cases, an ordering of phonetic items with respect to different phonetic features can be observed.
Although the ACT model in its earlier versions was designed as a pure speech production model (including speech acquisition), it is capable of exhibiting important basic phenomena of speech perception, i.e. categorical perception and the McGurk effect. In the case of categorical perception, the model shows that categorical perception is stronger for plosives than for vowels (see Kröger et al. 2009). Furthermore, the ACT model was able to exhibit the McGurk effect if a specific mechanism of inhibition of neurons at the level of the phonetic map was implemented (see Kröger and Kannampuzha 2008).[23]
|
https://en.wikipedia.org/wiki/Neurocomputational_speech_processing
|
Speech codingis an application ofdata compressiontodigital audiosignals containingspeech. Speech coding uses speech-specificparameter estimationusingaudio signal processingtechniques to model the speech signal, combined with generic data compression algorithms to represent the resulting modeled parameters in a compact bitstream.[1]
Common applications of speech coding aremobile telephonyandvoice over IP(VoIP).[2]The most widely used speech coding technique in mobile telephony islinear predictive coding(LPC), while the most widely used in VoIP applications are the LPC andmodified discrete cosine transform(MDCT) techniques.[citation needed]
The techniques employed in speech coding are similar to those used inaudio data compressionandaudio codingwhere appreciation ofpsychoacousticsis used to transmit only data that is relevant to the human auditory system. For example, invoicebandspeech coding, only information in the frequency band 400 to 3500 Hz is transmitted but the reconstructed signal retains adequateintelligibility.
Speech coding differs from other forms of audio coding in that speech is a simpler signal than other audio signals, and statistical information is available about the properties of speech. As a result, some auditory information that is relevant in general audio coding can be unnecessary in the speech coding context. Speech coding stresses the preservation of intelligibility andpleasantnessof speech while using a constrained amount of transmitted data.[3]In addition, most speech applications require low coding delay, aslatencyinterferes with speech interaction.[4]
Speech coders are of two classes:[5]
The A-law and μ-law algorithms used in G.711 PCM digital telephony can be seen as an early precursor of speech encoding, requiring only 8 bits per sample but giving effectively 12 bits of resolution.[7] Logarithmic companding is consistent with human hearing perception in that a low-amplitude noise is audible alongside a low-amplitude speech signal but is masked by a high-amplitude one. Although this would generate unacceptable distortion in a music signal, the peaky nature of speech waveforms, combined with the simple frequency structure of speech as a periodic waveform having a single fundamental frequency with occasional added noise bursts, makes these very simple instantaneous compression algorithms acceptable for speech.[citation needed][dubious–discuss]
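The μ-law compression characteristic is simple enough to sketch directly; the continuous form below illustrates the principle, not the exact segmented encoding tables of the G.711 standard.

import numpy as np

def mu_law_encode(x, mu=255):
    # Compress a signal in [-1, 1] with the mu-law characteristic, then quantise to 8 bits.
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.round((compressed + 1) / 2 * 255).astype(np.uint8)

def mu_law_decode(codes, mu=255):
    # Invert the 8-bit mu-law code back to an approximate sample value.
    compressed = codes.astype(np.float64) / 255 * 2 - 1
    return np.sign(compressed) * ((1 + mu) ** np.abs(compressed) - 1) / mu

samples = np.linspace(-1, 1, 5)
print(mu_law_decode(mu_law_encode(samples)))    # close to the original, with finer steps near zero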
A wide variety of other algorithms were tried at the time, mostlydelta modulationvariants, but after careful consideration, the A-law/μ-law algorithms were chosen by the designers of the early digital telephony systems. At the time of their design, their 33% bandwidth reduction for a very low complexity made an excellent engineering compromise. Their audio performance remains acceptable, and there was no need to replace them in the stationary phone network.[citation needed]
In 2008, the G.711.1 codec, which has a scalable structure, was standardized by the ITU-T. Its input sampling rate is 16 kHz.[8]
Much of the later work in speech compression was motivated by military research into digital communications forsecure military radios, where very low data rates were used to achieve effective operation in a hostile radio environment. At the same time, far moreprocessing powerwas available, in the form ofVLSI circuits, than was available for earlier compression techniques. As a result, modern speech compression algorithms could use far more complex techniques than were available in the 1960s to achieve far higher compression ratios.
The most widely used speech coding algorithms are based onlinear predictive coding(LPC).[9]In particular, the most common speech coding scheme is the LPC-basedcode-excited linear prediction(CELP) coding, which is used for example in theGSMstandard. In CELP, the modeling is divided in two stages, alinear predictivestage that models the spectral envelope and a code-book-based model of the residual of the linear predictive model. In CELP, linear prediction coefficients (LPC) are computed and quantized, usually asline spectral pairs(LSPs). In addition to the actual speech coding of the signal, it is often necessary to usechannel codingfor transmission, to avoid losses due to transmission errors. In order to get the best overall coding results, speech coding and channel coding methods are chosen in pairs, with the more important bits in the speech data stream protected by more robust channel coding.
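A minimal sketch of the linear-predictive stage is given below, assuming NumPy; real CELP codecs add quantisation of the coefficients as line spectral pairs, a codebook search for the residual, and perceptual weighting, none of which is shown here.

import numpy as np

def lpc_coefficients(frame, order=10):
    # Estimate LPC coefficients of one speech frame by the autocorrelation method
    # with the Levinson-Durbin recursion; returns A(z) coefficients and residual energy.
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    error = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / error   # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1][:i]                    # update the predictor in place
        error *= (1 - k ** 2)
    return a, error

fs = 8000
t = np.arange(0, 0.02, 1 / fs)                                # one 20 ms frame at 8 kHz
frame = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(t.size)
a, residual_energy = lpc_coefficients(frame)
print(a)                                                      # the predictor models the spectral envelope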
Themodified discrete cosine transform(MDCT) is used in the LD-MDCT technique used by theAAC-LDformat introduced in 1999.[10]MDCT has since been widely adopted invoice-over-IP(VoIP) applications, such as theG.729.1wideband audiocodec introduced in 2006,[11]Apple'sFaceTime(using AAC-LD) introduced in 2010,[12]and theCELTcodec introduced in 2011.[13]
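The MDCT itself can be written compactly; the direct O(N^2) form below illustrates the transform's definition rather than the fast, windowed, overlap-add implementations used in real codecs.

import numpy as np

def mdct(block):
    # Direct-form MDCT of a block of 2N samples, returning N coefficients.
    # In a codec, successive blocks overlap by 50% and are windowed (not shown here).
    two_n = len(block)
    n_half = two_n // 2
    n = np.arange(two_n)
    k = np.arange(n_half)
    basis = np.cos(np.pi / n_half * (n[None, :] + 0.5 + n_half / 2) * (k[:, None] + 0.5))
    return basis @ block

coeffs = mdct(np.random.randn(64))     # 64 input samples -> 32 MDCT coefficients
print(coeffs.shape)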
Opusis afree softwareaudio coder. It combines the speech-oriented LPC-basedSILKalgorithm and the lower-latency MDCT-based CELT algorithm, switching between or combining them as needed for maximal efficiency.[14][15]It is widely used for VoIP calls inWhatsApp.[16][17][18]ThePlayStation 4video game console also uses Opus for itsPlayStation Networksystem party chat.[19]
A number of codecs with even lowerbit rateshave been demonstrated.Codec2, which operates at bit rates as low as450 bit/s, sees use in amateur radio.[20]NATO currently usesMELPe, offering intelligible speech at600 bit/sand below.[21]Neural vocoder approaches have also emerged:Lyraby Google gives an "almost eerie" quality at3 kbit/s.[22]Microsoft'sSatinalso uses machine learning, but uses a higher tunable bitrate and is wideband.[23]
|
https://en.wikipedia.org/wiki/Speech_coding
|
Speech technologyrelates to the technologies designed to duplicate and respond to thehuman voice. They have many uses. These include aid to the voice-disabled, the hearing-disabled, and the blind, along with communication with computers without a keyboard. They enhance game software and aid in marketing goods or services by telephone.
The subject includes several subfields:
|
https://en.wikipedia.org/wiki/Speech_technology
|
Camera-readyis a common term used in the commercial printing industry meaning that a document is, from a technical standpoint, ready to "go to press", or be printed.
The term camera-ready was first used in the photooffset printingprocess, where the final layout of a document was attached to a "mechanical" or "paste up". Then, astat camerawas used to photograph the mechanical, and the final offset printing plates were created from the camera's negative.
In this system, a final paste-up that needed no further changes or additions was ready to be photographed by the process camera and subsequently printed. This final document wascamera-ready.
This artwork may have looked messy to the naked eye or to a modern consumer digital camera - covered in various pieces of paper attached with adhesive or wax and composited with white out, gouache, tape and blue pencil (non-reproducible) and red masking film, black or clay-based paint (reproducible as black) - but appeared perfectly uniform to the monochrome reproducing camera used for print reproduction.
In recent years, the use of paste-ups has been steadily replaced bydesktop publishingsoftware, which allows users to create entire document layouts on the computer. In the meantime, many printers now use technology to take these digital files and create printing plates from them without use of a camera and negative. Despite this, the termcamera-readycontinues to be used to signify that a document is ready to be made into a printing plate.
In this new digital-to-plate system, a digital file is usually consideredcamera-readyif it meets several conditions:
|
https://en.wikipedia.org/wiki/Camera-ready
|
Etaoin shrdlu(/ˈɛti.ɔɪnˈʃɜːrdluː/,[1]/ˈeɪtɑːnʃrədˈluː/)[2]is a nonsense phrase that sometimes appeared by accident in print in the days ofhot typepublishing, resulting from a custom oftype-casting machineoperators filling out and discarding lines of type when an error was made. It appeared often enough to become part of newspaper lore – a documentary about the last issue ofThe New York Timescomposed using hot metal (July 2, 1978) was titledFarewell, Etaoin Shrdlu.[3]The phraseetaoin shrdluis listed in theOxford English Dictionaryand in theRandom House Webster's Unabridged Dictionary.
The letters in the string are, approximately, the twelve most commonly used letters in the English language; differing sources do give slightly different results, but one well-known sequence is ETAOINS RHLDCUM, ordered by their frequency.[4]
The letters ontype-casting machinekeyboards (such asLinotypeandIntertype) were arranged by descendingletter frequencyto speed up the mechanical operation of the machine, so lower-casee-t-a-o-i-nands-h-r-d-l-uwere the first two columns on the left side of the keyboard.
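The frequency ordering behind this layout is easy to reproduce on any sizable English text; the short sample string below is only a stand-in for a large corpus, so its ordering will merely approximate the canonical one.

from collections import Counter

sample = """Counting the letters of a large English corpus yields an ordering
close to etaoin shrdlu; this short sample will only approximate it."""

counts = Counter(ch for ch in sample.lower() if ch.isalpha())
ordering = "".join(letter for letter, _ in counts.most_common(12))
print(ordering)        # the twelve most frequent letters in the sample, most frequent first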
Each key would cause a brassmatrix(an individual letter mold) from the corresponding slot in a font magazine to drop and be added to a line mold. After a line had been cast, the constituent matrices of its mold were returned to the font magazine.
If a mistake was made, the line could theoretically be corrected by hand in the assembler area. However, manipulating the matrices by hand within the partially assembled line was time-consuming and presented the chance of disturbing important adjustments. It was much quicker to fill out the bad line and discard the resulting line of text than it was to redo it properly.
To make the line long enough to proceed through the machine, operators would finish it by running a finger down the first columns of the keyboard, which created a pattern that could be easily noticed by proofreaders. Occasionally such a line would be overlooked and make its way into print.
The phrase has gained enough notability to appear outside typography, including:
|
https://en.wikipedia.org/wiki/Etaoin_shrdlu
|
Inprintingandpublishing,proofsare the preliminary versions of publications meant for review by authors, editors, and proofreaders, often with extra-wide margins.Galley proofsmay be uncut andunbound, or in some caseselectronically transmitted. They are created forproofreadingandcopyeditingpurposes, but may also be used for promotional and review purposes.[1][2][3]
Proof, in thetypographicalsense, is a term that dates to around 1600.[4]The primary goal of proofing is to create a tool for verification that the job is accurate separate from the pages produced on the press. All needed or suggested changes are physically marked on paper proofs or electronically marked on electronic proofs by the author, editor, and proofreaders. Thecompositor, typesetter, or printer receives the edited copies, corrects and re-arranges the type or the pagination, and arranges for the press workers to print the final or published copies.
Galley proofsorgalleysare so named because in the days of hand-setletterpress printingin the 1650s, the printer would set the page into "galleys", metal trays into which type was laid and tightened into place.[5]A small proof press would then be used to print a limited number of copies forproofreading.[5]Galley proofs are thus, historically speaking, galleys printed on a proof press.
From the printer's point of view, the galley proof, as it originated during the era of hand-set physical type, had two primary purposes, those being to check that the compositor had set the copy accurately (because sometimes individual pieces of type did get put in the wrong case after use) and that the type was free of defects (because type metal is comparatively soft, so type can get damaged).
Once a defect-free galley proof was produced, the publishing house requested a number of galley proofs to be run off for distribution to editors and authors for a final reading and corrections to the text before the type was fixed in the case for printing.
An uncorrected proof is a proof version (on paper or in digital form) which is yet to receive final author and publisher approval. The term may also appear on the covers of advance reading copies; see below.
These days, because much typesetting and pre-press work is conducted digitally and transmitted electronically, the term uncorrected proof is more common than the older term galley proof, which refers exclusively to a paper proofing system. However, if a paper print-out of an uncorrected proof is made on a desk-top printer or copy machine and used as a paper proof for authorial or editorial mark-up, it approximates a galley proof, and it may be referred to as a galley.
Preliminary electronic proof versions are also sometimes calleddigital proofs,PDF proofs, andpre-fascicleproofs, the last because they are viewed as single pages, not as they will look when gathered into fascicles orsignaturesfor the press.[6]
Proofs created by the printer for approval by the publisher before going to press are calledfinal proofs. At this stage in production, all mistakes are supposed to have been corrected and the pages are set up in imposition for folding and cutting on the press. To correct a mistake at this stage entails an extra cost per page, so authors are discouraged from making many changes to final proofs, while last-minute corrections by the in-house publishing staff may be accepted.
In the final proof stage, page layouts are examined closely. Additionally, because final page proofs contain the finalpagination, if an index was not compiled at an earlier stage in production, this pagination facilitates compiling a book'sindexand correcting its table of contents.
Historically, some publishers have used paper galley proofs asadvance copies or advance reading copies(ARCs) or as pre-publication publicity proofs. These are provided to reviewers, magazines, and libraries in advance of final publication. These galleys are not sent out for correction, but to ensure timely reviews of newly published works. The list of recipients designated by the publisher limits the number of copies to only what is required, making advance copies a form ofprint-on-demand(POD) publication.
Pre-publication publicity proofs are normally gathered and bound in paper, but in the case of books with four-color printed illustrations, publicity proofs may be lacking illustrations or have them in black and white only.[citation needed]They may be marked or stamped on the cover "uncorrected proof", but the recipient is not expected to proofread them, merely to overlook any minor errors of typesetting.
Galley proofs in electronic form are rarely used as advance reading copies due to the possibility of a recipient editing the proof and issuing it as their own. However, trusted colleagues are occasionally offered electronic advance reading copies, especially if the publisher wishes to quickly typeset a page or two of "advance praise" notices within the book itself.
|
https://en.wikipedia.org/wiki/Galley_proof
|
ISO 5776, published by the International Organization for Standardization (ISO), is an international standard that specifies symbols for proofreading manuscripts, typescripts and printer's proofs.[1] The total number of symbols specified is 16, each given in English, French and Russian.
The standard is partially derived from theBritish StandardBS-5261,[2]but is closer to German standards DIN 16511 and 16549-1. All of these standards date from the time beforedesktop publishing.
A first edition of the standard was published in 1983.[3]
A second edition of the standard was published in 2016 which cancels and replaces the first edition from 1983.[4]
The third revised edition was published in 2022 and replaced the second edition from 2016.[5]
|
https://en.wikipedia.org/wiki/ISO_5776
|
This article is alist of standard proofreader's marksused to indicate and correct problems in a text. Marks come in two varieties, abbreviations and abstract symbols. These are usually handwritten on the paper containing the text. Symbols are interleaved in the text, while abbreviations may be placed in a margin with an arrow pointing to the problematic text. Different languages use differentproofreadingmarks and sometimes publishers have their own in-house proofreading marks.[1]
These abbreviations are those prescribed by theChicago Manual of Style.[2]Other conventions exist.
Depending on local conventions,underscores(underlines) may be used on manuscripts (and historically on typescripts) to indicate the specialtypefacesto be used:[4][5]
|
https://en.wikipedia.org/wiki/List_of_proofreader%27s_marks
|
Obelismis the practice of annotatingmanuscriptswith marks set in the margins. Modern obelisms are used by editors whenproofreadinga manuscript or typescript. Examples are "stet" (which is Latin for "Let it stand", used in this context to mean "disregard the previous mark") and "dele" (for "Delete").
Theobelossymbol (seeobelus) gets its name from the spit, or sharp end of alanceinancient Greek. An obelos was placed by editors on the margins of manuscripts, especially inHomer, to indicate lines that may not have been written by Homer. The system was developed byAristarchusand notably used later byOrigenin hisHexapla. Origen marked spurious words with an opening obelus and a closing metobelos ("end of obelus").[1]
There were many other suchshorthandsymbols, to indicate corrections, emendations, deletions, additions, and so on. Most used are the editorialcoronis, theparagraphos, the forked paragraphos, the reversed forked paragraphos, thehypodiastole, thedownwards ancora, theupwards ancora, and thedotted right-pointing angle, which is also known as thediple periestigmene. Loosely, all these symbols, and the act of annotation by means of them, areobelism.
These nine ancient Greek textual annotation symbols are also included in the supplemental punctuation list ofISO/IEC 10646 standardfor character sets.
Unicodeencodes the following:
Some of these were also used inAncient Greek punctuationasword dividers.[2]The two-dot punctuation is used as a word separator inOld Turkic script.
|
https://en.wikipedia.org/wiki/Obelism
|
Theprinting press checkis a step in theprintingprocess. It takes place after aprinting pressis set up but before the print run is underway.
While errors should be corrected during theColor Proofingandproofreadingstages, the main purpose of a press check is to make sure that the color on press comes as close as possible to the color proof. Color proofs are valuable guides, but due to the inherent differences between color proofing techniques and printing itself, proofs will match the printed sheet with varying degrees of exactness.
Areas that are commonly evaluated at a press check are:[1][2][3]
While some printing jobs are delivered as printed, most printing is usually not complete until it is converted into a "finished" product. Post press includes various types of finish work such as trimming, embossing, foiling, die-cutting, scoring, folding and bindery. Post press checking can include:
|
https://en.wikipedia.org/wiki/Press_check_(printing)
|