RDF/XML is a syntax,[1] defined by the W3C, to express (i.e. serialize) an RDF graph as an XML document. RDF/XML is sometimes misleadingly called simply RDF because it was introduced among the other W3C specifications defining RDF and it was historically the first W3C standard RDF serialization format.
RDF/XML is the primary exchange syntax for OWL 2, and must be supported by all OWL 2 tools.[2]
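For illustration, a single triple giving a title to a resource might be serialized as follows (a minimal sketch; the example.org URI and the title literal are invented):

```xml
<?xml version="1.0" encoding="utf-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/book">
    <dc:title>A Sample Book</dc:title>
  </rdf:Description>
</rdf:RDF>
```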
|
https://en.wikipedia.org/wiki/RDF/XML
|
RDFa or Resource Description Framework in Attributes[1] is a W3C Recommendation that adds a set of attribute-level extensions to HTML, XHTML and various XML-based document types for embedding rich metadata within web documents. The Resource Description Framework (RDF) data-model mapping enables the use of RDF for embedding subject-predicate-object expressions within XHTML documents. RDFa also enables the extraction of RDF model triples by compliant user agents.
The RDFa community runs a wiki website to host tools, examples, and tutorials.[2]
RDFa was first proposed by Mark Birbeck in the form of a W3C note entitled XHTML and RDF,[3] which was then presented to the Semantic Web Interest Group[4] at the W3C's 2004 Technical Plenary.[5] Later that year the work became part of the sixth public Working Draft of XHTML 2.0.[6][7] Although it is generally assumed that RDFa was originally intended only for XHTML 2, in fact the purpose of RDFa was always to provide a way to add metadata to any XML-based language. Indeed, one of the earliest documents bearing the RDF/A Syntax name has the sub-title A collection of attributes for layering RDF on XML languages.[8] The document was written by Mark Birbeck and Steven Pemberton, and was made available for discussion on October 11, 2004.
In April 2007 the XHTML 2 Working Group produced a module to support RDF annotation within the XHTML 1 family.[9] As an example, it included an extended version of XHTML 1.1 dubbed XHTML+RDFa 1.0. Although described as not representing an intended direction in terms of a formal markup language from the W3C, limited use of the XHTML+RDFa 1.0 DTD did subsequently appear on the public Web.[10]
October 2007 saw the first public Working Draft of a document entitled RDFa in XHTML: Syntax and Processing.[11] This superseded and expanded upon the April draft; it contained rules for creating an RDFa parser, as well as guidelines for organizations wishing to make practical use of the technology.
In October 2008 RDFa 1.0 reached recommendation status.[12]
RDFa 1.1 reached recommendation status in June 2012.[13] It differs from RDFa 1.0 in that it no longer relies on the XML-specific namespace mechanism. Therefore, it is possible to use RDFa 1.1 with non-XML document types such as HTML 4 or HTML 5. Details can be found in an appendix to HTML 5.[14]
An additional RDFa 1.1 Primer document was last updated 17 March 2015.[1] (The first public Working Draft dates back to 10 March 2006.[15])
There are several well-defined variants of the basic concepts, which are used as reference points and as shorthand for the W3C standards.
RDFa was defined in 2008 with the "RDFa in XHTML: Syntax and Processing" Recommendation.[16] Its first application was as a module of XHTML.
The HTML applications remained "a collection of attributes and processing rules for extending XHTML to support RDF"; expanded to HTML5, they are now expressed in a specialized standard, "HTML+RDFa" (the latest being "HTML+RDFa 1.1 - Support for RDFa in HTML4 and HTML5"[17]).
The "HTML+RDFa" syntax of 2008 was also termed "RDFa 1.0"; consequently, there is no "RDFa Core 1.0" standard.
In general, the 2008 RDFa 1.0 is used with the old XHTML standards, while RDFa 1.1 is used with XHTML5 and HTML5.
RDFa Core 1.1 is the first generic (for HTML and XML) RDFa standard; it has been in its Third Edition since 2015.[18]
RDFa Lite has been a W3C Recommendation (1.0 and 1.1) since 2009,[19] where it is described as follows:[20]
RDFa Lite is a minimal subset of RDFa ... consisting of a few attributes that may be used to express machine-readable data in Web documents like HTML, SVG, and XML. While it is not a complete solution for advanced data markup tasks, it does work for most day-to-day needs and can be learned by most Web authors in a day.
RDFa Lite consists of five attributes: vocab, typeof, property, resource, and prefix.[20] RDFa 1.1 Lite is upwards compatible with RDFa 1.1.[20]
In 2009 the W3C was positioned[21] to retain RDFa Lite as the unique and definitive standard alternative to Microdata.[22] The position was confirmed with the publication of the HTML5 Recommendation in 2014.
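A sketch of how these five attributes can be combined (the schema.org vocabulary, the person's name, the fragment identifier, and the URLs are illustrative choices, not taken from the specification text):

```html
<p vocab="http://schema.org/" prefix="foaf: http://xmlns.com/foaf/0.1/">
  <span typeof="Person" resource="#jane">
    My name is <span property="name">Jane Doe</span> and my homepage is
    <a property="foaf:homepage" href="http://example.com/jane">here</a>.
  </span>
</p>
```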
The essence of RDFa is to provide a set of attributes that can be used to carry metadata in an XML language (hence the 'a' in RDFa).
These attributes are:
There are five "principles of interoperable metadata" met by RDFa.[23]
Additionally, RDFa may benefit web accessibility, as more information is available to assistive technology.[24]
There is a growing number of tools for making better use of RDFa vocabularies and RDFa annotation.
Simplified approaches to semantically annotating information items in webpages were greatly encouraged by the HTML+RDFa (released in 2008) and microformats (since ~2005) standards.
As of 2013, these standards were encoding events, contact information, products, and so on. Despite the dominance of vCard semantics (only basic items of person and organization annotations),[25] and some cloning of annotations within the same domain, counting webpages (URLs) and domains with annotations is an important statistical indicator for the usage of semantically annotated information on the Web.
The statistics of 2017 show that usage[26] of HTML+RDFa is now less than that of Microformats.
The following is an example of adding Dublin Core metadata to an XML element in an XHTML file. Dublin Core elements are data typically added to a book or article (title, author, subject, etc.).
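For instance (a sketch; the URI, title, and author values are invented):

```xml
<div xmlns:dc="http://purl.org/dc/elements/1.1/"
     about="http://example.org/article">
  <span property="dc:title">A Sample Article</span> by
  <span property="dc:creator">Jane Doe</span>
</div>
```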
Moreover, RDFa allows the passages and words within a text to be associated with semantic markup:
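For example, individual phrases inside running prose can carry properties (a sketch with invented values; the content attribute supplies a machine-readable form of the displayed date):

```html
<p about="http://example.org/article">
  This article was written by
  <span property="dc:creator">Jane Doe</span> on
  <span property="dc:date" content="2010-05-01">May 1st, 2010</span>.
</p>
```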
The following is an example of a complete XHTML+RDFa 1.0 document. It uses Dublin Core and FOAF, an ontology for describing people and their relationships with other people and things:
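A hedged reconstruction of such a document, matching the features discussed in the following paragraphs (the names, URIs, and ISBN are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:foaf="http://xmlns.com/foaf/0.1/"
      xmlns:dc="http://purl.org/dc/elements/1.1/"
      version="XHTML+RDFa 1.0">
  <head>
    <title>John's Home Page</title>
    <link rel="foaf:primaryTopic" href="#me" />
    <meta property="dc:creator" content="Jonathan Doe" />
  </head>
  <body about="http://example.org/john-d/#me">
    <h1>John's Home Page</h1>
    <p>My name is <span property="foaf:nick">John D</span> and I like
       <a rel="foaf:interest" href="http://www.neubauten.org/">Einstürzende
       Neubauten</a>.</p>
    <p rel="foaf:interest" resource="urn:ISBN:0752820907">My favorite book
       is the inspiring <span about="urn:ISBN:0752820907"><cite
       property="dc:title">Weaving the Web</cite> by
       <span property="dc:creator">Tim Berners-Lee</span></span>.</p>
  </body>
</html>
```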
In the example above, the document URI can be seen as representing an HTML document, but the document URI plus the "#me" string, http://example.org/john-d/#me, represents the actual person, as distinct from a document about them. The foaf:primaryTopic in the header tells us the URI of the person the document is about. The foaf:nick property (in the first span element) contains a nickname for this person, and the dc:creator property (in the meta element) tells us who created the document. The hyperlink to the Einstürzende Neubauten website contains rel="foaf:interest", suggesting that John Doe is interested in this band. The URI of their website is a resource.
The foaf:interest inside the second p element refers to a book by ISBN. The resource attribute defines a resource in a similar way to the href attribute, but without defining a hyperlink. Further into the paragraph, a span element containing an about attribute defines the book as another resource to specify metadata about. The book title and author are defined within the contents of this tag using the dc:title and dc:creator properties.
Here are the same triples when the above document is automatically converted to RDF/XML:
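A sketch of what such a conversion might produce, using the resources and properties discussed in the surrounding prose (exact output varies by converter; the names and ISBN are illustrative):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/john-d/">
    <dc:creator>Jonathan Doe</dc:creator>
    <foaf:primaryTopic rdf:resource="http://example.org/john-d/#me" />
  </rdf:Description>
  <rdf:Description rdf:about="http://example.org/john-d/#me">
    <foaf:nick>John D</foaf:nick>
    <foaf:interest rdf:resource="http://www.neubauten.org/" />
    <foaf:interest rdf:resource="urn:ISBN:0752820907" />
  </rdf:Description>
  <rdf:Description rdf:about="urn:ISBN:0752820907">
    <dc:title>Weaving the Web</dc:title>
    <dc:creator>Tim Berners-Lee</dc:creator>
  </rdf:Description>
</rdf:RDF>
```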
The above example can be expressed without XML namespaces in HTML5:
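A sketch of such markup (the values are invented; the foaf prefix is deliberately left undeclared):

```html
<p about="http://example.org/john-d/#me">
  My name is <span property="foaf:nick">John D</span> and I like
  <a rel="foaf:interest" href="http://www.neubauten.org/">Einstürzende Neubauten</a>.
</p>
```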
Note how the prefix foaf is still used without declaration. RDFa 1.1 automatically includes prefixes for popular vocabularies such as FOAF.[30]
The minimal[31] document is:
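A sketch of such a minimal document (the schema.org vocabulary and the name are illustrative):

```html
<html>
  <body vocab="http://schema.org/">
    <div typeof="Person">
      <span property="name">Jane Doe</span>
    </div>
  </body>
</html>
```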
That is, it is recommended that all of these attributes be used: vocab, typeof, and property, not only one of them.
RDFa Structured Data Example
Person Schema in RDFa.[32]
|
https://en.wikipedia.org/wiki/RDFa
|
JSON-LD (JavaScript Object Notation for Linked Data) is a method of encoding linked data using JSON. One goal for JSON-LD was to require as little effort as possible from developers to transform their existing JSON to JSON-LD.[1] JSON-LD allows data to be serialized in a way that is similar to traditional JSON.[2] It was initially developed by the JSON for Linking Data Community Group[3] before being transferred to the RDF Working Group[4] for review, improvement, and standardization,[5] and is currently maintained by the JSON-LD Working Group.[6] JSON-LD is a World Wide Web Consortium Recommendation.
JSON-LD is designed around the concept of a "context" to provide additional mappings from JSON to an RDF model. The context links object properties in a JSON document to concepts in an ontology. In order to map the JSON-LD syntax to RDF, JSON-LD allows values to be coerced to a specified type or to be tagged with a language. A context can be embedded directly in a JSON-LD document or put into a separate file and referenced from different documents (from traditional JSON documents via an HTTP Link header).
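Such a document might look as follows (a sketch; the FOAF term IRIs are real, while the person's name and the URLs are invented):

```json
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/homepage",
      "@type": "@id"
    },
    "Person": "http://xmlns.com/foaf/0.1/Person"
  },
  "@id": "https://me.example.com",
  "@type": "Person",
  "name": "John Smith",
  "homepage": "https://www.example.com/"
}
```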
The example above describes a person, based on the FOAF (friend of a friend) ontology. First, the two JSON properties name and homepage and the type Person are mapped to concepts in the FOAF vocabulary, and the value of the homepage property is specified to be of the type @id. In other words, the homepage id is specified to be an IRI in the context definition. Based on the RDF model, this allows the person described in the document to be unambiguously identified by an IRI. The use of resolvable IRIs allows RDF documents containing more information to be transcluded, which enables clients to discover new data by simply following those links; this principle is known as 'Follow Your Nose'.[7]
By having all data semantically annotated as in the example, an RDF processor can identify that the document contains information about a person (@type) and if the processor understands the FOAF vocabulary it can determine which properties specify the person's name and homepage.
The encoding is used by Schema.org[8] and Google Knowledge Graph,[9][10] and is used mostly for search engine optimization activities. It has also been used for applications such as biomedical informatics[11] and representing provenance information.[12] It is also the basis of Activity Streams, a format for "the exchange of information about potential and completed activities",[13] and is used in ActivityPub, the federated social networking protocol.[14] Additionally, it is used in the context of the Internet of Things (IoT), where a Thing Description,[15] which is a JSON-LD document, describes the network-facing interfaces of IoT devices.
|
https://en.wikipedia.org/wiki/JSON-LD
|
Notation3, or N3 as it is more commonly known, is a shorthand non-XML serialization of Resource Description Framework models, designed with human-readability in mind: N3 is much more compact and readable than XML RDF notation. The format is being developed by Tim Berners-Lee and others from the Semantic Web community. A formalization of the logic underlying N3 was published by Berners-Lee and others in 2008.[1]
N3 has several features that go beyond a serialization for RDF models, such as support for RDF-based rules. Turtle is a simplified, RDF-only subset of N3.
The following is an RDF model in standard XML notation:
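A hedged reconstruction of such a model (the subject URI and the Dublin Core literals are illustrative):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="https://en.wikipedia.org/wiki/Tony_Benn">
    <dc:title>Tony Benn</dc:title>
    <dc:publisher>Wikipedia</dc:publisher>
  </rdf:Description>
</rdf:RDF>
```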
may be written in Notation3 like this:
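A hedged sketch of the Notation3 form (the subject URI and Dublin Core values are illustrative):

```n3
@prefix dc: <http://purl.org/dc/elements/1.1/>.

<https://en.wikipedia.org/wiki/Tony_Benn>
    dc:title "Tony Benn";
    dc:publisher "Wikipedia".
```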
The N3 code above is also valid Turtle syntax.
|
https://en.wikipedia.org/wiki/Notation3
|
The following tables compare general and technical information for a number of relational database management systems. Please see the individual products' articles for further information. Unless otherwise specified in footnotes, comparisons are based on the stable versions without any add-ons, extensions or external programs.
The operating systems that the RDBMSes can run on.
Information about what fundamental RDBMS features are implemented natively.
Information about data size limits.
Information about what tables and views (other than basic ones) are supported natively.
Information about what indexes (other than basic B-/B+ tree indexes) are supported natively.
Information about what other objects are supported natively.
Information about what partitioning methods are supported natively.
Information about access control functionalities.
The SQL specification defines what an "SQL schema" is; however, databases implement it differently. To compound this confusion, the functionality can overlap with that of a parent database. An SQL schema is simply a namespace within a database; things within this namespace are addressed using the member operator dot ".". This seems to be universal among all of the implementations.
A true fully qualified (database, schema, and table) query is exemplified as: SELECT * FROM database.schema.table
Both a schema and a database can be used to isolate one table, "foo", from another like-named table "foo". The following is pseudo code:
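One possible rendering of that pseudo code (the identifiers are illustrative, and exact syntax varies by product):

```sql
-- Two like-named tables isolated by schema within one database:
SELECT * FROM db1.ancient_cities.foo;
SELECT * FROM db1.modern_cities.foo;

-- The same isolation achieved with two databases:
SELECT * FROM db1.default_schema.foo;
SELECT * FROM db2.default_schema.foo;
```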
The problem that arises is that former MySQL users will create multiple databases for one project. In this context, MySQL databases are analogous in function to PostgreSQL schemas, insomuch as PostgreSQL deliberately lacks off-the-shelf cross-database functionality (preferring multi-tenancy) that MySQL has. Conversely, PostgreSQL has applied more of the specification, implementing cross-table and cross-schema functionality, and has left room for future cross-database functionality.
MySQL aliases schema with database behind the scenes, such that CREATE SCHEMA and CREATE DATABASE are analogs. It can therefore be said that MySQL has implemented cross-database functionality, skipped schema functionality entirely, and provided similar functionality in its implementation of a database. In summary, PostgreSQL fully supports schemas and multi-tenancy by strictly separating databases from each other, and thus lacks some functionality MySQL has with databases, while MySQL does not even attempt to support standard schemas.
Oracle has its own spin, where creating a user is synonymous with creating a schema. Thus a database administrator can create a user called PROJECT and then create a table PROJECT.TABLE. Users can exist without schema objects, but an object is always associated with an owner (though that owner may not have privileges to connect to the database). With the 'shared-everything' Oracle RAC architecture, the same database can be opened by multiple servers concurrently. This is independent of replication, which can also be used, whereby the data is copied for use by different servers. In the Oracle implementation, a 'database' is a set of files which contains the data, while the 'instance' is a set of processes (and memory) through which a database is accessed.
Informix supports multiple databases in a server instance, like MySQL. It supports the CREATE SCHEMA syntax as a way to group DDL statements into a single unit, creating all objects created as part of the schema under a single owner. Informix supports a database mode called ANSI mode which supports creating objects with the same name but owned by different users.
PostgreSQL and some other databases support foreign schemas: the ability to import schemas from other servers as defined in ISO/IEC 9075-9 (published as part of SQL:2008). This appears like any other schema in the database according to the SQL specification while accessing data stored either in a different database or a different server instance. The import can be made either as an entire foreign schema or merely as certain tables belonging to that foreign schema.[189] While support for ISO/IEC 9075-9 bridges the gap between the two competing philosophies surrounding schemas, MySQL and Informix maintain an implicit association between databases, while ISO/IEC 9075-9 requires that any such linkages be explicit in nature.
|
https://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems
|
The database schema is the structure of a database described in a formal language supported typically by a relational database management system (RDBMS). The term "schema" refers to the organization of data as a blueprint of how the database is constructed (divided into database tables in the case of relational databases). The formal definition of a database schema is a set of formulas (sentences) called integrity constraints imposed on a database. These integrity constraints ensure compatibility between parts of the schema. All constraints are expressible in the same language. A database can be considered a structure in realization of the database language.[1] The states of a created conceptual schema are transformed into an explicit mapping, the database schema. This describes how real-world entities are modeled in the database.
"A database schema specifies, based on the database administrator's knowledge of possible applications, the facts that can enter the database, or those of interest to the possible end-users."[2] The notion of a database schema plays the same role as the notion of theory in predicate calculus. A model of this "theory" closely corresponds to a database, which can be seen at any instant of time as a mathematical object. Thus a schema can contain formulas representing integrity constraints specifically for an application and the constraints specifically for a type of database, all expressed in the same database language.[1] In a relational database, the schema defines the tables, fields, relationships, views, indexes, packages, procedures, functions, queues, triggers, types, sequences, materialized views, synonyms, database links, directories, XML schemas, and other elements.
A database generally stores its schema in a data dictionary. Although a schema is defined in a text database language, the term is often used to refer to a graphical depiction of the database structure. In other words, the schema is the structure of the database that defines the objects in the database.
In an Oracle Database system, the term "schema" has a slightly different connotation.
The requirements listed below influence the detailed structure of schemas that are produced. Certain applications will not require that all of these conditions be met, but these four requirements are the ideal.
Suppose we want a mediated schema to integrate two travel databases, Go-travel and Ok-flight.
Go-travel has two relations:
Ok-flight has just one relation:
The overlapping information in Go-travel’s and Ok-flight’s schemas could be represented in a mediated schema:[3]
In the context of Oracle Databases, a schema object is a logical data storage structure.[4]
An Oracle database associates a separate schema with each database user.[5] A schema comprises a collection of schema objects. Examples of schema objects include:
On the other hand, non-schema objects may include:[6]
Schema objects do not have a one-to-one correspondence to physical files on disk that store their information. However, Oracle databases store schema objects logically within a tablespace of the database. The data of each object is physically contained in one or more of the tablespace's datafiles. For some objects (such as tables, indexes, and clusters) a database administrator can specify how much disk space the Oracle RDBMS allocates for the object within the tablespace's datafiles.
There is no necessary relationship between schemas and tablespaces: a tablespace can contain objects from different schemas, and the objects for a single schema can reside in different tablespaces. Oracle database specificity does, however, enforce platform recognition of nonhomogenized sequence differentials, which is considered a crucial limiting factor in virtualized applications.[7]
In Microsoft SQL Server, the default schema of every database is the dbo schema.[8]
|
https://en.wikipedia.org/wiki/Database_schema
|
Datalog is a declarative logic programming language. While it is syntactically a subset of Prolog, Datalog generally uses a bottom-up rather than top-down evaluation model. This difference yields significantly different behavior and properties from Prolog. It is often used as a query language for deductive databases. Datalog has been applied to problems in data integration, networking, program analysis, and more.
A Datalog program consists of facts, which are statements that are held to be true, and rules, which say how to deduce new facts from known facts. For example, here are two facts that mean xerces is a parent of brooke and brooke is a parent of damocles:
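In conventional Datalog syntax these facts read:

```prolog
parent(xerces, brooke).
parent(brooke, damocles).
```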
The names are written in lowercase because strings beginning with an uppercase letter stand for variables. Here are two rules:
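A conventional formulation of two such rules, defining ancestor in terms of parent (a standard textbook sketch):

```prolog
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
```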
The :- symbol is read as "if", and the comma is read "and", so these rules mean:
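Spelled out as a paraphrase of the standard ancestor rules:

```
If X is a parent of Y, then X is an ancestor of Y.
If X is a parent of Z, and Z is an ancestor of Y, then X is an ancestor of Y.
```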
The meaning of a program is defined to be the set of all of the facts that can be deduced using the initial facts and the rules. This program's meaning is given by the following facts:
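For the parent/ancestor program above, that set is:

```prolog
parent(xerces, brooke).
parent(brooke, damocles).
ancestor(xerces, brooke).
ancestor(brooke, damocles).
ancestor(xerces, damocles).
```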
Some Datalog implementations don't deduce all possible facts, but instead answer queries:
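For example, a query asking which individuals xerces is an ancestor of might be written:

```prolog
?- ancestor(xerces, X).
```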
This query asks: who are all the X that xerces is an ancestor of? For this example, it would return brooke and damocles.
The non-recursive subset of Datalog is closely related to query languages for relational databases, such as SQL. The following table maps between Datalog, relational algebra, and SQL concepts:
More formally, non-recursive Datalog corresponds precisely to unions of conjunctive queries, or equivalently, negation-free relational algebra.
A Datalog program consists of a list of rules (Horn clauses).[1] If constant and variable are two countable sets of constants and variables respectively, and relation is a countable set of predicate symbols, then the following BNF grammar expresses the structure of a Datalog program:
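One way to write such a grammar (a sketch; the nonterminal names are illustrative):

```
<program>   ::= <rule> <program> | ""
<rule>      ::= <atom> ":-" <atom-list> "."
<atom>      ::= <relation> "(" <term-list> ")"
<atom-list> ::= <atom> | <atom> "," <atom-list> | ""
<term>      ::= <constant> | <variable>
<term-list> ::= <term> | <term> "," <term-list> | ""
```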
Atoms are also referred to as literals. The atom to the left of the :- symbol is called the head of the rule; the atoms to the right are the body. Every Datalog program must satisfy the condition that every variable that appears in the head of a rule also appears in the body (this condition is sometimes called the range restriction).[1][2]
There are two common conventions for variable names: capitalizing variables, or prefixing them with a question mark ?.[3]
Note that under this definition, Datalog does not include negation or aggregates; see § Extensions for more information about those constructs.
Rules with empty bodies are called facts. For example, the following rule is a fact:
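For example (a sketch following the grammar above, with an explicitly empty body):

```prolog
parent(xerces, brooke) :- .
```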
The set of facts is called the extensional database or EDB of the Datalog program. The set of tuples computed by evaluating the Datalog program is called the intensional database or IDB.
Many implementations of logic programming extend the above grammar to allow writing facts without the :-, like so:
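For example, a fact can then be written simply as:

```prolog
parent(xerces, brooke).
```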
Some also allow writing 0-ary relations without parentheses, like so:
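For example, a hypothetical 0-ary relation rains could then be written as:

```prolog
rains.
```

instead of rains().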
These are merely abbreviations (syntactic sugar); they have no impact on the semantics of the program.
Program:
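As a concrete stand-in, a small program reusing the parent/ancestor relations described earlier can serve as the running example for the semantics below:

```prolog
parent(xerces, brooke).
parent(brooke, damocles).
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
```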
There are three widely used approaches to the semantics of Datalog programs: model-theoretic, fixed-point, and proof-theoretic. These three approaches can be proven equivalent.[4]
An atom is called ground if none of its subterms are variables. Intuitively, each of the semantics defines the meaning of a program to be the set of all ground atoms that can be deduced from the rules of the program, starting from the facts.
A rule is called ground if all of its atoms (head and body) are ground. A ground rule R1 is a ground instance of another rule R2 if R1 is the result of a substitution of constants for all the variables in R2. The Herbrand base of a Datalog program is the set of all ground atoms that can be made with the constants appearing in the program. The Herbrand model of a Datalog program is the smallest subset of the Herbrand base such that, for each ground instance of each rule in the program, if the atoms in the body of the rule are in the set, then so is the head.[5] The model-theoretic semantics define the minimal Herbrand model to be the meaning of the program.
Let I be the power set of the Herbrand base of a program P. The immediate consequence operator for P is a map T from I to I that adds all of the new ground atoms that can be derived from the rules of the program in a single step. The least-fixed-point semantics define the least fixed point of T to be the meaning of the program; this coincides with the minimal Herbrand model.[6]
The fixpoint semantics suggest an algorithm for computing the minimal model: start with the set of ground facts in the program, then repeatedly add consequences of the rules until a fixpoint is reached. This algorithm is called naïve evaluation.
The proof-theoretic semantics defines the meaning of a Datalog program to be the set of facts with corresponding proof trees. Intuitively, a proof tree shows how to derive a fact from the facts and rules of a program.
One might be interested in knowing whether or not a particular ground atom appears in the minimal Herbrand model of a Datalog program, perhaps without caring much about the rest of the model. A top-down reading of the proof trees described above suggests an algorithm for computing the results of such queries. This reading informs the SLD resolution algorithm, which forms the basis for the evaluation of Prolog.
There are many different ways to evaluate a Datalog program, with different performance characteristics.
Bottom-up evaluation strategies start with the facts in the program and repeatedly apply the rules until either some goal or query is established, or until the complete minimal model of the program is produced.
Naïve evaluation mirrors the fixpoint semantics for Datalog programs. Naïve evaluation uses a set of "known facts", which is initialized to the facts in the program. It proceeds by repeatedly enumerating all ground instances of each rule in the program. If each atom in the body of the ground instance is in the set of known facts, then the head atom is added to the set of known facts. This process is repeated until a fixed point is reached, and no more facts may be deduced. Naïve evaluation produces the entire minimal model of the program.[7]
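The loop is easy to prototype. Below is a minimal Python sketch (not a real engine: atoms are tuples, variables are capitalized strings, and performance is ignored), run on the parent/ancestor example from earlier:

```python
# Naïve bottom-up evaluation of a Datalog program (a toy sketch).
# A fact is a tuple like ("parent", "xerces", "brooke"); a rule is a pair
# (head_atom, body_atoms), with capitalized strings acting as variables.

def is_var(term):
    return term[0].isupper()

def match(atom, fact, subst):
    """Try to unify one body atom with a ground fact, extending subst."""
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    subst = dict(subst)
    for a, f in zip(atom[1:], fact[1:]):
        if is_var(a):
            if subst.setdefault(a, f) != f:
                return None        # variable already bound to something else
        elif a != f:
            return None            # constant mismatch
    return subst

def naive_eval(facts, rules):
    known = set(facts)
    while True:
        new = set()
        for head, body in rules:
            # Enumerate all substitutions that ground the body in `known`.
            substs = [{}]
            for atom in body:
                substs = [s2 for s in substs for f in known
                          if (s2 := match(atom, f, s)) is not None]
            for s in substs:
                new.add((head[0],) + tuple(
                    s[t] if is_var(t) else t for t in head[1:]))
        if new <= known:           # fixed point reached
            return known
        known |= new

facts = {("parent", "xerces", "brooke"), ("parent", "brooke", "damocles")}
rules = [
    (("ancestor", "X", "Y"), [("parent", "X", "Y")]),
    (("ancestor", "X", "Y"), [("parent", "X", "Z"), ("ancestor", "Z", "Y")]),
]
model = naive_eval(facts, rules)
```

The result contains the two parent facts plus the three ancestor facts, i.e. the entire minimal model.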
Semi-naïve evaluation is a bottom-up evaluation strategy that can be asymptotically faster than naïve evaluation.[8]
Naïve and semi-naïve evaluation both evaluate recursive Datalog rules by repeatedly applying them to a set of known facts until a fixed point is reached. In each iteration, rules are only run for "one step", i.e., non-recursively. As mentioned above, each non-recursive Datalog rule corresponds precisely to a conjunctive query. Therefore, many of the techniques from database theory used to speed up conjunctive queries are applicable to bottom-up evaluation of Datalog, such as
Many such techniques are implemented in modern bottom-up Datalog engines such as Soufflé. Some Datalog engines integrate SQL databases directly.[17]
Bottom-up evaluation of Datalog is also amenable to parallelization. Parallel Datalog engines are generally divided into two paradigms:
SLD resolution is sound and complete for Datalog programs.
Top-down evaluation strategies begin with a query or goal. Bottom-up evaluation strategies can answer queries by computing the entire minimal model and matching the query against it, but this can be inefficient if the answer only depends on a small subset of the entire model. The magic sets algorithm takes a Datalog program and a query, and produces a more efficient program that computes the same answer to the query while still using bottom-up evaluation.[23] A variant of the magic sets algorithm has been shown to produce programs that, when evaluated using semi-naïve evaluation, are as efficient as top-down evaluation.[24]
The decision problem formulation of Datalog evaluation is as follows: given a Datalog program P split into a set of facts (EDB) E and a set of rules R, and a ground atom A, is A in the minimal model of P? In this formulation, there are three variations of the computational complexity of evaluating Datalog programs:[25]
With respect to data complexity, the decision problem for Datalog is P-complete (see Theorem 4.4 in [25]). P-completeness for data complexity means that there exists a fixed Datalog query for which evaluation is P-complete. The proof is based on a Datalog metainterpreter for propositional logic programs.
With respect to program complexity, the decision problem is EXPTIME-complete. In particular, evaluating Datalog programs always terminates; Datalog is not Turing-complete.
Some extensions to Datalog do not preserve these complexity bounds. Extensions implemented in some Datalog engines, such as algebraic data types, can even make the resulting language Turing-complete.
Several extensions have been made to Datalog, e.g., to support negation and aggregate functions, to add inequalities, to allow object-oriented programming, or to allow disjunctions as heads of clauses. These extensions have significant impacts on the language's semantics and on the implementation of a corresponding interpreter.
Datalog is a syntactic subset of Prolog, disjunctive Datalog, answer set programming, DatalogZ, and constraint logic programming. When evaluated as an answer set program, a Datalog program yields a single answer set, which is exactly its minimal model.[26]
Many implementations of Datalog extend Datalog with additional features; see § Datalog engines for more information.
Datalog can be extended to support aggregate functions.[27]
Notable Datalog engines that implement aggregation include:
Adding negation to Datalog complicates its semantics, leading to whole new languages and strategies for evaluation. For example, the language that results from adding negation with the stable model semantics is exactly answer set programming.
Stratified negation can be added to Datalog while retaining its model-theoretic and fixed-point semantics. Notable Datalog engines that implement stratified negation include:
Unlike in Prolog, statements of a Datalog program can be stated in any order. Datalog does not have Prolog's cut operator. This makes Datalog a fully declarative language.
In contrast to Prolog, Datalog
This article deals primarily with Datalog without negation (see also Syntax and semantics of logic programming § Extending Datalog with negation). However, stratified negation is a common addition to Datalog; the following list contrasts Prolog with Datalog with stratified negation. Datalog with stratified negation
Datalog generalizes many other query languages. For instance, conjunctive queries and unions of conjunctive queries can be expressed in Datalog. Datalog can also express regular path queries.
When we consider ordered databases, i.e., databases with an order relation on their active domain, the Immerman–Vardi theorem implies that the expressive power of Datalog is precisely that of the class PTIME: a property can be expressed in Datalog if and only if it is computable in polynomial time.[31]
The boundedness problem for Datalog asks, given a Datalog program, whether it is bounded, i.e., whether the maximal recursion depth reached when evaluating the program on an input database can be bounded by some constant. In other words, this question asks whether the Datalog program could be rewritten as a nonrecursive Datalog program, or, equivalently, as a union of conjunctive queries. Solving the boundedness problem on arbitrary Datalog programs is undecidable,[32] but it can be made decidable by restricting to some fragments of Datalog.
Systems that implement languages inspired by Datalog, whether compilers, interpreters, libraries, or embedded DSLs, are referred to as Datalog engines. Datalog engines often implement extensions of Datalog, extending it with additional data types, foreign function interfaces, or support for user-defined lattices. Such extensions may allow for writing non-terminating or otherwise ill-defined programs.
Here is a short list of systems that are either based on Datalog or provide a Datalog interpreter:
Datalog is quite limited in its expressivity. It is not Turing-complete, and doesn't include basic data types such as integers or strings. This parsimony is appealing from a theoretical standpoint, but it means Datalog per se is rarely used as a programming language or knowledge representation language.[41] Most Datalog engines implement substantial extensions of Datalog. However, Datalog has a strong influence on such implementations, and many authors don't bother to distinguish them from Datalog as presented in this article. Accordingly, the applications discussed in this section include applications of realistic implementations of Datalog-based languages.
Datalog has been applied to problems in data integration, information extraction, networking, security, cloud computing and machine learning.[42][43] Google has developed an extension to Datalog for big data processing.[44]
Datalog has seen application in static program analysis.[45] The Soufflé dialect has been used to write pointer analyses for Java and a control-flow analysis for Scheme.[46][47] Datalog has been integrated with SMT solvers to make it easier to write certain static analyses.[48] The Flix dialect is also suited to writing static program analyses.[49]
Some widely used database systems include ideas and algorithms developed for Datalog. For example, the SQL:1999 standard includes recursive queries, and the Magic Sets algorithm (initially developed for the faster evaluation of Datalog queries) is implemented in IBM's DB2.[50]
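The correspondence between Datalog recursion and SQL:1999 recursive queries can be sketched with SQLite, which supports `WITH RECURSIVE`, driven here through Python's standard `sqlite3` module. The table and column names are invented for the example.

```python
# The SQL:1999-style recursive query corresponding to the Datalog rules
#   path(X,Y) :- edge(X,Y).
#   path(X,Y) :- path(X,Z), edge(Z,Y).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edge(src TEXT, dst TEXT)")
conn.executemany("INSERT INTO edge VALUES (?, ?)",
                 [("a", "b"), ("b", "c"), ("c", "d")])

rows = conn.execute("""
    WITH RECURSIVE path(src, dst) AS (
        SELECT src, dst FROM edge                 -- base case
        UNION
        SELECT p.src, e.dst                       -- recursive step
        FROM path p JOIN edge e ON p.dst = e.src
    )
    SELECT src, dst FROM path ORDER BY src, dst
""").fetchall()
```

The `UNION` (rather than `UNION ALL`) deduplicates derived rows, mirroring the set semantics of Datalog facts and guaranteeing termination on cyclic graphs.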
The origins of Datalog date back to the beginning of logic programming, but it became prominent as a separate area around 1977 when Hervé Gallaire and Jack Minker organized a workshop on logic and databases.[51] David Maier is credited with coining the term Datalog.[52]
https://en.wikipedia.org/wiki/Datalog
In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis and is a core component of business intelligence.[1] Data warehouses are central repositories of data integrated from disparate sources. They store current and historical data organized in a way that is optimized for data analysis, generation of reports, and developing insights across the integrated data.[2] They are intended to be used by analysts and managers to help make organizational decisions.[3]
The data stored in the warehouse is uploaded from operational systems (such as marketing or sales). The data may pass through an operational data store and may require data cleansing for additional operations to ensure data quality before it is used in the data warehouse for reporting.
The two main workflows for building a data warehouse system are extract, transform, load (ETL) and extract, load, transform (ELT).
The environment for data warehouses and marts includes the following:
Operational databases are optimized for the preservation of data integrity and speed of recording of business transactions through use of database normalization and an entity–relationship model. Operational system designers generally follow Codd's 12 rules of database normalization to ensure data integrity. Fully normalized database designs (that is, those satisfying all Codd rules) often result in information from a business transaction being stored in dozens to hundreds of tables. Relational databases are efficient at managing the relationships between these tables. The databases have very fast insert/update performance because only a small amount of data in those tables is affected by each transaction. To improve performance, older data are periodically purged.
Data warehouses are optimized for analytic access patterns, which usually involve selecting specific fields rather than all fields as is common in operational databases. Because of these differences in access, operational databases (loosely, OLTP) benefit from the use of a row-oriented database management system (DBMS), whereas analytics databases (loosely, OLAP) benefit from the use of a column-oriented DBMS. Operational systems maintain a snapshot of the business, while warehouses maintain historic data through ETL processes that periodically migrate data from the operational systems to the warehouse.
Online analytical processing (OLAP) is characterized by a low rate of transactions and complex queries that involve aggregations. Response time is an effective performance measure of OLAP systems. OLAP applications are widely used for data mining. OLAP databases store aggregated, historical data in multi-dimensional schemas (usually star schemas). OLAP systems typically have a data latency of a few hours, while data mart latency is closer to one day. The OLAP approach is used to analyze multidimensional data from multiple sources and perspectives. The three basic operations in OLAP are roll-up (consolidation), drill-down, and slicing and dicing.
Online transaction processing (OLTP) is characterized by a large number of short online transactions (INSERT, UPDATE, DELETE). OLTP systems emphasize fast query processing and maintaining data integrity in multi-access environments. For OLTP systems, performance is measured by the number of transactions per second. OLTP databases contain detailed and current data. The schema used to store transactional databases is the entity model (usually 3NF). Normalization is the norm for data modeling techniques in this system.
Predictive analytics is about finding and quantifying hidden patterns in the data using complex mathematical models to prepare for different future outcomes, including demand for products, and to make better decisions. By contrast, OLAP focuses on historical data analysis and is reactive. Predictive systems are also used for customer relationship management (CRM).
A data mart is a simple data warehouse focused on a single subject or functional area. Hence it draws data from a limited number of sources such as sales, finance or marketing. Data marts are often built and controlled by a single department in an organization. The sources could be internal operational systems, a central data warehouse, or external data.[4] As with warehouses, stored data is usually not normalized.
Types of data marts include dependent, independent, and hybrid data marts.
The typical extract, transform, load (ETL)-based data warehouse uses staging, data integration, and access layers to house its key functions. The staging layer or staging database stores raw data extracted from each of the disparate source data systems. The integration layer integrates the disparate data sets by transforming the data from the staging layer, often storing this transformed data in an operational data store (ODS) database. The integrated data are then moved to yet another database, often called the data warehouse database, where the data is arranged into hierarchical groups, often called dimensions, and into facts and aggregate facts. The combination of facts and dimensions is sometimes called a star schema. The access layer helps users retrieve data.[5]
The main source of the data is cleansed, transformed, catalogued, and made available for use by managers and other business professionals for data mining, online analytical processing, market research and decision support.[6] However, the means to retrieve and analyze data, to extract, transform, and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition of data warehousing includes business intelligence tools, tools to extract, transform, and load data into the repository, and tools to manage and retrieve metadata.
ELT-based data warehousing does away with a separate ETL tool for data transformation. Instead, it maintains a staging area inside the data warehouse itself. In this approach, data is extracted from heterogeneous source systems and loaded directly into the data warehouse, before any transformation occurs. All necessary transformations are then handled inside the data warehouse itself, and the transformed data is finally loaded into target tables in the same data warehouse.
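A minimal sketch of the ELT workflow, using an in-memory SQLite database as a stand-in warehouse. All table names, column names, and data are invented for illustration.

```python
# ELT sketch: raw rows are extracted and loaded untransformed into a
# staging table, then the transformation runs inside the database itself.
import sqlite3

wh = sqlite3.connect(":memory:")

# Extract + Load: raw source rows land in staging as-is (untyped strings).
wh.execute("CREATE TABLE staging_sales(sale_date TEXT, amount TEXT)")
wh.executemany("INSERT INTO staging_sales VALUES (?, ?)",
               [("2024-01-05", "19.99"), ("2024-01-05", " 5.01"),
                ("2024-02-10", "10.00")])

# Transform: cleansing and typing happen inside the warehouse, via SQL.
wh.execute("""
    CREATE TABLE sales AS
    SELECT sale_date, CAST(TRIM(amount) AS REAL) AS amount
    FROM staging_sales
""")
monthly = wh.execute("""
    SELECT substr(sale_date, 1, 7) AS month, SUM(amount)
    FROM sales GROUP BY month ORDER BY month
""").fetchall()
```

In an ETL pipeline the `TRIM`/`CAST` cleanup would instead run in an external tool before the load step; in ELT it is just another query against the staged data.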
A data warehouse maintains a copy of information from the source transaction systems. This architectural complexity provides the opportunity to:
The concept of data warehousing dates back to the late 1980s[7] when IBM researchers Barry Devlin and Paul Murphy developed the "business data warehouse". In essence, the data warehousing concept was intended to provide an architectural model for the flow of data from operational systems to decision support environments. The concept attempted to address the various problems associated with this flow, mainly the high costs associated with it. In the absence of a data warehousing architecture, an enormous amount of redundancy was required to support multiple decision support environments. In larger corporations, it was typical for multiple decision support environments to operate independently. Though each environment served different users, they often required much of the same stored data. The process of gathering, cleaning and integrating data from various sources, usually from long-term existing operational systems (usually referred to as legacy systems), was typically in part replicated for each environment. Moreover, the operational systems were frequently reexamined as new decision support requirements emerged. Often new requirements necessitated gathering, cleaning and integrating new data from "data marts" that was tailored for ready access by users.
Additionally, with the publication of The IRM Imperative (Wiley & Sons, 1991) by James M. Kerr, the idea of managing and putting a dollar value on an organization's data resources and then reporting that value as an asset on a balance sheet became popular. In the book, Kerr described a way to populate subject-area databases from data derived from transaction-driven systems to create a storage area where summary data could be further leveraged to inform executive decision-making. This concept served to promote further thinking of how a data warehouse could be developed and managed in a practical way within any enterprise.
Key developments in early years of data warehousing:
A fact is a value or measurement in the system being managed.
Raw facts are ones reported by the reporting entity. For example, in a mobile telephone system, if a base transceiver station (BTS) receives 1,000 requests for traffic channel allocation, allocates channels for 820, and rejects the rest, it could report three facts to a management system: 1,000 allocation requests received, 820 channels allocated, and 180 requests rejected.
Raw facts are aggregated to higher levels in various dimensions to extract information more relevant to the service or business. These are called aggregated facts or summaries.
For example, if there are three BTSs in a city, then the facts above can be aggregated from the BTS to the city level in the network dimension, yielding city-wide totals of requests, allocations and rejections.
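The aggregation described above can be sketched in Python; the per-station figures for the second and third BTS are invented.

```python
# Rolling raw per-BTS facts up to the city level in the network dimension.
raw_facts = [
    {"bts": "BTS-1", "requests": 1000, "allocated": 820, "rejected": 180},
    {"bts": "BTS-2", "requests": 500,  "allocated": 450, "rejected": 50},
    {"bts": "BTS-3", "requests": 700,  "allocated": 600, "rejected": 100},
]

def roll_up(facts, measures):
    """Aggregate (sum) the given measures across all facts."""
    return {m: sum(f[m] for f in facts) for m in measures}

city_summary = roll_up(raw_facts, ["requests", "allocated", "rejected"])
```

The same pattern generalizes to other dimensions (time, service type) by grouping the facts before summing.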
The two most important approaches to store data in a warehouse are dimensional and normalized. The dimensional approach uses a star schema, as proposed by Ralph Kimball. The normalized approach, also called the third normal form (3NF), is an entity-relational normalized model proposed by Bill Inmon.[21]
In a dimensional approach, transaction data is partitioned into "facts", which are usually numeric transaction data, and "dimensions", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the total price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and salesperson responsible for receiving the order.
This dimensional approach makes data easier to understand and speeds up data retrieval.[15] Dimensional structures are easy for business users to understand because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, and dimensions are the context about them (Kimball, Ralph 2008). Another advantage is that the dimensional model does not require a fully normalized relational design, which makes this type of modeling technique very useful for end-user queries in a data warehouse.
The model of facts and dimensions can also be understood as a data cube,[22] where dimensions are the categorical coordinates in a multi-dimensional cube, and the fact is the value corresponding to the coordinates.
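A minimal star-schema sketch using Python's stdlib `sqlite3`, with an invented product dimension and sales fact table; summing a measure grouped by a dimension attribute computes one face of the cube.

```python
# A tiny star schema: a fact table of sales keyed to a product dimension,
# queried as a "data cube" face via GROUP BY. Schema and data are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE dim_product(product_id INTEGER PRIMARY KEY, name TEXT,
                             category TEXT);
    CREATE TABLE fact_sales(product_id INTEGER, sale_date TEXT,
                            units INTEGER, revenue REAL);
""")
db.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
               [(1, "widget", "hardware"), (2, "gizmo", "hardware"),
                (3, "manual", "books")])
db.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
               [(1, "2024-01-02", 3, 30.0), (2, "2024-01-03", 1, 25.0),
                (3, "2024-01-03", 2, 18.0)])

# One face of the cube: total units and revenue per product category.
by_category = db.execute("""
    SELECT d.category, SUM(f.units), SUM(f.revenue)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
```

Adding a date or store dimension table and extending the `GROUP BY` yields other faces of the same cube.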
The main disadvantages of the dimensional approach are:
In the normalized approach, the data in the warehouse are stored following, to a degree, database normalization rules. Normalized relational database tables are grouped into subject areas (for example, customers, products and finance). When used in large enterprises, the result is dozens of tables linked by a web of joins (Kimball, Ralph 2008).
The main advantage of this approach is that it is straightforward to add information into the database. Disadvantages include that, because of the large number of tables, it can be difficult for users to join data from different sources into meaningful information and to access the information without a precise understanding of the data sources and of the data structure of the data warehouse.
Both normalized and dimensional models can be represented in entity–relationship diagrams because both contain joined relational tables. The difference between them is the degree of normalization. These approaches are not mutually exclusive, and there are other approaches. Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008).
In Information-Driven Business,[23] Robert Hillard compares the two approaches based on the information needs of the business problem. He concludes that normalized models hold far more information than their dimensional equivalents (even when the same fields are used in both models) but at the cost of usability. The technique measures information quantity in terms of information entropy and usability in terms of the Small Worlds data transformation measure.[24]
In the bottom-up approach, data marts are first created to provide reporting and analytical capabilities for specific business processes. These data marts can then be integrated to create a comprehensive data warehouse. The data warehouse bus architecture is primarily an implementation of "the bus", a collection of conformed dimensions and conformed facts, which are dimensions that are shared (in a specific way) between facts in two or more data marts.[25]
The top-down approach is designed using a normalized enterprise data model. "Atomic" data, that is, data at the greatest level of detail, are stored in the data warehouse. Dimensional data marts containing data needed for specific business processes or specific departments are created from the data warehouse.[26]
Data warehouses often resemble the hub and spokes architecture. Legacy systems feeding the warehouse often include customer relationship management and enterprise resource planning, generating large amounts of data. To consolidate these various data models, and facilitate the extract transform load process, data warehouses often make use of an operational data store, the information from which is parsed into the actual data warehouse. To reduce data redundancy, larger systems often store the data in a normalized way. Data marts for specific reports can then be built on top of the data warehouse.
A hybrid (also called ensemble) data warehouse database is kept in third normal form to eliminate data redundancy. A normal relational database, however, is not efficient for business intelligence reports where dimensional modelling is prevalent. Small data marts can shop for data from the consolidated warehouse and use the filtered, specific data for the fact tables and dimensions required. The data warehouse provides a single source of information from which the data marts can read, providing a wide range of business information. The hybrid architecture allows a data warehouse to be replaced with a master data management repository where operational (not static) information could reside.
The data vault modeling components follow hub and spokes architecture. This modeling style is a hybrid design, consisting of the best practices from both third normal form and star schema. The data vault model is not a true third normal form, and breaks some of its rules, but it is a top-down architecture with a bottom-up design. The data vault model is geared to be strictly a data warehouse. It is not geared to be end-user accessible and, when built, still requires the use of a data mart or star schema-based release area for business purposes.
Basic features that define the data in a data warehouse include subject orientation, data integration, time-variance, nonvolatility, and data granularity.
Unlike the operational systems, the data in the data warehouse revolves around the subjects of the enterprise. Subject orientation is not database normalization. Subject orientation can be very useful for decision-making.
Gathering the required objects into a single subject area is what makes the data subject-oriented.
The data found within the data warehouse is integrated. Since it comes from several operational systems, all inconsistencies must be removed, in areas such as naming conventions, measurement of variables, encoding structures, and physical attributes of data.
While operational systems reflect current values as they support day-to-day operations, data warehouse data represents a long time horizon (up to 10 years) which means it stores mostly historical data. It is mainly meant for data mining and forecasting. (E.g. if a user is searching for a buying pattern of a specific customer, the user needs to look at data on the current and past purchases.)[27]
The data in the data warehouse is read-only, which means it cannot be updated, created, or deleted (unless there is a regulatory or statutory obligation to do so).[28]
In the data warehouse process, data can be aggregated in data marts at different levels of abstraction. The user may start looking at the total sale units of a product in an entire region. Then the user looks at the states in that region. Finally, they may examine the individual stores in a certain state. Therefore, typically, the analysis starts at a higher level and drills down to lower levels of details.[27]
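The drill-down described above can be sketched as repeated aggregation over a location hierarchy; the data is invented for illustration.

```python
# The same sales facts viewed at region, state, and store granularity.
sales = [
    {"region": "West", "state": "CA", "store": "SF-1",  "units": 10},
    {"region": "West", "state": "CA", "store": "LA-1",  "units": 7},
    {"region": "West", "state": "WA", "store": "SEA-1", "units": 5},
]

def totals(rows, level):
    """Aggregate unit totals at one level of the location hierarchy."""
    out = {}
    for r in rows:
        out[r[level]] = out.get(r[level], 0) + r["units"]
    return out

region_view = totals(sales, "region")   # highest level of abstraction
state_view = totals(sales, "state")     # drill down one level
store_view = totals(sales, "store")     # finest granularity
```

Each drill-down step partitions the same underlying facts more finely; the totals at any level sum to the totals of the level above it.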
With data virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources, creating a virtual data warehouse. This can aid in resolving some technical difficulties such as compatibility problems when combining data from various platforms, lowering the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. However, with data virtualization, the connection to all necessary data sources must be operational as there is no local copy of the data, which is one of the main drawbacks of the approach.[29]
The different methods used to construct/organize a data warehouse specified by an organization are numerous. The hardware utilized, software created and data resources specifically required for the correct functionality of a data warehouse are the main components of the data warehouse architecture. All data warehouses have multiple phases in which the requirements of the organization are modified and fine-tuned.[30]
These terms refer to the level of sophistication of a data warehouse:
In the healthcare sector, data warehouses are critical components of health informatics, enabling the integration, storage, and analysis of large volumes of clinical, administrative, and operational data. These systems consolidate information from disparate sources such as electronic health records (EHRs), laboratory information systems, picture archiving and communication systems (PACS), and medical billing platforms. By centralizing data, healthcare data warehouses support a range of functions including population health, clinical decision support, quality improvement, public health surveillance, and medical research.
Healthcare data warehouses often incorporate specialized data models that account for the complexity and sensitivity of medical data, such as temporal information (e.g., longitudinal patient histories), coded terminologies (e.g., ICD-10, SNOMED CT), and compliance with privacy regulations (e.g., HIPAA in the United States or GDPR in the European Union).
Following is a list of major patient data warehouses with broad scope (not disease- or specialty-specific), with variables including laboratory results, pharmacy, age, race, socioeconomic status, comorbidities and longitudinal changes:
These warehouses enable data-driven healthcare by supporting retrospective studies, comparative effectiveness research, and predictive analytics, often with the use of healthcare-applied artificial intelligence.
https://en.wikipedia.org/wiki/Data_warehouse
This is a list of relational database management systems.
https://en.wikipedia.org/wiki/List_of_relational_database_management_systems
An object database or object-oriented database is a database management system in which information is represented in the form of objects, as used in object-oriented programming. Object databases are different from relational databases, which are table-oriented. A third type, object–relational databases, is a hybrid of both approaches.
Object databases have been considered since the early 1980s.[2]
Object-oriented database management systems (OODBMSs), also called object database management systems (ODBMSs), combine database capabilities with object-oriented programming language capabilities.
OODBMSs allow object-oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the OODBMS. Because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the OODBMS and the programming language will use the same model of representation. Relational DBMS projects, by way of contrast, maintain a clearer division between the database model and the application.
As the usage of web-based technology increases with the implementation of intranets and extranets, companies have a vested interest in OODBMSs to display their complex data. Using a DBMS that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer-aided design (CAD).[3]
Some object-oriented databases are designed to work well with object-oriented programming languages such as Delphi, Ruby, Python, JavaScript, Perl, Java, C#, Visual Basic .NET, C++, Objective-C and Smalltalk; others such as JADE have their own programming languages. OODBMSs use exactly the same model as object-oriented programming languages.
Object database management systems grew out of research during the early to mid-1970s into having intrinsic database management support for graph-structured objects. The term "object-oriented database system" first appeared around 1985.[4] Notable research projects included Encore-Ob/Server (Brown University), EXODUS (University of Wisconsin–Madison), IRIS (Hewlett-Packard), ODE (Bell Labs), ORION (Microelectronics and Computer Technology Corporation, or MCC), Vodak (GMD-IPSI), and Zeitgeist (Texas Instruments). The ORION project had more published papers than any of the other efforts. Won Kim of MCC compiled the best of those papers in a book published by The MIT Press.[5]
Early commercial products included GemStone (Servio Logic, name changed to GemStone Systems), Gbase (Graphael), and Vbase (Ontologic). Additional commercial products entered the market in the late 1980s through the mid 1990s. These included ITASCA (Itasca Systems), Jasmine (Fujitsu, marketed by Computer Associates), Matisse (Matisse Software), Objectivity/DB (Objectivity, Inc.), ObjectStore (Progress Software, acquired from eXcelon, which was originally Object Design, Incorporated), ONTOS (Ontos, Inc., name changed from Ontologic), O2[6] (O2 Technology, merged with several companies, acquired by Informix, which was in turn acquired by IBM), POET (now FastObjects from Versant, which acquired Poet Software), Versant Object Database (Versant Corporation), VOSS (Logic Arts) and JADE (Jade Software Corporation). Some of these products remain on the market and have been joined by new open source and commercial products such as InterSystems Caché.
Object database management systems added the concept of persistence to object programming languages. The early commercial products were integrated with various languages: GemStone (Smalltalk), Gbase (LISP), Vbase (COP) and VOSS (Virtual Object Storage System for Smalltalk). For much of the 1990s, C++ dominated the commercial object database management market. Vendors added Java in the late 1990s and, more recently, C#.
Starting in 2004, object databases saw a second growth period when open source object databases emerged that were widely affordable and easy to use, because they are entirely written in OOP languages like Smalltalk, Java, or C#. Examples include Versant's db4o (db4objects), DTS/S1 from Obsidian Dynamics, and Perst (McObject), available under dual open source and commercial licensing.
Object databases based on persistent programming acquired a niche in application areas such as engineering and spatial databases, telecommunications, and scientific areas such as high energy physics[13] and molecular biology.[14]
Another group of object databases focuses on embedded use in devices, packaged software, and real-time systems.
Most object databases also offer some kind of query language, allowing objects to be found using a declarative programming approach. It is in the area of object query languages, and the integration of the query and navigational interfaces, that the biggest differences between products are found. An attempt at standardization was made by the ODMG with the Object Query Language, OQL.
Access to data can be faster because an object can be retrieved directly, without a search, by following pointers.
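A sketch of this navigational access in Python, with invented class names: related objects are reached by following in-memory references rather than by joining tables on keys.

```python
# Navigational access as in an object database: a Customer object holds
# direct references ("pointers") to its Account objects, so retrieval
# follows references instead of performing a relational join.
class Account:
    def __init__(self, number, balance):
        self.number = number
        self.balance = balance

class Customer:
    def __init__(self, name):
        self.name = name
        self.accounts = []     # direct object references, not foreign keys

    def total_balance(self):
        # Navigate the references; no key lookup or join is needed.
        return sum(a.balance for a in self.accounts)

alice = Customer("Alice")
alice.accounts.append(Account("A-1", 120.0))
alice.accounts.append(Account("A-2", 80.0))
```

In a relational design the same retrieval would join a customers table to an accounts table on a customer key; here the relationship is stored as the reference itself.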
Another area of variation between products is in the way that the schema of a database is defined. A general characteristic, however, is that the programming language and the database schema use the same type definitions.
Multimedia applications are facilitated because the class methods associated with the data are responsible for its correct interpretation.
Many object databases, for example GemStone or VOSS, offer support for versioning. An object can be viewed as the set of all its versions. Also, object versions can be treated as objects in their own right. Some object databases also provide systematic support for triggers and constraints, which are the basis of active databases.
The efficiency of such a database is also greatly improved in areas which demand massive amounts of data about one item. For example, a banking institution could retrieve a user's account information and efficiently provide extensive related information such as transactions and account entries.
TheObject Data Management Groupwas a consortium of object database and object–relational mapping vendors, members of the academic community, and interested parties. Its goal was to create a set of specifications that would allow for portable applications that store objects in database management systems. It published several versions of its specification. The last release was ODMG 3.0. By 2001, most of the major object database and object–relational mapping vendors claimed conformance to the ODMG Java Language Binding. Compliance to the other components of the specification was mixed. In 2001, the ODMG Java Language Binding was submitted to theJava Community Processas a basis for theJava Data Objectsspecification. The ODMG member companies then decided to concentrate their efforts on the Java Data Objects specification. As a result, the ODMG disbanded in 2001.
Many object database ideas were also absorbed into SQL:1999 and have been implemented in varying degrees in object–relational database products.
In 2005 Cook, Rai, and Rosenberger proposed to drop all standardization efforts to introduce additional object-oriented query APIs, and instead use the OO programming language itself, i.e., Java and .NET, to express queries. As a result, Native Queries emerged. Similarly, Microsoft announced Language Integrated Query (LINQ) and DLINQ, an implementation of LINQ, in September 2005, to provide close, language-integrated database query capabilities with its programming languages C# and VB.NET 9.
In February 2006, the Object Management Group (OMG) announced that they had been granted the right to develop new specifications based on the ODMG 3.0 specification and the formation of the Object Database Technology Working Group (ODBT WG). The ODBT WG planned to create a set of standards that would incorporate advances in object database technology (e.g., replication), data management (e.g., spatial indexing), and data formats (e.g., XML) and to include new features into these standards that support domains where object databases are being adopted (e.g., real-time systems). The work of the ODBT WG was suspended in March 2009 when, subsequent to the economic turmoil in late 2008, the ODB vendors involved in this effort decided to focus their resources elsewhere.
In January 2007 the World Wide Web Consortium gave final recommendation status to the XQuery language. XQuery uses XML as its data model. Some of the ideas developed originally for object databases found their way into XQuery, but XQuery is not intrinsically object-oriented. Because of the popularity of XML, XQuery engines compete with object databases as a vehicle for storage of data that is too complex or variable to hold conveniently in a relational database. XQuery also allows modules to be written to provide encapsulation features that have been provided by object-oriented systems.
XQuery v1 and XPath v2 and later are powerful and are available in both open source and libre (FOSS) software[15][16][17] as well as in commercial systems. They are easy to learn and use, and very powerful and fast. They are not relational, and XQuery is not based on SQL (although one of the people who designed XQuery also co-invented SQL). But they are also not object-oriented in the programming sense: XQuery does not use encapsulation with hiding, implicit dispatch, or classes and methods. XQuery databases generally use XML and JSON as interchange formats, although other formats are used.
Since the early 2000s JSON has gained community adoption and popularity in applications where developers are in control of the data format. JSONiq, a query-analog of XQuery for JSON (sharing XQuery's core expressions and operations), demonstrated the functional equivalence of the JSON and XML formats for data-oriented information. In this context, the main strategy of OODBMS maintainers was to retrofit JSON to their databases (by using it as the internal data type).
In January 2016, with the PostgreSQL 9.5 release,[18] PostgreSQL became the first FOSS OODBMS to offer an efficient JSON internal data type (JSONB) with a complete set of functions and operations, for all basic relational and non-relational manipulations.
An object database stores complex data and relationships between data directly, without mapping to relational rows and columns, and this makes them suitable for applications dealing with very complex data.[19] Objects can have many-to-many relationships and are accessed by the use of pointers, which are linked to objects to establish relationships. Another benefit of an OODBMS is that it can be programmed with small procedural differences without affecting the entire system.[20]
https://en.wikipedia.org/wiki/Object_database
In computing, online analytical processing (OLAP) (/ˈoʊlæp/) is an approach to quickly answer multi-dimensional analytical (MDA) queries.[1] The term OLAP was created as a slight modification of the traditional database term online transaction processing (OLTP).[2] OLAP is part of the broader category of business intelligence, which also encompasses relational databases, report writing and data mining.[3] Typical applications of OLAP include business reporting for sales, marketing, management reporting, business process management (BPM),[4] budgeting and forecasting, financial reporting and similar areas, with new applications emerging, such as agriculture.[5]
OLAP tools enable users to analyse multidimensional data interactively from multiple perspectives. OLAP consists of three basic analytical operations: consolidation (roll-up), drill-down, and slicing and dicing.[6]: 402–403 Consolidation involves the aggregation of data that can be accumulated and computed in one or more dimensions. For example, all sales offices are rolled up to the sales department or sales division to anticipate sales trends. By contrast, drill-down is a technique that allows users to navigate through the details. For instance, users can view the sales by the individual products that make up a region's sales. Slicing and dicing is a feature whereby users can take out (slice) a specific set of data of the OLAP cube and view (dice) the slices from different viewpoints. These viewpoints are sometimes called dimensions (such as looking at the same sales by salesperson, or by date, or by customer, or by product, or by region, etc.).
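Two of these operations can be sketched on a toy cube held as a Python dict keyed by (product, region, quarter) coordinates; the data and dimension choices are invented for illustration.

```python
# Roll-up and slicing on a tiny in-memory cube. Keys are coordinates
# along the (product, region, quarter) dimensions; values are the measure.
cube = {
    ("widget", "EU", "Q1"): 10, ("widget", "EU", "Q2"): 12,
    ("widget", "US", "Q1"): 20, ("gizmo",  "EU", "Q1"): 5,
    ("gizmo",  "US", "Q2"): 8,
}

def roll_up(cube, axis):
    """Consolidate: sum the measure along every dimension except `axis`."""
    out = {}
    for coords, value in cube.items():
        key = coords[axis]
        out[key] = out.get(key, 0) + value
    return out

def slice_(cube, axis, label):
    """Slicing: fix one dimension to a single label."""
    return {c: v for c, v in cube.items() if c[axis] == label}

by_region = roll_up(cube, 1)        # roll up to the region level
eu_slice = slice_(cube, 1, "EU")    # just the EU face of the cube
```

Drill-down is the inverse direction: starting from `by_region`, returning to the finer-grained entries of `cube` that contribute to each total.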
Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and ad hoc queries with a rapid execution time.[7] They borrow aspects of navigational databases, hierarchical databases and relational databases.
OLAP is typically contrasted with OLTP (online transaction processing), which is generally characterized by much less complex queries, in a larger volume, issued to process transactions rather than for the purpose of business intelligence or reporting. Whereas OLAP systems are mostly optimized for reads, OLTP has to process all kinds of queries (read, insert, update and delete).
At the core of any OLAP system is an OLAP cube (also called a 'multidimensional cube' or a hypercube). It consists of numeric facts called measures that are categorized by dimensions. The measures are placed at the intersections of the hypercube, which is spanned by the dimensions as a vector space. The usual interface for manipulating an OLAP cube is a matrix interface, like pivot tables in a spreadsheet program, which performs projection operations along the dimensions, such as aggregation or averaging.
The cube metadata is typically created from a star schema, snowflake schema or fact constellation of tables in a relational database. Measures are derived from the records in the fact table and dimensions are derived from the dimension tables.
Each measure can be thought of as having a set of labels, or metadata, associated with it. A dimension is what describes these labels; it provides information about the measure.
A simple example would be a cube that contains a store's sales as a measure, and Date/Time as a dimension. Each sale has a Date/Time label that describes more about that sale.
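As an illustration, a toy cube can be modeled as a mapping from tuples of dimension members to measure values; the dimension members and figures below are invented. Slicing fixes one dimension to a single member, and consolidation (roll-up) aggregates a dimension away:

```python
# A toy OLAP "cube": sales (the measure) keyed by tuples of dimension
# members (Date, Product, Region). All names are made up for illustration.
cube = {
    ("2023-Q1", "Widget", "East"): 100,
    ("2023-Q1", "Widget", "West"): 150,
    ("2023-Q2", "Widget", "East"): 120,
    ("2023-Q2", "Gadget", "West"): 80,
}

def slice_(cube, axis, member):
    """Slicing: fix one dimension to a single member."""
    return {k: v for k, v in cube.items() if k[axis] == member}

def rollup(cube, axis):
    """Consolidation (roll-up): aggregate one dimension away."""
    out = {}
    for k, v in cube.items():
        key = k[:axis] + k[axis + 1:]
        out[key] = out.get(key, 0) + v
    return out

q1 = slice_(cube, 0, "2023-Q1")         # only the Q1 cells
by_region = rollup(rollup(cube, 0), 0)  # sum away Date, then Product
print(by_region)                        # {('East',): 220, ('West',): 230}
```

A real engine would store the cube as a dense or sparse multidimensional array rather than a Python dict, but the operations have the same shape.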
Multidimensional structure is defined as "a variation of the relational model that uses multidimensional structures to organize data and express the relationships between data".[6]: 177 The structure is broken into cubes, and the cubes are able to store and access data within the confines of each cube. "Each cell within a multidimensional structure contains aggregated data related to elements along each of its dimensions".[6]: 178 Even when data is manipulated it remains easy to access and continues to constitute a compact database format. The data still remains interrelated. Multidimensional structure is quite popular for analytical databases that use online analytical processing (OLAP) applications.[6] Analytical databases use these structures because of their ability to deliver answers to complex business queries swiftly. Data can be viewed from different angles, which gives a broader perspective on a problem than other models provide.[8]
It has been claimed that for complex queries OLAP cubes can produce an answer in around 0.1% of the time required for the same query on OLTP relational data.[9][10] The most important mechanism in OLAP that allows it to achieve such performance is the use of aggregations. Aggregations are built from the fact table by changing the granularity on specific dimensions and aggregating data up along these dimensions, using an aggregate function (or aggregation function). The number of possible aggregations is determined by every possible combination of dimension granularities.
The combination of all possible aggregations and the base data contains the answers to every query which can be answered from the data.[11]
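The count of possible aggregations can be sketched directly: given hypothetical dimension hierarchies (each extended with an implicit ALL level that aggregates the dimension away entirely), it is the product of the number of granularity levels per dimension:

```python
from math import prod

# Hypothetical hierarchies: each dimension's granularity levels, plus
# the implicit "ALL" level that aggregates the dimension away entirely.
hierarchies = {
    "Date":    ["day", "month", "year", "ALL"],
    "Product": ["sku", "category", "ALL"],
    "Store":   ["store", "region", "ALL"],
}

# Every combination of one granularity per dimension is a distinct
# aggregation (view) that could be precomputed.
n_views = prod(len(levels) for levels in hierarchies.values())
print(n_views)  # 4 * 3 * 3 = 36
```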
Because usually there are many aggregations that can be calculated, often only a predetermined number are fully calculated; the remainder are solved on demand. The problem of deciding which aggregations (views) to calculate is known as the view selection problem. View selection can be constrained by the total size of the selected set of aggregations, the time to update them from changes in the base data, or both. The objective of view selection is typically to minimize the average time to answer OLAP queries, although some studies also minimize the update time. View selection is NP-complete. Many approaches to the problem have been explored, including greedy algorithms, randomized search, genetic algorithms and the A* search algorithm.
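A much-simplified sketch of one greedy approach: assume each candidate view has an estimated storage size and an estimated query-time benefit (both numbers below are invented), and repeatedly pick the best benefit-per-size ratio under a storage budget:

```python
def greedy_view_selection(candidates, budget):
    """Pick aggregations to materialize under a storage budget.

    candidates: dict view_name -> (size, benefit), where benefit is an
    estimate of the total query time saved if the view is precomputed.
    This is a stand-in for the greedy algorithms studied for the
    (NP-complete) view selection problem, not a faithful reproduction
    of any particular published algorithm.
    """
    chosen, used = [], 0
    # Repeatedly take the view with the best benefit-per-size ratio.
    ranked = sorted(candidates.items(),
                    key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    for name, (size, benefit) in ranked:
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen

views = {"by_month":   (10, 50),  # (size units, estimated saving)
         "by_product": (40, 60),
         "by_region":  (5, 30)}
print(greedy_view_selection(views, budget=20))  # ['by_region', 'by_month']
```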
Some aggregation functions can be computed for the entire OLAP cube by precomputing values for each cell, and then computing the aggregation for a roll-up of cells by aggregating these aggregates, applying a divide and conquer algorithm to the multidimensional problem to compute them efficiently.[12] For example, the overall sum of a roll-up is just the sum of the sub-sums in each cell. Functions that can be decomposed in this way are called decomposable aggregation functions, and include COUNT, MAX, MIN, and SUM, which can be computed for each cell and then directly aggregated; these are known as self-decomposable aggregation functions.[13]
In other cases, the aggregate function can be computed by computing auxiliary numbers for cells, aggregating these auxiliary numbers, and finally computing the overall number at the end; examples include AVERAGE (tracking sum and count, dividing at the end) and RANGE (tracking max and min, subtracting at the end). In still other cases, the aggregate function cannot be computed without analyzing the entire set at once, though in some cases approximations can be computed; examples include DISTINCT COUNT, MEDIAN, and MODE; for example, the median of a set is not the median of the medians of its subsets. These latter functions are difficult to implement efficiently in OLAP, as they require computing the aggregate function on the base data, either computing it online (slow) or precomputing it for possible roll-ups (large space).
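The distinction can be sketched in a few lines: SUM is self-decomposable (the sum of a roll-up is the sum of per-cell sums), while AVERAGE needs the auxiliary pair (sum, count), because an average of averages is generally wrong:

```python
# Self-decomposable: SUM of a roll-up is the SUM of per-cell sums.
cells = [[3, 1], [4, 1, 5], [9, 2, 6]]    # base rows grouped into cells
per_cell_sums = [sum(c) for c in cells]   # computed once per cell
assert sum(per_cell_sums) == sum(x for c in cells for x in c)

# AVERAGE is not self-decomposable, but it decomposes via the
# auxiliary pair (sum, count) aggregated across cells:
pairs = [(sum(c), len(c)) for c in cells]
total, count = map(sum, zip(*pairs))
overall_avg = total / count               # equals the mean of all base rows
print(overall_avg)                        # 31 / 8 = 3.875
```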
OLAP systems have been traditionally categorized using the following taxonomy.[14]
MOLAP (multi-dimensional online analytical processing) is the classic form of OLAP and is sometimes referred to as just OLAP. MOLAP stores data in optimized multi-dimensional array storage, rather than in a relational database.
Some MOLAP tools require the pre-computation and storage of derived data, such as consolidations – an operation known as processing. Such MOLAP tools generally utilize a pre-calculated data set referred to as a data cube. The data cube contains all the possible answers to a given range of questions. As a result, they respond to queries very quickly. On the other hand, updating can take a long time depending on the degree of pre-computation. Pre-computation can also lead to what is known as data explosion.
Other MOLAP tools, particularly those that implement the functional database model, do not pre-compute derived data but make all calculations on demand, other than those that were previously requested and stored in a cache.
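Pre-computing a data cube can be sketched as materializing every group-by, i.e. one aggregation per subset of dimensions; the fact rows and dimension names below are invented:

```python
from itertools import combinations
from collections import defaultdict

# Base fact rows: (date, product, region, sales). Names are illustrative.
facts = [("Q1", "Widget", "East", 100),
         ("Q1", "Gadget", "East", 40),
         ("Q2", "Widget", "West", 150)]
dims = ("date", "product", "region")

# Precompute every group-by (every subset of dimensions): the "data
# cube" of all possible answers. 2**3 = 8 aggregations here.
data_cube = {}
for r in range(len(dims) + 1):
    for subset in combinations(range(len(dims)), r):
        agg = defaultdict(int)
        for row in facts:
            key = tuple(row[i] for i in subset)
            agg[key] += row[3]       # SUM of the sales measure
        data_cube[subset] = dict(agg)

print(data_cube[()])    # {(): 290} -- the grand total
print(data_cube[(2,)])  # {('East',): 140, ('West',): 150}
```

The exponential number of group-bys is exactly the "data explosion" mentioned above: every added dimension doubles the number of aggregations to store.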
Advantages of MOLAP
Disadvantages of MOLAP
Examples of commercial products that use MOLAP are Cognos Powerplay, Oracle Database OLAP Option, MicroStrategy, Microsoft Analysis Services, Essbase, TM1, Jedox, and icCube.
ROLAP (relational online analytical processing) works directly with relational databases and does not require pre-computation. The base data and the dimension tables are stored as relational tables, and new tables are created to hold the aggregated information. It depends on a specialized schema design. This methodology relies on manipulating the data stored in the relational database to give the appearance of traditional OLAP's slicing and dicing functionality. In essence, each action of slicing and dicing is equivalent to adding a WHERE clause to the SQL statement. ROLAP tools do not use pre-calculated data cubes but instead pose the query to the standard relational database and its tables in order to bring back the data required to answer the question. ROLAP tools feature the ability to ask any question because the methodology is not limited to the contents of a cube. ROLAP also has the ability to drill down to the lowest level of detail in the database.
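As a rough sketch of the "slicing becomes a WHERE clause" idea, with hypothetical table and column names (a real ROLAP engine would also generate joins to the dimension tables and escape values properly):

```python
def rolap_query(measures, filters, table="fact_sales"):
    """Translate a slice/dice selection into SQL text.

    Each selected dimension member becomes one predicate in the WHERE
    clause. Table and column names are invented for illustration.
    """
    select = ", ".join(f"SUM({m}) AS {m}" for m in measures)
    where = " AND ".join(f"{col} = '{val}'" for col, val in filters.items())
    return f"SELECT {select} FROM {table} WHERE {where}"

sql = rolap_query(["sales_amount"], {"region": "East", "year": "2023"})
print(sql)
# SELECT SUM(sales_amount) AS sales_amount FROM fact_sales WHERE region = 'East' AND year = '2023'
```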
While ROLAP uses a relational database source, generally the database must be carefully designed for ROLAP use. A database which was designed forOLTPwill not function well as a ROLAP database. Therefore, ROLAP still involves creating an additional copy of the data. However, since it is a database, a variety of technologies can be used to populate the database.
In the OLAP industry ROLAP is usually perceived as being able to scale for large data volumes but suffering from slower query performance than MOLAP. The OLAP Survey, the largest independent survey across all major OLAP products, conducted over six years (2001 to 2006), consistently found that companies using ROLAP reported slower performance than those using MOLAP, even when data volumes were taken into consideration.
However, as with any survey there are a number of subtle issues that must be taken into account when interpreting the results.
Some companies select ROLAP because they intend to re-use existing relational database tables – these tables will frequently not be optimally designed for OLAP use. The superior flexibility of ROLAP tools allows this less-than-optimal design to work, but performance suffers. MOLAP tools, in contrast, would force the data to be re-loaded into an optimal OLAP design.
The undesirable trade-off between additional ETL cost and slow query performance has ensured that most commercial OLAP tools now use a "Hybrid OLAP" (HOLAP) approach, which allows the model designer to decide which portion of the data will be stored in MOLAP and which portion in ROLAP.
There is no clear agreement across the industry as to what constitutes "Hybrid OLAP", except that a database will divide data between relational and specialized storage.[15] For example, for some vendors, a HOLAP database will use relational tables to hold the larger quantities of detailed data and use specialized storage for at least some aspects of the smaller quantities of more-aggregate or less-detailed data. HOLAP addresses the shortcomings of MOLAP and ROLAP by combining the capabilities of both approaches. HOLAP tools can utilize both pre-calculated cubes and relational data sources.
In this mode HOLAP stores aggregations in MOLAP for fast query performance, and detailed data in ROLAP to optimize cube processing time.
In this mode HOLAP stores some slice of the data, usually the more recent data (i.e. sliced by the Time dimension), in MOLAP for fast query performance, and older data in ROLAP. Moreover, some dice can be stored in MOLAP and others in ROLAP, leveraging the fact that in a large cuboid there will be dense and sparse subregions.[16]
The first product to provide HOLAP storage was Holos, but the technology also became available in other commercial products such as Microsoft Analysis Services, Oracle Database OLAP Option, MicroStrategy and SAP AG's BI Accelerator. The hybrid OLAP approach combines ROLAP and MOLAP technology, benefiting from the greater scalability of ROLAP and the faster computation of MOLAP. For example, a HOLAP server may store large volumes of detailed data in a relational database, while aggregations are kept in a separate MOLAP store. Microsoft SQL Server 7.0 OLAP Services supports a hybrid OLAP server.
Each type has certain benefits, although there is disagreement about the specifics of the benefits between providers.
The following acronyms are also sometimes used, although they are not as widespread as the ones above:
Unlike relational databases, which had SQL as the standard query language and widespread APIs such as ODBC, JDBC and OLE DB, there was no such unification in the OLAP world for a long time. The first real standard API was the OLE DB for OLAP specification from Microsoft, which appeared in 1997 and introduced the MDX query language. Several OLAP vendors – both server and client – adopted it. In 2001 Microsoft and Hyperion announced the XML for Analysis specification, which was endorsed by most of the OLAP vendors. Since this also used MDX as a query language, MDX became the de facto standard.[26] Since September 2011, LINQ can be used to query SSAS OLAP cubes from Microsoft .NET.[27]
The first product that performed OLAP queries was Express, which was released in 1970 (and acquired by Oracle in 1995 from Information Resources).[28] However, the term did not appear until 1993, when it was coined by Edgar F. Codd, who has been described as "the father of the relational database". Codd's paper[1] resulted from a short consulting assignment which Codd undertook for former Arbor Software (later Hyperion Solutions, and in 2007 acquired by Oracle), as a sort of marketing coup.
The company had released its own OLAP product, Essbase, a year earlier. As a result, Codd's "twelve laws of online analytical processing" were explicit in their reference to Essbase. There was some ensuing controversy, and when Computerworld learned that Codd was paid by Arbor, it retracted the article. The OLAP market experienced strong growth in the late 1990s, with dozens of commercial products coming to market. In 1998, Microsoft released its first OLAP server, Microsoft Analysis Services, which drove wide adoption of OLAP technology and moved it into the mainstream.
OLAP clients include many spreadsheet programs such as Excel, web applications, SQL clients, dashboard tools, and more. Many clients support interactive data exploration where users select the dimensions and measures of interest. Some dimensions are used as filters (for slicing and dicing the data) while others are selected as the axes of a pivot table or pivot chart. Users can also vary the aggregation level (for drilling down or rolling up) of the displayed view. Clients can also offer a variety of graphical widgets, such as sliders, geographic maps and heat maps, which can be grouped and coordinated as dashboards. An extensive list of clients appears in the visualization column of the comparison of OLAP servers table.
Below is a list of the top OLAP vendors in 2006, with figures in millions of US dollars.[29]
|
https://en.wikipedia.org/wiki/Online_analytical_processing
|
Relational transducers are a theoretical model for studying computer systems through the lens of database relations. The model extends the transducer model in formal language theory. They were first introduced in 1998 by Abiteboul et al. for the study of electronic commerce applications.[1] The computation model treats the input and output as sequences of relations. The state of the transducer is a state of a database, and transitions through the state machine can be thought of as updates to the database state. The model was inspired by the design of active databases and motivated by a desire to be able to express business applications declaratively via logical formulas.
The relational transducer model has been applied to the study of computer network management,[2] e-commerce platforms,[1][3] and coordination-free distributed systems.[4][5][6][7]
A relational transducer has a schema made up of five components: In, State, Out, DB, and Log. In and Out represent the inputs to the system from users and the outputs back to the users respectively. DB represents the contents of the database and State represents the information that the system remembers. The Log contains the important subset of the inputs and outputs.
The relational schemas of the components are disjoint, except for Log, whose schema is a subset of In ∪ Out.
A relational transducer over a relational transducer schema is made up of three parts:
Models of computation extending relational transducers have been developed, including the Distributed Shared Relations model[8] for synchronous distributed systems and the Abstract State Machine Transducer model[3] for the verification of transaction protocols.
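A minimal, heavily simplified sketch of the idea, with an invented transition rule (output an ordered item iff the database says it is in stock); relations are modeled as Python sets of tuples, and the class and rule are hypothetical, not part of any published formalism:

```python
# Toy relational transducer: each step consumes an input relation
# instance, updates State, emits an Out instance, and records entries
# of In and Out in the Log. DB is treated as read-only here.
class RelationalTransducer:
    def __init__(self, db):
        self.db = set(db)    # DB relations
        self.state = set()   # State relations
        self.log = []        # Log: recorded inputs/outputs

    def step(self, inputs):
        # Invented rule: ship an item iff it was ordered (input) and
        # is in stock (DB); remember past orders in State.
        out = {item for item in inputs if ("stock", item) in self.db}
        self.state |= {("ordered", i) for i in inputs}
        self.log.append((frozenset(inputs), frozenset(out)))
        return out

t = RelationalTransducer(db={("stock", "book"), ("stock", "pen")})
print(t.step({"book", "lamp"}))  # {'book'} -- 'lamp' is not in stock
```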
|
https://en.wikipedia.org/wiki/Relational_transducer
|
In computing, a snowflake schema or snowflake model is a logical arrangement of tables in a multidimensional database such that the entity relationship diagram resembles a snowflake shape. The snowflake schema is represented by centralized fact tables which are connected to multiple dimensions. "Snowflaking" is a method of normalizing the dimension tables in a star schema. When it is completely normalized along all the dimension tables, the resultant structure resembles a snowflake with the fact table in the middle. The principle behind snowflaking is normalization of the dimension tables by removing low-cardinality attributes and forming separate tables.[1]
The snowflake schema is similar to the star schema. However, in the snowflake schema, dimensions are normalized into multiple related tables, whereas the star schema's dimensions are denormalized, with each dimension represented by a single table. A complex snowflake shape emerges when the dimensions of a snowflake schema are elaborate, having multiple levels of relationships, and the child tables have multiple parent tables ("forks in the road").
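A minimal snowflake schema can be sketched with SQLite: the shop dimension is normalized one level further into a country table, so queries traverse an extra join. All table and column names are illustrative:

```python
import sqlite3

# Tiny snowflake schema: fact_sales -> dim_shop -> dim_country.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_country (country_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_shop (shop_id INTEGER PRIMARY KEY, name TEXT,
                           country_id INTEGER REFERENCES dim_country);
    CREATE TABLE fact_sales (shop_id INTEGER REFERENCES dim_shop,
                             amount REAL);
    INSERT INTO dim_country VALUES (1, 'Germany'), (2, 'France');
    INSERT INTO dim_shop VALUES (1, 'Berlin Store', 1), (2, 'Paris Store', 2);
    INSERT INTO fact_sales VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")

# Queries must traverse the extra normalization level (shop -> country);
# in a star schema the country name would sit directly on dim_shop.
rows = con.execute("""
    SELECT c.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_shop s ON f.shop_id = s.shop_id
    JOIN dim_country c ON s.country_id = c.country_id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('France', 75.0), ('Germany', 150.0)]
```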
Star and snowflake schemas are most commonly found in dimensional data warehouses and data marts, where the speed of data retrieval is more important than the efficiency of data manipulation. As such, the tables in these schemas are not normalized much, and are frequently designed at a level of normalization short of third normal form.[2]
Normalizationsplits up data to avoid redundancy (duplication) by moving commonly repeating groups of data into new tables. Normalization therefore tends to increase the number of tables that need to be joined in order to perform a given query, but reduces the space required to hold the data and the number of places where it needs to be updated if the data changes.
From a storage point of view, dimensional tables are typically small compared to fact tables. This often negates the potential storage-space benefits of the snowflake schema as compared to the star schema. Example: one million sales transactions in 300 shops in 220 countries would result in 1,000,300 records in a star schema (1,000,000 records in the fact table and 300 records in the dimensional table, where each country would be listed explicitly for each shop in that country). A more normalized snowflake schema with country keys referring to a country table would consist of the same 1,000,000-record fact table and a 300-record shop table with references to a country table of 220 records. In this case, the star schema, although further denormalized, would only reduce the number of records by a negligible ~0.02% (= [1,000,000 + 300] instead of [1,000,000 + 300 + 220]).
Some database developers compromise by creating an underlying snowflake schema with views built on top of it that perform many of the necessary joins to simulate a star schema. This provides the storage benefits achieved through the normalization of dimensions with the ease of querying that the star schema provides. The tradeoff is that requiring the server to perform the underlying joins automatically can result in a performance hit when querying, as well as extra joins to tables that may not be necessary to fulfill certain queries.[citation needed]
The snowflake schema is in the same family as the star schema logical model. In fact, the star schema is considered a special case of the snowflake schema. The snowflake schema provides some advantages over the star schema in certain situations, including:
The primary disadvantage of the snowflake schema is that the additional levels of attribute normalization add complexity to source query joins, when compared to the star schema.
Snowflake schemas, in contrast to flat single-table dimensions, have been heavily criticised. Their goal is assumed to be efficient and compact storage of normalised data, but this comes at the significant cost of poor performance when browsing the joins required by such a dimension.[4] This disadvantage may have lessened in the years since it was first recognized, owing to better query performance within the browsing tools.
The example schema shown to the right is a snowflaked version of the star schema example provided in the star schema article.
The following example query is the snowflake schema equivalent of the star schema example code, which returns the total number of television units sold by brand and by country for 1997. Notice that the snowflake schema query requires many more joins than the star schema version in order to fulfill even a simple query. The benefit of using the snowflake schema in this example is that the storage requirements are lower, since the snowflake schema eliminates many duplicate values from the dimensions themselves.
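A minimal sketch of such a snowflake query, using SQLite via Python's standard library. The table and column names here are assumptions chosen for illustration (the product dimension snowflakes into a brand table, and the store dimension into a country table), not the article's exact schema; the point is the chain of joins needed to reach the normalized attributes.

```python
# Hypothetical snowflake schema: brand and country live in their own
# normalized tables, so extra joins are needed to reach them.
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.executescript("""
CREATE TABLE dim_brand   (brand_id INTEGER PRIMARY KEY, brand_name TEXT);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, product_name TEXT,
                          brand_id INTEGER REFERENCES dim_brand);
CREATE TABLE dim_country (country_id INTEGER PRIMARY KEY, country_name TEXT);
CREATE TABLE dim_store   (store_id INTEGER PRIMARY KEY,
                          country_id INTEGER REFERENCES dim_country);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_sales  (date_id INTEGER, store_id INTEGER,
                          product_id INTEGER, units_sold INTEGER);

INSERT INTO dim_brand   VALUES (1, 'Brand A');
INSERT INTO dim_product VALUES (1, 'TV set', 1);
INSERT INTO dim_country VALUES (1, 'France');
INSERT INTO dim_store   VALUES (1, 1);
INSERT INTO dim_date    VALUES (1, 1997);
INSERT INTO fact_sales  VALUES (1, 1, 1, 4), (1, 1, 1, 3);
""")

# Five joins just to group by brand and country for a single year.
rows = cur.execute("""
SELECT b.brand_name, co.country_name, SUM(f.units_sold)
FROM fact_sales f
JOIN dim_product p  ON f.product_id = p.product_id
JOIN dim_brand   b  ON p.brand_id   = b.brand_id
JOIN dim_store   s  ON f.store_id   = s.store_id
JOIN dim_country co ON s.country_id = co.country_id
JOIN dim_date    d  ON f.date_id    = d.date_id
WHERE d.year = 1997 AND p.product_name = 'TV set'
GROUP BY b.brand_name, co.country_name
""").fetchall()
print(rows)  # [('Brand A', 'France', 7)]
```

Each normalized attribute (brand, country) costs one extra join compared with the star schema version, which is exactly the trade-off the text describes.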
|
https://en.wikipedia.org/wiki/Snowflake_schema
|
Structured Query Language (SQL) (pronounced /ˌɛsˌkjuˈɛl/ S-Q-L, or alternatively as /ˈsiːkwəl/ "sequel")[4][5] is a domain-specific language used to manage data, especially in a relational database management system (RDBMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables.
Introduced in the 1970s, SQL offered two main advantages over older read–write APIs such as ISAM or VSAM. Firstly, it introduced the concept of accessing many records with one single command. Secondly, it eliminated the need to specify how to reach a record, i.e., with or without an index.
Originally based upon relational algebra and tuple relational calculus, SQL consists of many types of statements,[6] which may be informally classed as sublanguages, commonly: Data Query Language (DQL), Data Definition Language (DDL), Data Control Language (DCL), and Data Manipulation Language (DML).[7]
The scope of SQL includes data query, data manipulation (insert, update, and delete), data definition (schema creation and modification), and data access control. Although SQL is essentially a declarative language (4GL), it also includes procedural elements.
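A small sketch of the first three sublanguages in action, using SQLite through Python's standard library. The table and data are hypothetical; note that SQLite has no DCL (GRANT/REVOKE), since it has no user accounts, so that sublanguage is omitted here.

```python
# One statement from each informal SQL sublanguage against an
# in-memory SQLite database (table name and data are illustrative).
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# DDL: define the schema.
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# DML: insert and update rows.
cur.execute("INSERT INTO employee (name, dept) VALUES ('Ada', 'R&D'), ('Grace', 'Ops')")
cur.execute("UPDATE employee SET dept = 'R&D' WHERE name = 'Grace'")

# DQL: query the data back.
count = cur.execute("SELECT COUNT(*) FROM employee WHERE dept = 'R&D'").fetchone()[0]
print(count)  # 2
```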
SQL was one of the first commercial languages to use Edgar F. Codd's relational model. The model was described in his influential 1970 paper, "A Relational Model of Data for Large Shared Data Banks".[8] Despite not entirely adhering to the relational model as described by Codd, SQL became the most widely used database language.[9][10]
SQL became a standard of the American National Standards Institute (ANSI) in 1986 and of the International Organization for Standardization (ISO) in 1987.[11] Since then, the standard has been revised multiple times to include a larger set of features and incorporate common extensions. Despite the existence of standards, virtually no implementations in existence adhere to it fully, and most SQL code requires at least some changes before being ported to different database systems.
SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce after learning about the relational model from Edgar F. Codd[12] in the early 1970s.[13] This version, initially called SEQUEL (Structured English Query Language), was designed to manipulate and retrieve data stored in IBM's original quasi-relational database management system, System R, which a group at IBM San Jose Research Laboratory had developed during the 1970s.[13]
Chamberlin and Boyce's first attempt at a relational database language was SQUARE (Specifying Queries in A Relational Environment), but it was difficult to use due to subscript/superscript notation. After moving to the San Jose Research Laboratory in 1973, they began work on a sequel to SQUARE.[12] The original name SEQUEL, which is widely regarded as a pun on QUEL, the query language of Ingres,[14] was later changed to SQL (dropping the vowels) because "SEQUEL" was a trademark of the UK-based Hawker Siddeley Dynamics Engineering Limited company.[15] The label SQL later became the acronym for Structured Query Language.[16]
After testing SQL at customer test sites to determine the usefulness and practicality of the system, IBM began developing commercial products based on their System R prototype, including System/38, SQL/DS, and IBM Db2, which were commercially available in 1979, 1981, and 1983, respectively.[17]
In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the potential of the concepts described by Codd, Chamberlin, and Boyce, and developed their own SQL-based RDBMS with aspirations of selling it to the U.S. Navy, Central Intelligence Agency, and other U.S. government agencies. In June 1979, Relational Software introduced one of the first commercially available implementations of SQL, Oracle V2 (Version 2) for VAX computers.
By 1986, ANSI and ISO standard groups officially adopted the standard "Database Language SQL" language definition. New versions of the standard were published in 1989, 1992, 1996, 1999, 2003, 2006, 2008, 2011,[12] 2016 and, most recently, 2023.[18]
SQL implementations are incompatible between vendors and do not necessarily completely follow standards. In particular, date and time syntax, string concatenation, NULLs, and comparison case sensitivity vary from vendor to vendor. PostgreSQL[19] and Mimer SQL[20] strive for standards compliance, though PostgreSQL does not adhere to the standard in all cases. For example, the folding of unquoted names to lower case in PostgreSQL is incompatible with the SQL standard,[21] which says that unquoted names should be folded to upper case.[22] Thus, according to the standard, Foo should be equivalent to FOO, not foo.
Popular implementations of SQL commonly omit support for basic features of Standard SQL, such as the DATE or TIME data types. The most obvious such examples, and incidentally the most popular commercial and proprietary SQL DBMSs, are Oracle (whose DATE behaves as DATETIME,[23][24] and lacks a TIME type)[25] and MS SQL Server (before the 2008 version). As a result, SQL code can rarely be ported between database systems without modifications.
Several reasons for the lack of portability between database systems include:
SQL was adopted as a standard by ANSI in 1986 as SQL-86[27] and by ISO in 1987.[11] It is maintained by ISO/IEC JTC 1, Information technology, Subcommittee SC 32, Data management and interchange.
Until 1996, the National Institute of Standards and Technology (NIST) data-management standards program certified SQL DBMS compliance with the SQL standard. Vendors now self-certify the compliance of their products.[28]
The original standard declared that the official pronunciation for "SQL" was an initialism: /ˌɛsˌkjuːˈɛl/ ("ess cue el").[9] Regardless, many English-speaking database professionals (including Donald Chamberlin himself[29]) use the acronym-like pronunciation of /ˈsiːkwəl/ ("sequel"),[30] mirroring the language's prerelease development name, "SEQUEL".[13][15][29] The SQL standard has gone through a number of revisions:
The standard is commonly denoted by the pattern: ISO/IEC 9075-n:yyyy Part n: title, or, as a shortcut, ISO/IEC 9075. Interested parties may purchase the standards documents from ISO,[35] IEC, or ANSI. Some old drafts are freely available.[36][37]
ISO/IEC 9075 is complemented by ISO/IEC 13249: SQL Multimedia and Application Packages and some technical reports.
The SQL language is subdivided into several language elements, including:
SQL is designed for a specific purpose: to query data contained in a relational database. SQL is a set-based, declarative programming language, not an imperative programming language like C or BASIC. However, extensions to Standard SQL add procedural programming language functionality, such as control-of-flow constructs.
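The declarative/imperative contrast can be sketched in a few lines: the same change expressed as one set-based SQL statement versus an explicit row-by-row loop. The table and data are hypothetical, and SQLite (via Python's standard library) stands in for the RDBMS.

```python
# Declarative vs. imperative: the SQL UPDATE states WHAT should change;
# the Python loop below it spells out HOW, row by row.
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE prices (item TEXT, price REAL)")
cur.executemany("INSERT INTO prices VALUES (?, ?)",
                [("a", 10.0), ("b", 20.0), ("c", 30.0)])

# Declarative: one statement, the engine picks the access path.
cur.execute("UPDATE prices SET price = price * 1.1 WHERE price < 25")

# Imperative equivalent of an aggregate: iterate and accumulate by hand.
total = 0.0
for (price,) in cur.execute("SELECT price FROM prices"):
    total += price
print(round(total, 2))  # 63.0
```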
In addition to the standard SQL/PSM extensions and proprietary SQL extensions, procedural and object-oriented programmability is available on many SQL platforms via DBMS integration with other languages. The SQL standard defines SQL/JRT extensions (SQL Routines and Types for the Java Programming Language) to support Java code in SQL databases. Microsoft SQL Server 2005 uses the SQLCLR (SQL Server Common Language Runtime) to host managed .NET assemblies in the database, while prior versions of SQL Server were restricted to unmanaged extended stored procedures primarily written in C. PostgreSQL lets users write functions in a wide variety of languages—including Perl, Python, Tcl, JavaScript (PL/V8) and C.[39]
A distinction should be made between alternatives to SQL as a language, and alternatives to the relational model itself. Below are proposed relational alternatives to the SQL language. See navigational database and NoSQL for alternatives to the relational model.
Distributed Relational Database Architecture (DRDA) was designed by a workgroup within IBM from 1988 to 1994. DRDA enables network-connected relational databases to cooperate to fulfill SQL requests.[41][42]
An interactive user or program can issue SQL statements to a local RDB and receive tables of data and status indicators in reply from remote RDBs. SQL statements can also be compiled and stored in remote RDBs as packages and then invoked by package name. This is important for the efficient operation of application programs that issue complex, high-frequency queries. It is especially important when the tables to be accessed are located in remote systems.
The messages, protocols, and structural components of DRDA are defined by the Distributed Data Management Architecture. Distributed SQL processing à la DRDA is distinct from contemporary distributed SQL databases.
SQL deviates in several ways from its theoretical foundation, the relational model and its tuple calculus. In that model, a table is a set of tuples, while in SQL, tables and query results are lists of rows; the same row may occur multiple times, and the order of rows can be employed in queries (e.g., in the LIMIT clause).
Critics argue that SQL should be replaced with a language that returns strictly to the original foundation: for example, see The Third Manifesto by Hugh Darwen and C. J. Date (2006, ISBN 0-321-39942-0).
Early specifications did not support major features, such as primary keys. Result sets could not be named, and subqueries had not been defined. These were added in 1992.[12]
The lack of sum types has been described as a roadblock to full use of SQL's user-defined types. JSON support, for example, needed to be added by a new standard in 2016.[43]
The concept of Null is the subject of some debate. The Null marker indicates the absence of a value, and is distinct from a value of 0 for an integer column or an empty string for a text column. The concept of Nulls enforces the three-valued logic in SQL, which is a concrete implementation of the general three-valued logic.[12]
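Three-valued logic is easy to observe directly. The sketch below uses SQLite through Python's standard library; comparisons involving NULL evaluate to NULL (surfaced as `None` by the driver), and a WHERE clause keeps only rows where the condition is TRUE, so "unknown" comparisons filter rows out.

```python
# NULL comparisons yield NULL (unknown), not TRUE or FALSE.
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
eq = cur.execute("SELECT NULL = NULL").fetchone()[0]
print(eq)  # None -- the comparison is unknown, not true
print(cur.execute("SELECT NULL IS NULL").fetchone()[0])  # 1 -- IS NULL is true

# WHERE keeps only rows where the predicate is TRUE, so the NULL row
# is excluded even by the tautology-looking condition x = x.
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (1), (NULL)")
rows = cur.execute("SELECT x FROM t WHERE x = x").fetchall()
print(rows)  # [(1,)]
```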
Another popular criticism is that SQL allows duplicate rows, making integration difficult with languages such as Python, whose data types might make it hard to represent the data accurately,[12] both in terms of parsing and because of the absence of modularity. This is usually avoided by declaring a primary key, or a unique constraint, with one or more columns that uniquely identify a row in the table.
In a sense similar to object–relational impedance mismatch, a mismatch occurs between the declarative SQL language and the procedural languages in which SQL is typically embedded.[citation needed]
The SQL standard defines three kinds of data types (chapter 4.1.1 of SQL/Foundation):
Constructed types are one of ARRAY, MULTISET, REF(erence), or ROW. User-defined types are comparable to classes in object-oriented languages, with their own constructors, observers, mutators, methods, inheritance, overloading, overwriting, interfaces, and so on. Predefined data types are intrinsically supported by the implementation.
|
https://en.wikipedia.org/wiki/SQL
|
In computing, the star schema or star model is the simplest style of data mart schema and is the approach most widely used to develop data warehouses and dimensional data marts.[1] The star schema consists of one or more fact tables referencing any number of dimension tables. The star schema is an important special case of the snowflake schema, and is more effective for handling simpler queries.[2]
The star schema gets its name from the physical model's[3] resemblance to a star shape, with a fact table at its center and the dimension tables surrounding it representing the star's points.
The star schema separates business process data into facts, which hold the measurable, quantitative data about a business, and dimensions, which are descriptive attributes related to fact data. Examples of fact data include sales price, sale quantity, and time, distance, speed and weight measurements. Related dimension attribute examples include product models, product colors, product sizes, geographic locations, and salesperson names.
A star schema that has many dimensions is sometimes called a centipede schema.[4] Having dimensions of only a few attributes, while simpler to maintain, results in queries with many table joins and makes the star schema less easy to use.
Fact tables record measurements or metrics for a specific event.
Fact tables generally consist of numeric values, and foreign keys to dimensional data where descriptive information is kept.[4] Fact tables are designed to a low level of uniform detail (referred to as "granularity" or "grain"), meaning facts can record events at a very atomic level. This can result in the accumulation of a large number of records in a fact table over time. Fact tables are defined as one of three types:
Fact tables are generally assigned a surrogate key to ensure each row can be uniquely identified.
This key is a simple primary key.
Dimension tables usually have a relatively small number of records compared to fact tables, but each record may have a very large number of attributes to describe the fact data. Dimensions can define a wide variety of characteristics, but some of the most common attributes defined by dimension tables include:
Dimension tables are generally assigned a surrogate primary key, usually a single-column integer data type, mapped to the combination of dimension attributes that form the natural key.
Star schemas are denormalized, meaning the typical rules of normalization applied to transactional relational databases are relaxed during star-schema design and implementation. The benefits of star-schema denormalization are:
Consider a database of sales, perhaps from a store chain, classified by date, store and product. The image of the schema to the right is a star schema version of the sample schema provided in the snowflake schema article.
Fact_Sales is the fact table, and there are three dimension tables: Dim_Date, Dim_Store and Dim_Product.
Each dimension table has a primary key on its Id column, relating to one of the columns (viewed as rows in the example schema) of the Fact_Sales table's three-column (compound) primary key (Date_Id, Store_Id, Product_Id). The non-primary-key Units_Sold column of the fact table in this example represents a measure or metric that can be used in calculations and analysis. The non-primary-key columns of the dimension tables represent additional attributes of the dimensions (such as the Year of the Dim_Date dimension).
For example, the following query answers how many TV sets have been sold, for each brand and country, in 1997:
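A sketch of that star query follows, using the Fact_Sales/Dim_Date/Dim_Store/Dim_Product names from the text; the Brand and Country columns in the denormalized dimensions are assumptions for illustration. SQLite via Python's standard library stands in for the warehouse. Compared with the snowflake version, only one join per dimension is needed.

```python
# Hypothetical denormalized star dimensions: Brand lives directly on
# Dim_Product and Country directly on Dim_Store (no extra outrigger tables).
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.executescript("""
CREATE TABLE Dim_Date    (Id INTEGER PRIMARY KEY, Year INTEGER);
CREATE TABLE Dim_Store   (Id INTEGER PRIMARY KEY, Country TEXT);
CREATE TABLE Dim_Product (Id INTEGER PRIMARY KEY, Name TEXT, Brand TEXT);
CREATE TABLE Fact_Sales  (Date_Id INTEGER, Store_Id INTEGER,
                          Product_Id INTEGER, Units_Sold INTEGER);
INSERT INTO Dim_Date    VALUES (1, 1997);
INSERT INTO Dim_Store   VALUES (1, 'Germany');
INSERT INTO Dim_Product VALUES (1, 'TV set', 'Brand X');
INSERT INTO Fact_Sales  VALUES (1, 1, 1, 5), (1, 1, 1, 2);
""")

# One join per dimension is enough to group by brand and country.
rows = cur.execute("""
SELECT p.Brand, s.Country, SUM(f.Units_Sold)
FROM Fact_Sales f
JOIN Dim_Date    d ON f.Date_Id    = d.Id
JOIN Dim_Store   s ON f.Store_Id   = s.Id
JOIN Dim_Product p ON f.Product_Id = p.Id
WHERE d.Year = 1997 AND p.Name = 'TV set'
GROUP BY p.Brand, s.Country
""").fetchall()
print(rows)  # [('Brand X', 'Germany', 7)]
```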
|
https://en.wikipedia.org/wiki/Star_schema
|
FOAF (an acronym of friend of a friend) is a machine-readable ontology describing persons, their activities and their relations to other people and objects. Anyone can use FOAF to describe themselves. FOAF allows groups of people to describe social networks without the need for a centralised database.
FOAF is a descriptive vocabulary expressed using the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Computers may use these FOAF profiles to find, for example, all people living in Europe, or to list all people both you and a friend of yours know.[1][2] This is accomplished by defining relationships between people. Each profile has a unique identifier (such as the person's e-mail addresses, international telephone number, Facebook account name, a Jabber ID, or a URI of the homepage or weblog of the person), which is used when defining these relationships.
The FOAF project, which defines and extends the vocabulary of a FOAF profile, was started in 2000 by Libby Miller and Dan Brickley. It can be considered the first Social Semantic Web application,[citation needed] in that it combines RDF technology with 'social web' concerns.[clarification needed]
Tim Berners-Lee, in a 2007 essay,[3] redefined the semantic web concept into the Giant Global Graph (GGG), where relationships transcend networks and documents. He considers the GGG to be on equal ground with the Internet and the World Wide Web, stating that "I express my network in a FOAF file, and that is a start of the revolution."
FOAF is one of the key components of the WebID specifications, in particular for the WebID+TLS protocol, which was formerly known as FOAF+SSL.
Although it is a relatively simple use-case and standard, FOAF has had limited adoption on the web. For example, the LiveJournal and DeadJournal blogging sites support FOAF profiles for all their members,[4] and the My Opera community supported FOAF profiles for members as well as groups. FOAF support is present on Identi.ca, FriendFeed, WordPress and TypePad services.[5]
The Yandex blog search platform supports search over FOAF profile information.[6] Prominent client-side FOAF support was available in the Safari[7] web browser before RSS support was removed in Safari 6, and in the Semantic Radar[8] plugin for the Firefox browser. Semantic MediaWiki, the semantic annotation and linked data extension of MediaWiki, supports mapping properties to external ontologies, including FOAF, which is enabled by default.
There are also modules or plugins to support FOAF profiles or FOAF+SSL authorization for programming languages,[9][10] as well as for content management systems.[11]
The following FOAF profile (written in Turtle format) states that James Wales is the name of the person described here. His e-mail address, homepage and depiction are web resources, which means that each can be described using RDF as well. He has Wikimedia as an interest, and knows Angela Beesley (which is the name of a 'Person' resource).
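A minimal Turtle sketch matching that description. The `foaf:` property names are the standard FOAF vocabulary terms; the concrete URIs (the `#JW` identifier and the example.com addresses) are placeholders, not the actual values from the original example.

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

<#JW> rdf:type foaf:Person ;
    foaf:name      "James Wales" ;
    foaf:mbox      <mailto:jwales@example.com> ;
    foaf:homepage  <http://www.example.com/> ;
    foaf:depiction <http://www.example.com/photo.jpg> ;
    foaf:interest  <http://www.wikimedia.org> ;
    foaf:knows     [ rdf:type foaf:Person ; foaf:name "Angela Beesley" ] .
```

The e-mail address, homepage and depiction appear as URIs (web resources), so further RDF statements can be made about each of them, as the text notes.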
|
https://en.wikipedia.org/wiki/FOAF
|
The closed-world assumption (CWA), in a formal system of logic used for knowledge representation, is the presumption that a statement that is true is also known to be true. Conversely, what is not currently known to be true is false. The same name also refers to a logical formalization of this assumption by Raymond Reiter.[1] The opposite of the closed-world assumption is the open-world assumption (OWA), stating that lack of knowledge does not imply falsity. The choice between CWA and OWA determines the actual semantics of a conceptual expression, even when the same notation for concepts is used. A successful formalization of natural language semantics usually cannot avoid an explicit revelation of whether the implicit logical background is based on CWA or OWA.
Negation as failure is related to the closed-world assumption, as it amounts to believing false every predicate that cannot be proved to be true.
In the context of knowledge management, the closed-world assumption is used in at least two situations: (1) when the knowledge base is known to be complete (e.g., a corporate database containing records for every employee), and (2) when the knowledge base is known to be incomplete but a "best" definite answer must be derived from incomplete information. For example, if a database contains the following table reporting editors who have worked on a given article, a query on the people not having edited the article on Formal Logic is usually expected to return "Sarah Johnson".
In the closed-world assumption, the table is assumed to be complete (it lists all editor–article relationships), and Sarah Johnson is the only editor who has not edited the article on Formal Logic. In contrast, with the open-world assumption the table is not assumed to contain all editor–article tuples, and the answer to who has not edited the Formal Logic article is unknown. There is an unknown number of editors not listed in the table, and an unknown number of articles edited by Sarah Johnson that are also not listed in the table.
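The closed-world query amounts to a set difference over the table, which a few lines of Python can show. Apart from Sarah Johnson (named in the text), the editor names and the second article title below are hypothetical filler rows.

```python
# Toy editors table: under CWA, it is assumed complete, so "has not edited
# Formal Logic" is just a set difference. Rows other than Sarah Johnson's
# are hypothetical.
edits = {
    ("Sarah Johnson", "Revision History"),
    ("John Doe", "Formal Logic"),
    ("Alice Smith", "Formal Logic"),
}

editors = {person for person, _ in edits}
edited_formal_logic = {person for person, article in edits
                       if article == "Formal Logic"}

# Closed world: anyone not listed as an editor of Formal Logic
# is taken NOT to have edited it.
non_editors = editors - edited_formal_logic
print(non_editors)  # {'Sarah Johnson'}
```

Under OWA the same difference would only give editors *not known* to have edited the article, not a definite answer.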
The first formalization of the closed-world assumption in formal logic consists in adding to the knowledge base the negation of the literals that are not currently entailed by it. The result of this addition is always consistent if the knowledge base is in Horn form, but is not guaranteed to be consistent otherwise. For example, the knowledge base {English(Fred) ∨ Irish(Fred)}
entails neither English(Fred) nor Irish(Fred).
Adding the negation of these two literals to the knowledge base leads to {English(Fred) ∨ Irish(Fred), ¬English(Fred), ¬Irish(Fred)},
which is inconsistent. In other words, this formalization of the closed-world assumption sometimes turns a consistent knowledge base into an inconsistent one. The closed-world assumption does not introduce an inconsistency on a knowledge base K exactly when the intersection of all Herbrand models of K is also a model of K; in the propositional case, this condition is equivalent to K having a single minimal model, where a model is minimal if no other model has a subset of variables assigned to true.
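The model-intersection condition can be checked mechanically for the two-variable example above (taking the knowledge base to be the disjunction English ∨ Irish, as in the standard example):

```python
# Enumerate all truth assignments for (English(Fred), Irish(Fred)),
# keep the models of K = English ∨ Irish, and test whether their
# variable-wise intersection is itself a model of K.
from itertools import product

def K(english, irish):
    return english or irish  # the knowledge base as a propositional formula

models = [m for m in product([False, True], repeat=2) if K(*m)]
print(models)  # three models: (F,T), (T,F), (T,T) -- two of them minimal

# Intersection: a variable is true only if it is true in EVERY model.
intersection = tuple(all(m[i] for m in models) for i in range(2))
print(intersection, K(*intersection))  # (False, False) False

# The intersection is not a model of K, so the naive CWA (adding both
# negations) makes the knowledge base inconsistent, matching the text.
```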
Alternative formalizations not suffering from this problem have been proposed. In the following description, the considered knowledge base K is assumed to be propositional. In all cases, the formalization of the closed-world assumption is based on adding to K the negation of the formulae that are "free for negation" for K, i.e., the formulae that can be assumed to be false. In other words, the closed-world assumption applied to a knowledge base K generates the knowledge base K ∪ {¬f : f ∈ F}.
The set F of formulae that are free for negation in K can be defined in different ways, leading to different formalizations of the closed-world assumption. The following are the definitions of f being free for negation in the various formalizations.
The ECWA and the formalism of circumscription coincide on propositional theories.[5][6] The complexity of query answering (checking whether a formula is entailed by another one under the closed-world assumption) is typically in the second level of the polynomial hierarchy for general formulae, and ranges from P to coNP for Horn formulae. Checking whether the original closed-world assumption introduces an inconsistency requires at most a logarithmic number of calls to an NP oracle; however, the exact complexity of this problem is not currently known.[7]
In situations where it is not possible to assume a closed world for all predicates, yet some of them are known to be closed, the partial-closed world assumption can be used. This regime considers knowledge bases generally to be open, i.e., potentially incomplete, yet allows the use of completeness assertions to specify parts of the knowledge base that are closed.[8]
The language of logic programs with strong negation allows us to postulate the closed-world assumption for some statements and leave the other statements in the realm of the open-world assumption.[9] An intermediate ground between OWA and CWA is provided by the partial-closed world assumption (PCWA). Under the PCWA, the knowledge base is generally treated under open-world semantics, yet it is possible to assert parts that should be treated under closed-world semantics, via completeness assertions. The PCWA is especially needed for situations where the CWA is not applicable due to an open domain, yet the OWA is too credulous in allowing anything to be possibly true.[10][11]
|
https://en.wikipedia.org/wiki/Open-world_assumption
|
The bag-of-words (BoW) model is a model of text which uses an unordered collection (a "bag") of words. It is used in natural language processing and information retrieval (IR). It disregards word order (and thus most of syntax or grammar) but captures multiplicity.
The bag-of-words model is commonly used in methods of document classification where, for example, the (frequency of) occurrence of each word is used as a feature for training a classifier.[1] It has also been used for computer vision.[2]
An early reference to "bag of words" in a linguistic context can be found in Zellig Harris's 1954 article on Distributional Structure.[3]
The following models a text document using bag-of-words. Here are two simple text documents:
Based on these two text documents, a list is constructed as follows for each document:
Representing each bag-of-words as a JSON object, and attributing each to a respective JavaScript variable:
Each key is the word, and each value is the number of occurrences of that word in the given text document.
The order of elements is free, so, for example, {"too":1,"Mary":1,"movies":2,"John":1,"watch":1,"likes":2,"to":1} is also equivalent to BoW1. It is also what we expect from a strict JSON object representation.
Note: if another document is like a union of these two,
its JavaScript representation will be:
So, as we see in the bag algebra, the "union" of two documents in the bag-of-words representation is, formally, the disjoint union, summing the multiplicities of each element.
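In Python, `collections.Counter` implements exactly this bag algebra: counting tokens gives a bag-of-words, and adding two Counters sums multiplicities (the disjoint union above). The first document below is reconstructed from the BoW1 counts quoted earlier; the second document is a hypothetical example.

```python
# Bag-of-words via Counter; Counter addition is the multiplicity-summing
# "union" described above.
from collections import Counter

doc1 = "John likes to watch movies Mary likes movies too"   # matches BoW1
doc2 = "Mary also likes to watch football games"            # hypothetical

bow1 = Counter(doc1.split())
bow2 = Counter(doc2.split())
print(bow1["likes"], bow1["movies"])  # 2 2 -- as in BoW1 above

union = bow1 + bow2  # disjoint union of bags: multiplicities add
print(union["likes"], union["Mary"])  # 3 2
```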
The BoW representation of a text removes all word ordering. For example, the BoW representations of "man bites dog" and "dog bites man" are the same, so any algorithm that operates on a BoW representation of text must treat them in the same way. Despite this lack of syntax or grammar, the BoW representation is fast and may be sufficient for simple tasks that do not require word order. For instance, for document classification, if the words "stocks", "trade", "investors" appear multiple times, then the text is likely a financial report, even though this would be insufficient to distinguish between
Yesterday, investors were rallying, but today, they are retreating.
and
Yesterday, investors were retreating, but today, they are rallying.
and so the BoW representation would be insufficient to determine the detailed meaning of the document.
Implementations of the bag-of-words model might involve using frequencies of words in a document to represent its contents. The frequencies can be "normalized" by the inverse of document frequency, or tf–idf. Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document.[4] Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system).
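A minimal tf–idf sketch over a tiny hypothetical corpus, using one common variant (raw term frequency times the natural log of inverse document frequency); real implementations differ in smoothing and normalization details.

```python
# tf-idf: raw term frequency scaled by the (log) inverse document frequency.
import math
from collections import Counter

corpus = [
    "stocks trade investors stocks".split(),
    "investors were retreating today".split(),
    "the cat sat on the mat".split(),
]

def tf_idf(term, doc, corpus):
    tf = Counter(doc)[term]                           # raw count in this doc
    df = sum(1 for d in corpus if term in d)          # docs containing term
    idf = math.log(len(corpus) / df) if df else 0.0   # rarer -> larger idf
    return tf * idf

# "stocks" occurs twice in doc 0 and in no other document -> high weight.
print(round(tf_idf("stocks", corpus[0], corpus), 3))     # 2.197
# "investors" occurs in two of three documents -> lower idf, lower weight.
print(round(tf_idf("investors", corpus[0], corpus), 3))  # 0.405
```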
A common alternative to using dictionaries is the hashing trick, where words are mapped directly to indices with a hashing function.[5] Thus, no memory is required to store a dictionary. Hash collisions are typically dealt with by using freed-up memory to increase the number of hash buckets[clarification needed]. In practice, hashing simplifies the implementation of bag-of-words models and improves scalability.
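The hashing trick can be sketched in a few lines: each token hashes straight to a bucket index in a fixed-size vector, so no word-to-index dictionary is stored, at the cost of possible collisions. The bucket count and the toy hash function below are illustrative choices (Python's built-in `hash()` is salted per process, so a stable hand-rolled hash is used instead).

```python
# Hashing-trick bag-of-words: words map directly to vector indices.
N = 8  # number of hash buckets; deliberately tiny, so collisions are likely

def hashed_bow(tokens, n_buckets=N):
    vec = [0] * n_buckets
    for tok in tokens:
        # Stable toy polynomial hash over the token's characters.
        h = sum(ord(ch) * 31**i for i, ch in enumerate(tok))
        vec[h % n_buckets] += 1   # collisions just share a bucket
    return vec

vec = hashed_bow("john likes movies john".split())
print(sum(vec))  # 4 -- total token count is preserved even with collisions
```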
|
https://en.wikipedia.org/wiki/Bag_of_words_model
|
Aviation Information Data Exchange (AIDX) is the global XML messaging standard for exchanging flight data between airlines, airports, and any third party consuming the data. It is endorsed as a recommended standard by the International Air Transport Association (IATA) and the Airports Council International (ACI).
The development of AIDX began in 2005, and it launched in October 2008 as a combined effort of over 80 airlines, airports and vendors. To date, it consists of 180 distinct data elements, including flight identification, operational times, disruption details, resource requirements, passenger, baggage, fuel and cargo statistics, and aircraft details.[1] The goal of the project was to standardize information exchange and tackle problems of disruption for a variety of use cases.
|
https://en.wikipedia.org/wiki/AIDX
|
This is a list of notable XML markup languages.
|
https://en.wikipedia.org/wiki/List_of_XML_markup_languages
|
This is a list of notable XML schemas in use on the Internet, sorted by purpose. XML schemas can be used to create XML documents for a wide range of purposes such as syndication, general exchange, and storage of data in a standard format.
|
https://en.wikipedia.org/wiki/List_of_types_of_XML_schemas
|
In computer science, extensible programming is a style of computer programming that focuses on mechanisms to extend the programming language, compiler, and runtime system (environment). Extensible programming languages, supporting this style of programming, were an active area of work in the 1960s, but the movement was marginalized in the 1970s.[1] Extensible programming has become a topic of renewed interest in the 21st century.[2]
The first paper usually[1][3] associated with the extensible programming language movement is M. Douglas McIlroy's 1960 paper on macros for high-level programming languages.[4] Another early description of the principle of extensibility occurs in Brooker and Morris's 1960 paper on the compiler-compiler.[5] The peak of the movement was marked by two academic symposia, in 1969 and 1971.[6][7] By 1975, a survey article on the movement by Thomas A. Standish[1] was essentially a post mortem. The Forth language was an exception, but it went essentially unnoticed.
As typically envisioned, an extensible language consisted of a base language providing elementary computing facilities, and a metalanguage able to modify the base language. A program then consisted of metalanguage modifications and code in the modified base language.
The most prominent language-extension technique used in the movement was macro definition. Grammar modification was also closely associated with the movement, resulting in the eventual development of adaptive grammar formalisms. The Lisp language community remained separate from the extensible language community, apparently because, as one researcher observed,
any programming language in which programs and data are essentially interchangeable can be regarded as an extendible [sic] language. ... this can be seen very easily from the fact that Lisp has been used as an extendible language for years.[8]
At the 1969 conference, Simula was presented as an extensible language.

Standish described three classes of language extension, which he named paraphrase, orthophrase, and metaphrase (otherwise paraphrase and metaphrase being translation terms).

Standish attributed the failure of the extensibility movement to the difficulty of programming successive extensions. A programmer might build a first shell of macros around a base language. Then, if a second shell of macros is built around that, any subsequent programmer must be intimately familiar with both the base language and the first shell. A third shell would require familiarity with the base and both the first and second shells, and so on. Shielding a programmer from lower-level details is the intent of the abstraction movement that supplanted the extensibility movement.

Despite the earlier presentation of Simula as extensible, by 1975, Standish's survey does not seem in practice to have included the newer abstraction-based technologies (though he used a very general definition of extensibility that technically could have included them). A 1978 history of programming abstraction from the invention of the computer until then made no mention of macros, and gave no hint that the extensible languages movement had ever occurred.[9] Macros were tentatively admitted into the abstraction movement by the late 1980s (perhaps due to the advent of hygienic macros), by being granted the pseudonym syntactic abstractions.[10]

In the modern sense, a system that supports extensible programming will provide all of the features described below[citation needed].

This simply means that the source language(s) to be compiled must not be closed, fixed, or static. It must be possible to add new keywords, concepts, and structures to the source language(s). Languages which allow the addition of constructs with user-defined syntax include Coq,[11] Racket, Camlp4, OpenC++, Seed7,[12] Red, Rebol, and Felix. While it is acceptable for some fundamental and intrinsic language features to be immutable, the system must not rely solely on those language features. It must be possible to add new ones.

In extensible programming, a compiler is not a monolithic program that converts source code input into binary executable output. The compiler itself must be extensible to the point that it is really a collection of plugins that assist with the translation of source language input into anything. For example, an extensible compiler will support the generation of object code, code documentation, re-formatted source code, or any other desired output. The architecture of the compiler must permit its users to "get inside" the compilation process and provide alternative processing tasks at every reasonable step in the compilation process.
For just the task of translating source code into something that can be executed on a computer, an extensible compiler should:
At runtime, extensible programming systems must permit languages to extend the set of operations that they permit. For example, if the system uses a byte-code interpreter, it must allow new byte-code values to be defined. As with extensible syntax, it is acceptable for there to be some (smallish) set of fundamental or intrinsic operations that are immutable. However, it must be possible to overload or augment those intrinsic operations so that new or additional behavior can be supported.

Extensible programming systems should regard programs as data to be processed. Those programs should be completely devoid of any kind of formatting information. The visual display and editing of programs to users should be a translation function, supported by the extensible compiler, that translates the program data into forms more suitable for viewing or editing. Naturally, this should be a two-way translation. This is important because it must be possible to easily process extensible programs in a variety of ways. It is unacceptable for the only uses of source language input to be editing, viewing and translation to machine code. The arbitrary processing of programs is facilitated by de-coupling the source input from specifications of how it should be processed (formatted, stored, displayed, edited, etc.).

Extensible programming systems must support the debugging of programs using the constructs of the original source language regardless of the extensions or transformations the program has undergone in order to make it executable. Most notably, it cannot be assumed that the only way to display runtime data is in structures or arrays. The debugger, or more correctly 'program inspector', must permit the display of runtime data in forms suitable to the source language. For example, if the language supports a data structure for a business process or workflow, it must be possible for the debugger to display that data structure as a fishbone chart or other form provided by a plugin.
|
https://en.wikipedia.org/wiki/Extensible_programming
|
This is a comparison of data serialization formats, various ways to convert complex objects to sequences of bits. It does not include markup languages used exclusively as document file formats.
|
https://en.wikipedia.org/wiki/Comparison_of_data-serialization_formats
|
Various binary formats have been proposed as compact representations for XML (Extensible Markup Language).

Using a binary XML format generally reduces the verbosity of XML documents, thereby also reducing the cost of parsing,[1] but hinders the use of ordinary text editors and third-party tools to view and edit the document. There are several competing formats, but none has yet emerged as a de facto standard, although the World Wide Web Consortium adopted Efficient XML Interchange (EXI) as a Recommendation on 10 March 2011.[2]

Binary XML is typically used in applications where the performance of standard XML is insufficient, but the ability to convert the document to and from a form (XML) which is easily viewed and edited is valued. Other advantages may include enabling random access and indexing of XML documents.

The major challenge for binary XML is to create a single, widely adopted standard. The International Organization for Standardization (ISO) and the International Telecommunication Union (ITU) published the Fast Infoset standard in 2007 and 2005, respectively. Another standard (ISO/IEC 23001-1), known as Binary MPEG format for XML (BiM), was standardized by the ISO in 2001. BiM is used by many European Telecommunications Standards Institute (ETSI) standards for digital TV and mobile TV. The Open Geospatial Consortium provides a Binary XML Encoding Specification (currently a Best Practice Paper) optimized for geo-related data (GML) and also a benchmark to compare performance of Fast Infoset, EXI, BXML and deflate to encode/decode AIXM.[3]

Alternatives to binary XML include using traditional file compression methods on XML documents (for example gzip); or using an existing standard such as ASN.1. Traditional compression methods, however, offer only the advantage of reduced file size, without the advantage of decreased parsing time or random access. ASN.1/PER forms the basis of Fast Infoset, which is one binary XML standard. There are also hybrid approaches (e.g., VTD-XML) that attach a small index file to an XML document to eliminate the overhead of parsing.[4]
Projects and file formats using binary XML include:
Other projects that have functionality related to (or competing with) binary representations include:
|
https://en.wikipedia.org/wiki/Binary_XML
|
Extensible Binary Meta Language (EBML) is a generalized file format for any kind of data, aiming to be a binary equivalent to XML. It provides a basic framework for storing data in XML-like tags. It was originally designed as the framework language for the Matroska audio/video container format.[1][2][3]

EBML is not extensible in the same way that XML is, as the XML schema (e.g., DTD) must be known in advance.
|
https://en.wikipedia.org/wiki/EBML
|
WAP Binary XML (WBXML) is a binary representation of XML. It was developed by the WAP Forum and since 2002 has been maintained by the Open Mobile Alliance as a standard to allow XML documents to be transmitted in a compact manner over mobile networks, and was proposed as an addition to the World Wide Web Consortium's Wireless Application Protocol family of standards. The MIME media type application/vnd.wap.wbxml has been defined for documents that use WBXML.

WBXML is used by a number of mobile phones. Usage includes Exchange ActiveSync for synchronizing device settings, address book, calendar, notes and emails; SyncML for transmitting address book and calendar data; Wireless Markup Language; Wireless Village; OMA DRM for its rights language; and Over-the-air programming for sending network settings to a phone.
|
https://en.wikipedia.org/wiki/WBXML
|
The XML Protocol ("XMLP") is a standard being developed by the W3C XML Protocol Working Group according to the following guidelines, outlined in the group's charter:
Further, the protocol developed must meet the following requirements, as per the working group's charter:
|
https://en.wikipedia.org/wiki/XML_Protocol
|
Search engine indexing is the collecting, parsing, and storing of data to facilitate fast and accurate information retrieval. Index design incorporates interdisciplinary concepts from linguistics, cognitive psychology, mathematics, informatics, and computer science. An alternate name for the process, in the context of search engines designed to find web pages on the Internet, is web indexing.

Popular search engines focus on the full-text indexing of online, natural language documents.[1] Media types such as pictures, video, audio,[2] and graphics[3] are also searchable.

Meta search engines reuse the indices of other services and do not store a local index, whereas cache-based search engines permanently store the index along with the corpus. Unlike full-text indices, partial-text services restrict the depth indexed to reduce index size. Larger services typically perform indexing at a predetermined time interval due to the required time and processing costs, while agent-based search engines index in real time.

The purpose of storing an index is to optimize speed and performance in finding relevant documents for a search query. Without an index, the search engine would scan every document in the corpus, which would require considerable time and computing power. For example, while an index of 10,000 documents can be queried within milliseconds, a sequential scan of every word in 10,000 large documents could take hours. The additional computer storage required to store the index, as well as the considerable increase in the time required for an update to take place, are traded off for the time saved during information retrieval.
Major factors in designing a search engine's architecture include:
Search engine architectures vary in the way indexing is performed and in methods of index storage to meet the various design factors.
A major challenge in the design of search engines is the management of serial computing processes. There are many opportunities for race conditions and coherent faults. For example, a new document is added to the corpus and the index must be updated, but the index simultaneously needs to continue responding to search queries. This is a collision between two competing tasks. Consider that authors are producers of information, and a web crawler is the consumer of this information, grabbing the text and storing it in a cache (or corpus). The forward index is the consumer of the information produced by the corpus, and the inverted index is the consumer of information produced by the forward index. This is commonly referred to as a producer-consumer model. The indexer is the producer of searchable information and users are the consumers that need to search. The challenge is magnified when working with distributed storage and distributed processing. In an effort to scale with larger amounts of indexed information, the search engine's architecture may involve distributed computing, where the search engine consists of several machines operating in unison. This increases the possibilities for incoherency and makes it more difficult to maintain a fully synchronized, distributed, parallel architecture.[13]

Many search engines incorporate an inverted index when evaluating a search query to quickly locate documents containing the words in a query and then rank these documents by relevance. Because the inverted index stores a list of the documents containing each word, the search engine can use direct access to find the documents associated with each word in the query in order to retrieve the matching documents quickly. The following is a simplified illustration of an inverted index:
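Such a Boolean inverted index can be sketched in a few lines of Python. The documents below are hypothetical and the names are illustrative, not taken from any particular engine:

```python
# Minimal sketch of a Boolean inverted index: each word maps to the
# set of document IDs that contain it.
from collections import defaultdict

docs = {
    0: "it is what it is",
    1: "what is it",
    2: "it is a banana",
}

inverted = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        inverted[word].add(doc_id)

def search(query):
    """Conjunctive (AND) query: intersect the postings of each word."""
    word_sets = [inverted.get(w, set()) for w in query.split()]
    return set.intersection(*word_sets) if word_sets else set()
```

A multi-word query thus reduces to a set intersection over the postings of its words, which is why direct access to each word's document list makes retrieval fast.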
This index can only determine whether a word exists within a particular document, since it stores no information regarding the frequency and position of the word; it is therefore considered to be a Boolean index. Such an index determines which documents match a query but does not rank matched documents. In some designs the index includes additional information such as the frequency of each word in each document or the positions of a word in each document.[14] Position information enables the search algorithm to identify word proximity to support searching for phrases; frequency can be used to help in ranking the relevance of documents to the query. Such topics are the central research focus of information retrieval.

The inverted index is a sparse matrix, since not all words are present in each document. To reduce computer storage memory requirements, it is stored differently from a two-dimensional array. The index is similar to the term document matrices employed by latent semantic analysis. The inverted index can be considered a form of a hash table. In some cases the index is a form of a binary tree, which requires additional storage but may reduce the lookup time. In larger indices the architecture is typically a distributed hash table.[15]

For phrase searching, a specialized form of an inverted index called a positional index is used. A positional index not only stores the ID of the document containing the token but also the exact position(s) of the token within the document in the postings list. The occurrences of the phrase specified in the query are retrieved by navigating these postings lists and identifying the indexes at which the desired terms occur in the expected order (the same as the order in the phrase). So if we are searching for occurrences of the phrase "First Witch", we would:
The postings lists can be navigated using a binary search in order to minimize the time complexity of this procedure.[16]
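The procedure can be sketched as follows. This hypothetical example scans the postings lists linearly rather than with a binary search, for brevity, and the documents are invented for illustration:

```python
# Sketch of a positional index: each word maps to {doc_id: [positions]}.
from collections import defaultdict

docs = {
    0: "enter first witch and second witch",
    1: "the first of the witch scenes",
}

positional = defaultdict(dict)
for doc_id, text in docs.items():
    for pos, word in enumerate(text.split()):
        positional[word].setdefault(doc_id, []).append(pos)

def phrase_search(phrase):
    """Return the documents where the words occur at consecutive positions."""
    words = phrase.split()
    hits = set()
    for doc_id, positions in positional.get(words[0], {}).items():
        for p in positions:
            # A match requires each following word at the next position
            # in the same document.
            if all(p + i in positional.get(w, {}).get(doc_id, [])
                   for i, w in enumerate(words[1:], start=1)):
                hits.add(doc_id)
    return hits
```

A production engine would navigate the (sorted) postings lists with binary search, as described above, instead of testing membership linearly.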
The inverted index is filled via a merge or rebuild. A rebuild is similar to a merge but first deletes the contents of the inverted index. The architecture may be designed to support incremental indexing,[17]where a merge identifies the document or documents to be added or updated and then parses each document into words. For technical accuracy, a merge conflates newly indexed documents, typically residing in virtual memory, with the index cache residing on one or more computer hard drives.
After parsing, the indexer adds the referenced document to the document list for the appropriate words. In a larger search engine, the process of finding each word in the inverted index (in order to report that it occurred within a document) may be too time consuming, and so this process is commonly split up into two parts, the development of a forward index and a process which sorts the contents of the forward index into the inverted index. The inverted index is so named because it is an inversion of the forward index.
The forward index stores a list of words for each document. The following is a simplified form of the forward index:
The rationale behind developing a forward index is that as documents are parsed, it is better to intermediately store the words per document. The delineation enables asynchronous system processing, which partially circumvents the inverted index update bottleneck.[18] The forward index is sorted to transform it to an inverted index. The forward index is essentially a list of pairs consisting of a document and a word, collated by the document. Converting the forward index to an inverted index is only a matter of sorting the pairs by the words. In this regard, the inverted index is a word-sorted forward index.
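The pair-sorting conversion described above can be illustrated with a small sketch (the documents are hypothetical):

```python
# The forward index as a list of (document, word) pairs, collated by
# document, as produced during parsing.
forward = [
    (0, "the"), (0, "cow"), (0, "says"), (0, "moo"),
    (1, "the"), (1, "cat"), (1, "says"), (1, "meow"),
]

# Re-pair as (word, document) and sort, so that all postings for a
# given word become adjacent -- the "word-sorted forward index".
pairs = sorted((word, doc_id) for doc_id, word in forward)

inverted = {}
for word, doc_id in pairs:
    inverted.setdefault(word, []).append(doc_id)
```

After the sort, building each word's document list is a single linear pass, which is why large engines prefer this two-phase approach over updating the inverted index word by word.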
Generating or maintaining a large-scale search engine index represents a significant storage and processing challenge. Many search engines utilize a form of compression to reduce the size of the indices on disk.[19] Consider the following scenario for a full text, Internet search engine.
Given this scenario, an uncompressed index (assuming a non-conflated, simple, index) for 2 billion web pages would need to store 500 billion word entries. At 1 byte per character, or 5 bytes per word, this would require 2500 gigabytes of storage space alone.[citation needed]This space requirement may be even larger for a fault-tolerant distributed storage architecture. Depending on the compression technique chosen, the index can be reduced to a fraction of this size. The tradeoff is the time and processing power required to perform compression and decompression.[citation needed]
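The storage arithmetic above can be checked directly (using decimal gigabytes, as the figure does):

```python
# 500 billion word entries at roughly 5 bytes per word
# (about 1 byte per character, 5 characters per average word).
entries = 500 * 10**9
bytes_per_entry = 5
total_gb = entries * bytes_per_entry / 10**9  # 2500 GB
```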
Notably, large scale search engine designs incorporate the cost of storage as well as the costs of electricity to power the storage. Thus compression is a measure of cost.[citation needed]
Document parsing breaks apart the components (words) of a document or other form of media for insertion into the forward and inverted indices. The words found are called tokens, and so, in the context of search engine indexing and natural language processing, parsing is more commonly referred to as tokenization. It is also sometimes called word boundary disambiguation, tagging, text segmentation, content analysis, text analysis, text mining, concordance generation, speech segmentation, lexing, or lexical analysis. The terms 'indexing', 'parsing', and 'tokenization' are used interchangeably in corporate slang.

Natural language processing is the subject of continuous research and technological improvement. Tokenization presents many challenges in extracting the necessary information from documents for indexing to support quality searching. Tokenization for indexing involves multiple technologies, the implementations of which are commonly kept as corporate secrets.[citation needed]

Unlike literate humans, computers do not understand the structure of a natural language document and cannot automatically recognize words and sentences. To a computer, a document is only a sequence of bytes. Computers do not 'know' that a space character separates words in a document. Instead, humans must program the computer to identify what constitutes an individual or distinct word, referred to as a token. Such a program is commonly called a tokenizer or parser or lexer. Many search engines, as well as other natural language processing software, incorporate specialized programs for parsing, such as YACC or Lex.

During tokenization, the parser identifies sequences of characters that represent words and other elements, such as punctuation, which are represented by numeric codes, some of which are non-printing control characters. The parser can also identify entities such as email addresses, phone numbers, and URLs. When identifying each token, several characteristics may be stored, such as the token's case (upper, lower, mixed, proper), language or encoding, lexical category (part of speech, like 'noun' or 'verb'), position, sentence number, sentence position, length, and line number.
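As a rough sketch, a regular-expression tokenizer recording a few of these characteristics might look like this. The characteristics captured here (original text, lowercased form, and character position) are a small illustrative subset of those listed above:

```python
# Minimal tokenizer sketch: words are runs of word characters,
# punctuation marks are single non-space, non-word characters.
import re

TOKEN_RE = re.compile(r"\w+|[^\w\s]")

def tokenize(text):
    """Return (original, lowercased, start_position) for each token."""
    return [(m.group(), m.group().lower(), m.start())
            for m in TOKEN_RE.finditer(text)]
```

Real tokenizers additionally classify case, detect entities such as URLs, and track sentence boundaries, which a single regular expression cannot do reliably.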
If the search engine supports multiple languages, a common initial step during tokenization is to identify each document's language; many of the subsequent steps are language dependent (such as stemming and part of speech tagging). Language recognition is the process by which a computer program attempts to automatically identify, or categorize, the language of a document. Other names for language recognition include language classification, language analysis, language identification, and language tagging. Automated language recognition is the subject of ongoing research in natural language processing. Finding which language the words belong to may involve the use of a language recognition chart.

If the search engine supports multiple document formats, documents must be prepared for tokenization. The challenge is that many document formats contain formatting information in addition to textual content. For example, HTML documents contain HTML tags, which specify formatting information such as new line starts, bold emphasis, and font size or style. If the search engine were to ignore the difference between content and 'markup', extraneous information would be included in the index, leading to poor search results. Format analysis is the identification and handling of the formatting content embedded within documents which controls the way the document is rendered on a computer screen or interpreted by a software program. Format analysis is also referred to as structure analysis, format parsing, tag stripping, format stripping, text normalization, text cleaning and text preparation. The challenge of format analysis is further complicated by the intricacies of various file formats. Certain file formats are proprietary with very little information disclosed, while others are well documented. Common, well-documented file formats that many search engines support include:
Options for dealing with various formats include using a publicly available commercial parsing tool that is offered by the organization which developed, maintains, or owns the format, and writing a custom parser.
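As an illustration of tag stripping for the HTML case, a naive custom parser can be built on Python's standard-library html.parser, keeping only text content so that tag names and attributes never enter the index. This is a sketch, not how any production engine handles format analysis:

```python
# Naive tag stripping: collect only the text nodes of an HTML document.
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Called only for text content, never for tags or attributes.
        self.chunks.append(data)

def strip_tags(html):
    stripper = TagStripper()
    stripper.feed(html)
    return "".join(stripper.chunks)
```

Production indexers go much further, since (as discussed below) markup also carries structural hints such as emphasis and section boundaries that can inform ranking.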
Some search engines support inspection of files that are stored in a compressed or encrypted file format. When working with a compressed format, the indexer first decompresses the document; this step may result in one or more files, each of which must be indexed separately. Commonly supported compressed file formats include:

Format analysis can involve quality improvement methods to avoid including 'bad information' in the index. Content can manipulate the formatting information to include additional content. Examples of abusing document formatting for spamdexing:

Some search engines incorporate section recognition, the identification of major parts of a document, prior to tokenization. Not all the documents in a corpus read like a well-written book, divided into organized chapters and pages. Many documents on the web, such as newsletters and corporate reports, contain erroneous content and side-sections that do not contain primary material (that which the document is about). For example, articles on the Wikipedia website display a side menu with links to other web pages. Some file formats, like HTML or PDF, allow for content to be displayed in columns. Even though the content is displayed, or rendered, in different areas of the view, the raw markup content may store this information sequentially. Words that appear sequentially in the raw source content are indexed sequentially, even though these sentences and paragraphs are rendered in different parts of the computer screen. If search engines index this content as if it were normal content, the quality of the index and search quality may be degraded due to the mixed content and improper word proximity. Two primary problems are noted:

Section analysis may require the search engine to implement the rendering logic of each document, essentially an abstract representation of the actual document, and then index the representation instead. For example, some content on the Internet is rendered via JavaScript. If the search engine does not render the page and evaluate the JavaScript within the page, it would not 'see' this content in the same way and would index the document incorrectly. Given that some search engines do not bother with rendering issues, many web page designers avoid displaying content via JavaScript or use the Noscript tag to ensure that the web page is indexed properly. At the same time, this fact can also be exploited to cause the search engine indexer to 'see' different content than the viewer.
Indexing often has to recognize HTML tags in order to organize priority. Indexers may assign weights to markup such as strong and link tags, but tags appearing only at the beginning of the text may not prove relevant to the document as a whole. Some indexers, like Google and Bing, ensure that the search engine does not treat large stretches of text as a relevant source purely because of such markup.[22]
Meta tag indexing plays an important role in organizing and categorizing web content. Specific documents often contain embedded meta information such as author, keywords, description, and language. For HTML pages, the meta tag contains keywords which are also included in the index. Earlier Internet search engine technology would only index the keywords in the meta tags for the forward index; the full document would not be parsed. At that time full-text indexing was not as well established, nor was computer hardware able to support such technology. The design of the HTML markup language initially included support for meta tags for the very purpose of being properly and easily indexed, without requiring tokenization.[23]

As the Internet grew through the 1990s, many brick-and-mortar corporations went 'online' and established corporate websites. The keywords used to describe webpages (many of which were corporate-oriented webpages similar to product brochures) changed from descriptive to marketing-oriented keywords designed to drive sales by placing the webpage high in the search results for specific search queries. The fact that these keywords were subjectively specified was leading to spamdexing, which drove many search engines to adopt full-text indexing technologies in the 1990s. Search engine designers and companies could only place so many 'marketing keywords' into the content of a webpage before draining it of all interesting and useful information. Given that conflict of interest with the business goal of designing user-oriented websites which were 'sticky', the customer lifetime value equation was changed to incorporate more useful content into the website in hopes of retaining the visitor. In this sense, full-text indexing was more objective and increased the quality of search engine results, as it was one more step away from subjective control of search engine result placement, which in turn furthered research of full-text indexing technologies.

In desktop search, many solutions incorporate meta tags to provide a way for authors to further customize how the search engine will index content from various files that is not evident from the file content. Desktop search is more under the control of the user, while Internet search engines must focus more on the full text index.
|
https://en.wikipedia.org/wiki/Index_(search_engine)
|
Database management systems provide multiple types of indexes to improve performance and data integrity across diverse applications. Index types include b-trees, bitmaps, and r-trees.

In database management systems, a reverse key index strategy reverses the key value before entering it in the index.[1] E.g., the value 24538 becomes 83542 in the index. Reversing the key value is particularly useful for indexing data such as sequence numbers, where each new key value is greater than the prior value, i.e., values monotonically increase. Reverse key indexes have become particularly important in high-volume transaction processing systems because they reduce contention for index blocks.

Reversed key indexes use b-tree structures, but preprocess key values before inserting them. Simplifying, b-trees place similar values on a single index block, e.g., storing 24538 on the same block as 24539. This makes them efficient both for looking up a specific value and for finding values within a range. However, if the application inserts values in sequence, each insert must have access to the newest block in the index in order to add the new value. If many users attempt to insert at the same time, they all must write to that block and have to get in line, slowing down the application. This is particularly a problem in clustered databases, which may require the block to be copied from one computer's memory to another's to allow the next user to perform their insert.

Reversing the key spreads similar new values across the entire index instead of concentrating them in any one leaf block. This means that 24538 appears on the same block as 14538 while 24539 goes to a different block, eliminating this cause of contention. (Since 14538 would have been created long before 24538, their inserts don't interfere with each other.)
Reverse indexes are just as efficient as unreversed indexes for finding specific values, although they aren't helpful for range queries. Range queries are uncommon for artificial values such as sequence numbers. When searching the index, the query processor simply reverses the search target before looking it up.
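The scheme can be sketched in a few lines; here a plain dictionary stands in for the b-tree, and the key values are those used above:

```python
# Reverse-key sketch: reverse the digits of a key before insertion so
# consecutive sequence numbers land far apart in key order, then
# reverse the search target again at lookup time.
def reverse_key(key):
    return str(key)[::-1]

index = {}  # stand-in for the b-tree structure

for seq in (24538, 24539, 14538):
    index[reverse_key(seq)] = seq

def lookup(key):
    return index.get(reverse_key(key))
```

Note that the reversed forms of 24538 and 14538 ("83542" and "83541") are adjacent in key order, while 24539 ("93542") is far away, matching the block-placement behavior described above.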
Typically, applications delete data that is older on average before deleting newer data. Thus, data with lower sequence numbers generally go before those with higher values. As time passes, in standard b-trees, index blocks for lower values end up containing few values, with a commensurate increase in unused space, referred to as "rot". Rot not only wastes space, but slows query speeds, because a smaller fraction of a rotten index's blocks fit in memory at any one time. In a b-tree, if 14538 gets deleted, its index space remains empty. In a reverse index, if 14538 goes before 24538 arrives, 24538 can reuse 14538's space.[citation needed]
|
https://en.wikipedia.org/wiki/Reverse_index
|
Attention Profiling Mark-up Language (APML) is an XML-based markup language for documenting a person's interests and dislikes.

APML allows people to share their own personal attention profile in much the same way that OPML allows the exchange of reading lists between news readers. The idea behind APML is to compress all forms of attention data into a portable file format containing a description of the user's rated interests.
The APML Workgroup is tasked with maintaining and refining the APML specification. The APML Workgroup is made up of industry experts and leaders and was founded by Chris Saad and Ashley Angell.[1]The workgroup allows public recommendations and input, and actively evangelises the public's "Attention Rights". The workgroup also adheres to the principles of Media 2.0 Best Practices.[clarification needed]
Services that have adopted APML
|
https://en.wikipedia.org/wiki/Attention_Profiling_Mark-up_Language
|
Cold startincomputingrefers to a problem where a system or its part was created or restarted and is not working at its normal operation. The problem can be related to initialising internalobjectsor populatingcacheor starting up subsystems.
In a typical web service system the problem occurs after restarting the server, and also when clearing the cache (e.g., after releasing a new version). The first requests to the web service cause significantly more load while the server cache is being populated, the browser cache has been cleared, and new resources are being requested. Other services such as a caching proxy or web accelerator also need time to gather new resources before operating normally.
A similar problem occurs when creating instances in a hosted environment or in cloud computing services.[1]
Cold start (or cold boot) may also refer to the booting process of a single computer (or virtual machine).[2]In this case services and other startup applications are executed after reboot. The system is typically made available to the user even while startup operations are still being performed, which slows down other operations.
Another type of problem arises when the data model of a particular system requires connections between objects. In that case new objects will not operate normally until those connections are made. This is a well-known problem with recommender systems.[3][4]
In some machine learning scenarios, with models whose training dataset is incrementally extended over time (e.g. in active learning), cold start refers to training the model from scratch on the labeled pool obtained so far, including the newly added data, instead of training the model on the new data while retaining the knowledge from previous trainings (warm start).[5]Unlike the previously mentioned instances, cold starting in these scenarios can yield better model results.
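The cache-population aspect of a cold start can be illustrated with a minimal read-through cache, a generic sketch not tied to any particular server or framework:

```python
class ReadThroughCache:
    """Minimal read-through cache illustrating the cold-start effect:
    the first request for each key pays the full backend cost; once
    the cache is populated the service reaches normal latency."""

    def __init__(self, backend):
        self.backend = backend   # slow source of truth (DB, upstream API)
        self.store = {}
        self.misses = 0

    def get(self, key):
        if key not in self.store:      # cold path: must hit the backend
            self.misses += 1
            self.store[key] = self.backend(key)
        return self.store[key]         # warm path: served from memory

cache = ReadThroughCache(lambda key: f"value-for-{key}")
cache.get("home")   # cold start: triggers a backend call
cache.get("home")   # warm: answered from the cache
```

Restarting the server (or releasing a new version) empties `store`, so the next burst of requests all take the cold path at once, which is exactly the load spike described above.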
|
https://en.wikipedia.org/wiki/Cold_start_(computing)
|
In psycholinguistics, the collaborative model (or conversational model) is a theory explaining how speaking and understanding work in conversation, specifically how people in conversation coordinate to determine definite references.
The model was initially proposed in 1986 by psycholinguists Herb Clark and Deanna Wilkes-Gibbs.[1]It asserts that conversation partners must act collaboratively to reach a mutual understanding – i.e. the speaker must tailor their utterances to better suit the listener, and the listener must indicate to the speaker that they have understood.
In this ongoing process, both conversation partners must work together to establish what a given noun phrase is referring to. The referential process can be initiated by the speaker using one of at least six types of noun phrases: the elementary noun phrase, the episodic noun phrase, the installment noun phrase, the provisional noun phrase, the dummy noun phrase, and/or the proxy noun phrase.
Once this presentation is made, the listener must accept it either through presupposing acceptance (i.e. letting the speaker continue uninterrupted) or asserting acceptance (i.e. through a continuer such as "yes", "okay", or a head nod). The speaker must then acknowledge this signal of acceptance. In this process, presentation and acceptance go back and forth, and some utterances can simultaneously be both presentations and acceptances. This model also posits that conversationalists strive for minimum collaborative effort by making references based more on permanent properties than temporary properties and by refining perspective on referents through simplification and narrowing.
The collaborative model finds its roots in Grice's cooperative principle and four Gricean maxims, theories which prominently established the idea that conversation is a collaborative process between speaker and listener.
However, until the Clark & Wilkes-Gibbs study, the prevailing theory was the literary model (or autonomous model or traditional model). This model likened the process of a speaker establishing reference to an author writing a book to distant readers. In the literary model, the speaker is the one who retains complete control and responsibility over the course of referent determination. The listener, in this theory, simply hears and understands the definite description as if they were reading it and, if successful, figures out the identity of the referent on their own.
This autonomous view of reference establishment was not challenged until a paper by D. R. Olson was published in 1970.[2]It was then suggested that there could very well be a collaborative element in the process of establishing reference. Olson, while still holding to the literary model, suggested that speakers select the words they do based on context and what they believe the listener will understand.
Clark and Wilkes-Gibbs criticized the literary model in their 1986 paper; they asserted that the model failed to account for the dynamic nature of verbal conversations.
In the same paper, they proposed the Collaborative Model as an alternative. They believed this model was more able to explain the aforementioned features of conversation. They had conducted an experiment to support this theory and also to further determine how the acceptance process worked.
The experiment consisted of two participants seated at tables separated by an opaque screen. On the tables in front of each participant were a series of Tangram figures arranged in different orders. One participant, called the director, was tasked with getting the other participant, called the matcher, to accurately match his configuration of figures through conversation alone. This process was to be repeated five additional times by the same individuals, playing the same roles.
The collaborative model they proposed allowed them to make several predictions about what would happen. They predicted that it would require many more words to establish reference the first time, as the participants would need to use non-standard noun phrases which would make it difficult to determine which figures were being talked about. However, they hypothesized that later references to the same figures would take fewer words and a shorter amount of time, because by this point definite reference would have been mutually established, and also because the subjects would be able to rely on established standard noun phrases.
The results of the study confirmed many of their beliefs, and outlined some of the processes of collaborative reference, including establishing the types of noun phrases used in presentation, and their frequency.
The following actions were observed in participants working towards mutual acceptance of a reference:
Grounding is the final stage in the collaborative process. The concept was proposed by Herbert H. Clark and Susan E. Brennan in 1991.[3]It comprises the collection of "mutual knowledge, mutual beliefs, and mutual assumptions" that is essential for communication between two people. Successful grounding in communication requires parties "to coordinate both the content and process".
The parties engaging in grounding exchange information about what they do or do not understand over the course of a communication, and they continue to clarify concepts until they have agreed upon a grounding criterion. There are generally two phases in grounding:
Subsequent studies affirmed many of Clark and Wilkes-Gibbs' theories. These included a study by Clark and Michael Schober in 1989[4]that dealt with overhearers, contrasting how well they understand compared to direct addressees. In the literary model, overhearers would be expected to understand as well as addressees, while in the collaborative model, overhearers would be expected to do worse, since they are not part of the collaborative process and the speaker is not concerned with making sure anyone but the addressee understands.
The study conducted by the pair mimicked the Clark/Wilkes-Gibbs study, but included a silent overhearer as part of the process. The speaker and addressee were allowed to converse, while the overhearer attempted to arrange his figures according to what the speaker was saying. In one version of this study, overhearers had access to a tape recording of the speaker's directions, while in another they all simply sat in the same room.
The study found that overhearers had significantly more difficulty than addressees in both experiments, thereby, according to the researchers, lending credence to the collaborative model.
The literary model described above still stands as a directly opposing viewpoint to the collaborative model. Subsequent studies also sought to point out weaknesses in the theory. One study, by Brown and Dell, took issue with the aspect of the theory suggesting that speakers have particular listeners in mind when determining reference. Instead, they suggested, speakers have generic listeners in mind. This egocentric theory proposed that people's estimates of another's knowledge are biased towards their own, and that early syntactic choices may be made without regard to the addressees' needs, while beliefs about the addressees' knowledge do not affect utterance choices until later on, usually in the form of repairs.
Another study, in 2002 by Barr and Keysar,[5]also criticized the particular-listener view and partner-specific reference. In the experiment, addressees and speakers established definite references for a series of objects on a wall. Then another speaker entered, using the same references. The theory was that, if the partner-specific view of establishing reference was correct, the addressee would be slower to identify objects (as measured by eye movement) out of confusion, because the reference used had been established with another speaker. They found this not to be the case; in fact, reaction times were similar.
|
https://en.wikipedia.org/wiki/Collaborative_model
|
Collaborative search engines (CSEs) are web search engines and enterprise searches within company intranets that let users combine their efforts in information retrieval (IR) activities, share information resources collaboratively using knowledge tags, and allow experts to guide less experienced people through their searches. Collaboration partners do so by providing query terms, collective tagging, adding comments or opinions, rating search results, and sharing links clicked during former (successful) IR activities with users having the same or a related information need.
Collaborative search engines can be classified along several dimensions: intent (explicit and implicit) and synchronization,[1]depth of mediation,[2]task vs. trait,[3]division of labor, and sharing of knowledge.[4]
Implicit collaboration characterizes collaborative filtering and recommendation systems, in which the system infers similar information needs. I-Spy,[5]Jumper 2.0, Seeks, the Community Search Assistant,[6]the CSE of Burghardt et al.,[7]and the works of Longo et al.[8][9][10]all represent examples of implicit collaboration. Systems in this category automatically identify similar users, queries and links clicked, and recommend related queries and links to the searchers.
Explicit collaboration means that users share an agreed-upon information need and work together toward that goal. For example, in a chat-like application, query terms and links clicked are automatically exchanged. The most prominent example of this class is SearchTogether,[11]published in 2007. SearchTogether offers an interface that combines search results from standard search engines with a chat to exchange queries and links. PlayByPlay[12]takes a step further to support general-purpose collaborative browsing tasks with an instant messaging functionality. Reddy et al.[13]follow a similar approach and compare two implementations of their CSE, called MUSE and MUST, focusing on the role of communication required for efficient CSEs. Cerchiamo[2]supports explicit collaboration by allowing one person to concentrate on finding promising groups of documents while the other person makes in-depth judgments of relevance on documents found by the first.
However, Papagelis et al.[14]use the terms differently: they combine explicitly shared links and implicitly collected browsing histories of users into a hybrid CSE.
Recent work in collaborative filtering and information retrieval has shown that sharing of search experiences among users having similar interests, typically called a community of practice or community of interest, reduces the effort put in by a given user in retrieving the exact information of interest.[15]
Collaborative search deployed within a community of practice uses novel techniques for exploiting context during search by indexing and ranking search results based on the learned preferences of a community of users.[16]The users benefit by sharing information, experiences and awareness to personalize result lists to reflect the preferences of the community as a whole, the community being a group of users who share common interests or similar professions. The best-known example is the open-source project ApexKB (previously known as Jumper 2.0).[17]
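A highly simplified sketch of such community-based ranking might re-order a baseline result list by the community's past click counts (illustrative only; real systems use more elaborate relevance models):

```python
from collections import Counter

def community_rerank(results, community_clicks):
    """Re-rank a baseline result list by how often members of the
    community previously selected each URL for similar queries.
    A simplified sketch of community-based ranking, not any specific
    engine's algorithm."""
    clicks = Counter(community_clicks)
    # Stable sort: community favourites float up; ties keep the
    # baseline engine's order.
    return sorted(results, key=lambda url: -clicks[url])

baseline = ["a.example", "b.example", "c.example"]
history = ["c.example", "c.example", "b.example"]  # hypothetical click log
```

With no community history the baseline order is preserved, so the community signal only ever refines, never replaces, the underlying engine's ranking.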
The depth of mediation refers to the degree to which the CSE mediates search.[2]SearchTogether[11]is an example of UI-level mediation: users exchange query results and judgments of relevance, but the system does not distinguish among users when they run queries. PlayByPlay[12]is another example of UI-level mediation, where all users have full and equal access to the instant messaging functionality without the system's coordination. Cerchiamo[2]and recommendation systems such as I-Spy[5]keep track of each person's search activity independently and use that information to affect their search results. These are examples of deeper algorithmic mediation.
This model classifies people's membership in groups based on the task at hand vs. long-term interests; these may be correlated with explicit and implicit collaboration.[3]
CSE systems started off on the desktop, with the earliest ones being extensions or modifications of existing web browsers. GroupWeb[18]is a desktop web browser that offers a shared visual workspace for a group of users. SearchTogether[11]is a desktop application that combines search results from standard search engines with a chat interface for users to exchange queries and links. CoSense[19]supports sensemaking tasks in collaborative Web search by offering rich and interactive presentations of a group's search activities.
With the prevalence of mobile phones and tablets, CSEs are also taking advantage of these additional device modalities. CoSearch[20]is a system that supports co-located collaborative web search by leveraging extra mobile phones and mice. PlayByPlay[12]also supports collaborative browsing between mobile and desktop users.
The synchronous collaboration model enables different users to work toward the same goal simultaneously, with each user having access to the others' progress in real time. A typical example of the synchronous collaboration model is GroupWeb,[18]where users are made aware of what others are doing through features such as synchronous scrolling of pages, telepointers for enacting gestures, and group annotations attached to web pages.
Asynchronous collaboration models offer more flexibility in when different users' search processes are carried out, while reducing the cognitive effort for later users to consume and build upon previous users' search results. SearchTogether,[11]for example, supports asynchronous collaboration by persisting previous users' chat logs, search queries, and web browsing histories so that later users can quickly bring themselves up to speed.
The applications of CSEs are well explored in both the academic community and industry. For example, GroupWeb[18]was used as a presentation tool for real-time distance education and conferences. ClassSearch[21]is deployed in middle-school classroom sessions to facilitate collaborative search activities in classrooms and study the space of co-located search pedagogies.
Search terms and links clicked that are shared among users reveal their interests, habits, social relations and intentions.[22]In other words, CSEs put the privacy of their users at risk. Studies have shown that CSEs increase efficiency.[11][23][24][25]Unfortunately, for lack of privacy-enhancing technologies, a privacy-aware user who wants to benefit from a CSE has to disclose their entire search log. (Note that even when explicitly sharing queries and links clicked, the whole former log is disclosed to any user who joins a search session.) Thus, more sophisticated mechanisms that control at a finer granularity which information is disclosed to whom are desirable.
As CSEs are a new technology just entering the market, identifying user privacy preferences and integrating privacy-enhancing technologies (PETs) into collaborative search are in conflict: on the one hand, PETs have to meet user preferences; on the other hand, one cannot identify these preferences without using a CSE, i.e., without implementing PETs in CSEs. Today, the only work addressing this problem comes from Burghardt et al.[26]They implemented a CSE with experts from the information-systems domain and derived the scope of possible privacy preferences in a user study with these experts. Results show that users define preferences referring to (i) their current context (e.g., being at work), (ii) the query content (e.g., users exclude topics from sharing), and (iii) time constraints (e.g., do not publish a query until X hours after it has been issued, do not store it longer than X days, do not share it during working time), and that users intensively use the option to (iv) distinguish between different social groups when sharing information. Further, users require (v) anonymization and (vi) define reciprocal constraints, i.e., constraints referring to the behavior of other users, e.g., whether a user would have shared the same query in turn.
|
https://en.wikipedia.org/wiki/Collaborative_search_engine
|
Customer engagement is an interaction between an external consumer/customer (either B2C or B2B) and an organization (company or brand) through various online or offline channels.[citation needed]According to Hollebeek, Srivastava and Chen, customer engagement is "a customer’s motivationally driven, volitional investment of operant resources (including cognitive, emotional, behavioral, and social knowledge and skills), and operand resources (e.g., equipment) into brand interactions," which applies to online and offline engagement.[1]
Online customer engagement is qualitatively different from offline engagement, as the nature of the customer's interactions with a brand, company and other customers differ on the internet. Discussion forums or blogs, for example, are spaces where people can communicate and socialize in ways that cannot be replicated by any offline interactive medium. Online customer engagement is a social phenomenon that became mainstream with the wide adoption of the internet in the late 1990s, which has expanded the technical developments in broadband speed, connectivity and social media. These factors enable customers to regularly engage in online communities revolving, directly or indirectly, around product categories and other consumption topics. This process often leads to positive engagement with the company or offering, as well as the behaviors associated with different degrees of customer engagement.[citation needed]
Marketing practices aim to create, stimulate or influence customer behaviour, which places conversions into a more strategic context and is premised on the understanding that a focus on maximising conversions can, in some circumstances, decrease the likelihood of repeat conversions.[2]Although customer advocacy has always been a goal for marketers, the rise of online user-generated content has directly influenced levels of advocacy. Customer engagement targets long-term interactions, encouraging customer loyalty and advocacy through word-of-mouth. Although customer engagement marketing is consistent both online and offline, the internet is the basis for marketing efforts.[2]
In March 2006, the Advertising Research Foundation announced the first definition of customer engagement[3]as "turning on a prospect to a brand idea enhanced by the surrounding context." However, the ARF definition was criticized by some, including the World Federation of Advertisers, for being too broad.[4][5]Various definitions have since translated different aspects of customer engagement. Forrester Consulting's research in 2008 defined customer engagement as "creating deep connections with customers that drive purchase decisions, interaction, and participation, over time". Studies by the Economist Intelligence Unit define customer engagement as "an intimate long-term relationship with the customer". Both of these concepts prescribe that customer engagement is attributed to a rich association formed with customers. Drawing on relationship marketing and service-dominant perspectives, customer engagement can be loosely defined as "consumers' proactive contributions in co-creating their personalized experiences and perceived value with organizations through active, explicit, and ongoing dialogue and interactions". The book Best Digital Marketing Campaigns In The World defines customer engagement as "mutually beneficial relationships with a constantly growing community of online consumers". The various definitions of customer engagement are diversified by different perspectives and contexts of the engagement process. These are determined by the brand, product, or service, the audience profile, attitudes and behaviours, and the messages and channels of communication used to interact with the customer.
Since 2009, a number of new definitions have been proposed in the literature. In 2011, the term was defined as "the level of a customer’s cognitive, emotional and behavioral investment in specific brand interactions," identifying the three CE dimensions of immersion (cognitive), passion (emotional) and activation (behavioral).[6]It was also defined as "a psychological state that occurs by virtue of interactive, co-creative customer experiences with a particular agent/object (e.g. a brand)".[7]Researchers have based their work on customer engagement as a multi-dimensional construct, while also identifying that it is context-dependent. Engagement is manifested in the various interactions that customers undertake, which in turn are shaped by individual cultures.[8]The context is not limited to geographical context, but also includes the medium with which the user engages.[8]Moreover, customer engagement is the emotional involvement and psychological process by which both new and existing consumers become loyal to specific types of services or products. The degree to which customers pay attention to companies or products, as well as their participation in operations, is referred to as customer engagement.[9]
To effectively navigate customer engagement, businesses establish objectives that align with their organizational goals. Whether the aim is to enhance customer loyalty, drive revenue growth, or deliver personalized experiences, a clear plan underpins any impactful engagement initiative. To optimize outcomes, businesses analyze customer interactions, identify areas for improvement, and iterate their strategies. The landscape of customer engagement is characterized by merging data-driven insights, innovative strategies, and a commitment to delivering outstanding customer experiences. By prioritizing customer engagement, businesses can cultivate long-lasting customer relationships, drive customer loyalty, and thrive in increasingly competitive markets.[citation needed]
Efforts to boost user engagement at any expense can lead to social media addiction for both service providers and users. Facebook and several other social media platforms have faced criticism for manipulating user emotions to enhance engagement, even when the content is knowingly false. Professor Hany Farid summarized Facebook’s approach, stating, “When you’re in the business of maximizing engagement, you’re not interested in truth."[10]Various other techniques used to increase engagement are also considered abusive, for example FOMO (fear of missing out), infinite scrolling, and incentives for users who frequently engage with the service.
Offline customer engagement predates online engagement, but the latter is a qualitatively different social phenomenon, unlike any offline customer engagement that social theorists or marketers recognize. In the past, customer engagement was generated irresolutely through television, radio, media, outdoor advertising, and various other touchpoints, ideally during peak and/or highly trafficked allocations. However, the only conclusive results of campaigns were sales and/or return-on-investment figures. The widespread adoption of the internet during the late 1990s has enhanced the processes of customer engagement, in particular the way in which it can now be measured in different ways at different levels of engagement. It is a recent social phenomenon in which people engage online in communities that do not necessarily revolve around a particular product but serve as meeting or networking places. This online engagement has brought about both the empowerment of consumers and the opportunity for businesses to engage with their target customers online. A 2011 market analysis revealed that 80% of online customers, after reading negative online reviews, report making alternate purchasing decisions, while 87% of consumers said a favorable review has confirmed their decision to go through with a purchase.[11]
The concept and practice of online customer engagement enables organisations to respond to the fundamental changes in customer behaviour that the internet has brought about,[12]as well as to the increasing ineffectiveness of the traditional 'interrupt and repeat', broadcast model of advertising. Due to the fragmentation and specialisation of media and audiences, as well as the proliferation of community- and user-generated content, businesses are increasingly losing the power to dictate the communications agenda. Simultaneously, lower switching costs, the geographical widening of the market and the vast choice of content, services and products available online have weakened customer loyalty. Enhancing customers' firm- and market-related expertise has been shown to engage customers,[13]strengthen their loyalty,[14]and emotionally tie them more closely to a firm.[15]
Since the world has reached a population of over 3 billion internet users, it is clear that society's interactive culture is significantly influenced by technology. Connectivity is bringing consumers and organizations together, which makes it critical for companies to take advantage of this and focus on capturing the attention of, and interacting with, well-informed consumers in order to serve and satisfy them. Connecting with customers establishes exclusivity in their experience, which potentially increases brand loyalty and word of mouth, and provides businesses with valuable consumer analytics, insight, and retention. Customer engagement can come in the form of a view, an impression, a reach, a click, a comment, or a share, among many others. These are ways in which analytics and insights into customer engagement can now be measured on different levels, all of which is information that allows businesses to record and process the results of customer engagement.
Given the widespread information available to consumers and their many connections, the way to develop durable customer engagement is to proactively connect with customers by listening. Listening will empower the consumer, give them control, and endorse a customer-centric two-way dialogue. This dialogue will redefine the role of the consumer, as they no longer assume the end-user role in the process. Instead of the traditional transaction and/or exchange, the concept becomes a process of partnership between organizations and consumers. Particularly since the internet has provided consumers with accumulated, diverse knowledge and understanding, consumers now have increasingly high expectations, have developed stronger sensory perceptions, and hence have become more attracted to experiential values. Therefore, it would only be profitable for businesses to submit to the new criteria and provide the opportunity for consumers to immerse themselves further in the consumption experience. This experience will involve organizations and consumers sharing and exchanging information, which will generate increased awareness, interest, desire to purchase, retention, and loyalty among consumers, evolving into an intimate relationship. Significantly, total openness and strengthened customer service are the selling points here for customers, to make them feel more involved rather than just a number. This will earn trust, engagement, and ultimately word of mouth through endless social circles. Essentially, it is a more dynamic and transparent concept of customer relationship management (CRM).
The utilization of social media platforms has emerged as a modern way of improving customer engagement strategies. By curating content that resonates with the interests of customers, businesses cultivate authentic connections and communities online. Platforms such as Instagram and Twitter serve as useful tools for meaningful dialogue, enabling businesses to build lasting relationships with customers and amplify brand visibility online.
Customer engagement on Twitter is a form of social power and is usually measured with likes, replies and retweets.
A recent study[16]shows that retweets are more likely to contain positive content and address larger audiences using the first-person pronoun "we". Replies, on the other hand, are more likely to contain negative content and address individuals using the second-person pronoun "you" and the third-person pronouns "he" or "she". While users with fewer followers tend to engage in interpersonal conversations to provoke customer engagement, influencers with many followers tend to post positive messages, often using the word "love" when addressing larger audiences.
Customer engagement marketing is necessitated by a combination of social, technological and market developments. Companies attempt to create an engaging dialogue with target consumers and stimulate their engagement with the given brand. Although this must take place both on- and offline, the internet is considered the primary method. Marketing begins with understanding the internal dynamics of these developments and the behaviour and engagement of consumers online. Consumer-generated media plays a significant role in the understanding and modeling of engagement.[17]The control Web 2.0 consumers have gained is quantified through 'old school' marketing performance metrics.[18]
The effectiveness of the traditional 'interrupt and repeat' model of advertising is decreasing, which has caused businesses to lose control of communications agendas.[19][20][21]In August 2006, McKinsey & Co published a report[22]which indicated that traditional TV advertising would decrease in effectiveness compared to previous decades.[19]As customer audiences have become smaller and more specialised, the fragmentation of media and audiences and the accompanying reduction of audience size[19]have reduced the effectiveness of the traditional top-down, mass, 'interrupt and repeat' advertising model. A Forrester Research North American Consumer Technology Adoption Study[22]found that people in the 18-26 age group spend more time online than watching TV.[2][19]Furthermore, the Global Web Index reported that in 2021, YouTube beat all mainstream media platforms in monthly engagement.[citation needed]This is partly due to the fact that 51% of U.S. and U.K. consumers use YouTube for shopping and product research,[citation needed]a service that traditional media cannot really provide.[citation needed]
In response to this fragmentation and the increased amount of time spent online, marketers have also increased spending on online communication. ContextWeb analysts found that marketers who promote on sites like Facebook and the New York Times are not as successful at reaching consumers, while marketers who promote more on niche websites have a better chance of reaching their audiences.[23]Customer audiences are also broadcasters: with the power of circulation and permanence of CGM, businesses lose influence. Rather than trying to position a product using static messages, companies can become the subject of conversation amongst a target market that has already discussed, positioned and rated the product. This also means that consumers can now choose not only when and how but also whether they will engage with marketing communications.[2]In addition, new media provides consumers with more control over advertising consumption.[24]
Research shows the importance of customer engagement in the modern market. The lowering of entry barriers, such as the need for a sales force, access to channels and physical assets, and the geographical widening of the market due to the internet, have brought about increasing competition and a decrease in brand loyalty. In combination with lower switching costs, easier access to information about products and suppliers and increased choice, brand loyalty is hard to achieve. The increasing ineffectiveness of television advertising is due to the shift of consumer attention to the internet and new media, which give consumers control over their advertising consumption and reduce audience sizes.[25]A study conducted by Salesforce shows that a large majority of customers consider their experience with a business to be as important as the quality of its products or services.[citation needed]Therefore, it is important to prioritize customer engagement as a business strategy.
The proliferation of media that provide consumers with more control over their advertising consumption (subscription-based digital radio and TV) and the simultaneous decrease of trust in advertising and increase of trust in peers[19]point to the need for communications that the customer will desire to engage with. Stimulating a consumer's engagement with a brand is the only way to increase brand loyalty and, therefore, "the best measure of current and future performance".[25]
CE behaviour became prominent with the advent of the social phenomenon of online CE. Creating and stimulating customer engagement behaviour has recently become an explicit aim of both profit and non-profit organisations in the belief that engaging target customers to a high degree is conducive to furthering business objectives.
Shevlin's definition of CE is well suited to understanding the process that leads to an engaged customer. In its adaptation by Richard Sedley the key word is 'investment'. "Repeated interactions that strengthen the emotional, psychological or physical investment a customer has in a brand."[This quote needs a citation]
A customer's degree of engagement with a company lies on a continuum that represents the strength of their investment in that company. Positive experiences with the company strengthen that investment and move the customer along the continuum of engagement.
What is important in measuring degrees of engagement is the ability to define and quantify the stages on the continuum. One popular suggestion is a four-level model adapted from Kirkpatrick's Levels:
Concerns have, however, been expressed regarding the measurability of stages three and four. Another popular suggestion is Ghuneim's typology of engagement.[26]
The following consumer typology according to degree of engagement fits also into Ghuneim's continuum: creators (smallest group), critics, collectors, couch potatoes (largest group).[27]
Engagement is a holistic characterization of a consumer's behavior, encompassing a host of sub-aspects of behaviour such as loyalty, satisfaction, involvement, word-of-mouth advertising, complaining and more.
The behavioural outcomes of an engaged consumer are what links CE to profits. From this point of view,
"CE is the best measure of current and future performance; an engaged relationship is probably the only guarantee for a return on your organization's or your clients' objectives."[28]Simply attaining a high level of customer satisfaction does not seem to guarantee the customer's business. 60% to 80% of customers who defect to a competitor said they were satisfied or very satisfied on the survey just prior to their defection.[2]: 32
The main difference between traditional and customer engagement marketing is marked by these shifts:
Specific marketing practices involve:
All marketing practices, including internet marketing, involve measuring the effectiveness of various media along the customer engagement cycle, as consumers travel from awareness to purchase. Often the use of CVP analysis factors into strategy decisions, including budgets and media placement.
The CE metric is useful for:
a) Planning:
b) Measuring Effectiveness: Measure how successful CE-marketing efforts have been at engaging target customers.
The importance of CE as a marketing metric is reflected in ARF's statement:
"The industry is moving toward customer engagement with marketing communications as the 21st century metric of marketing efficiency and effectiveness."[29]
ARF envisages CE exclusively as a metric of engagement with communication, but it is not necessary to distinguish between engaging with the communication and with the product since CE behaviour deals with, and is influenced by, involvement with both.
In order to be operational, CE-metrics must be combined with psychodemographics. It is not enough to know that a website has 500 highly engaged members, for instance; it is imperative to know what percentage are members of the company's target market.[30]As a metric for effectiveness, Scott Karp suggests, CE is the solution to the same intractable problems that have long been a struggle for old media: how to prove value.[31]
The CE-metric is synthetic and integrates a number of variables. The World Federation of Advertisers calls it 'consumer-centric holistic measurement'.[32]The following items have all been proposed as components of a CE-metric:
Root metrics
Action metrics
In selecting the components of a CE-metric, the following issues must be resolved:
|
https://en.wikipedia.org/wiki/Customer_engagement
|
Liquid democracy is a form of proxy voting,[1]whereby an electorate engages in collective decision-making through direct participation and dynamic representation.[2]This democratic system utilizes elements of both direct and representative democracy. Voters in a liquid democracy have the right to vote directly on all policy issues, as in direct democracy; voters also have the option to delegate their votes to someone who will vote on their behalf, as in representative democracy.[2]Any individual may be delegated votes (such delegated votes are termed "proxies"), and these proxies may in turn delegate their vote, as well as any votes they have been delegated by others, resulting in "metadelegation".[3]
This delegation of votes may be absolute (an individual divests their vote to someone else across all issues), policy-specific (an individual divests their vote to someone only when the vote concerns a certain issue), time-sensitive (an individual decides to divest their vote for a period of time), or not utilized by voters.[2]In the case of absolute delegation, the voter situates themselves as a participant in a representative democracy; however, they have the right to revoke their vote delegation at any time.[3]The appeal of the retractability mechanism stems from an increased accountability imposed on representatives.[3]In policy-specific delegation, voters may also select different delegates for different issues.[4]Voters may select representatives they feel are more equipped to adjudicate in unfamiliar fields due to elevated expertise, personal experience, or another indicator of competence.[5]Moreover, automatic recall allows citizens to be as engaged in political affairs as the rest of their lives permit. A voter may delegate their vote completely one week but decide to participate fully another. For those who wish to exercise their right to vote on all political matters, liquid democracy provides the flexibility to retain the option of direct democracy.
Most of the available academic literature on liquid democracy is based on empirical research rather than on specific conceptualization or theory. Experiments have mostly been conducted at the local level or exclusively through online platforms; however, polity examples are listed below.
In 1884, Charles Dodgson (more commonly known by his pseudonym Lewis Carroll), the author of the novel Alice in Wonderland, first envisioned the notion of transitive or "liquid" voting in his pamphlet The Principles of Parliamentary Representation.[6]Dodgson expounded a system predicated on multi-member districts in which each voter casts a single vote or may transfer votes, akin to the modern concept of liquid democracy.[7]Bryan Ford, in his paper "Delegative Democracy", says this could be seen as the first step towards liquid democracy.[8]
The first institutionalized attempts at liquid democracy can be traced back to the work of Oregon reformer William S. U'Ren.[9]In 1912, he lobbied for interactive representation (the Proxy Plan of Representation),[10]under which the elected politicians' influence would be weighted according to the number of votes each had received.[11]
A few decades later, around 1967, Gordon Tullock suggested that voters could choose their representatives or vote themselves in parliament "by wire", while debates were broadcast on television. James C. Miller favored the idea that everybody should be able to vote on any question themselves or to appoint a representative who could transmit their inquiries. Soon after Miller argued in favor of liquid democracy, in 1970 Martin Shubik called the process an "instant referendum". Nonetheless, Shubik was concerned about the speed of decision-making and how it might affect the time available for public debate.[12]
In the 21st century, based on the work of Jabbusch and James Green-Armytage,[13]technological innovation has made liquid democracy more feasible to implement. The first online liquid democracy applications originated in Berlin, Germany following political disillusionment and the emergence of hacker culture.[6]Since liquid democracy gained traction in Germany, variations of liquid democratic forms have developed globally in political and economic spheres (examples listed at the bottom of the article).
The prototypical liquid democracy has been summarized as containing the following principles:
Variations on this general model also exist; the outline is mentioned here only for orientation within a general model. For example, in the "Joy of Revolution",[15]delegates are left open to being specialized at the time of each individual's delegation of authority. Additionally, general principles of fluidity can often be applied, such that individuals, for example through the single transferable vote, can revise their vote at any time by modifying their registered delegation (sometimes called a "proxy") with the governing organization.[16]
Liquid democracy builds on the foundation of proxy voting but differs from that earlier model in scale. Unlike simple proxy voting, in liquid democracy votes may be delegated to a proxy, and the proxy may delegate the votes it holds (its own and those it has received) to a further proxy; this process is termed "metadelegation".[3]Though an individual's vote may be delegated numerous times, the individual retains the right to automatic recall.[2]If someone who delegated their vote disagrees with the choices of their representative or proxy, they may either vote themselves or select another delegate for the next vote.[8]
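The delegation mechanics described above can be sketched in code. The following is a minimal, hypothetical Python sketch (all names and data are invented for illustration, not drawn from any real platform): it resolves transitive delegation chains, lets a direct vote override a delegation (modeling recall), and discards votes trapped in delegation cycles.

```python
def tally(direct_votes, delegations):
    """direct_votes: {voter: choice}; delegations: {voter: proxy}.
    A voter who has voted directly ignores their delegation (recall).
    Votes caught in a delegation cycle with no direct vote are lost."""
    results = {}
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until reaching someone who voted.
        while current not in direct_votes:
            if current in seen or current not in delegations:
                current = None  # cycle or dangling chain: vote discarded
                break
            seen.add(current)
            current = delegations[current]
        if current is not None:
            choice = direct_votes[current]
            results[choice] = results.get(choice, 0) + 1
    return results

votes = {"alice": "yes", "dana": "no"}
proxies = {"bob": "alice", "carol": "bob", "erin": "dana"}
print(tally(votes, proxies))  # alice carries bob's and carol's votes
```

Metadelegation is simply the loop following more than one hop; recall corresponds to a delegator appearing in `direct_votes`, which short-circuits the chain immediately.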
Crucial to the understanding of liquid democracy is the theory's view of the role of delegation in representative democracy. Representative democracy is seen as a form of governance whereby a single winner is determined for a predefined jurisdiction, with a change of delegation only occurring after the preset term length. In some instances, terms can be cut short by a forced recall election, though the recalled candidate can win the subsequent electoral challenge and carry out their term. The paradigm of representative democracy is contrasted with the delegative form implemented in liquid democracy. Delegates may not have specific limits on their term as delegates, nor do they represent specific jurisdictions. Some key differences include:
In contrast to representative democracy, within liquid democracy delegations of votes are transitive and not dependent on election periods. The concept of liquid democracy describes a form of collective decision-making which combines elements of direct democracy and representative democracy through the use of software. This allows voters either to vote on issues directly or to delegate their voting power to a trusted person or party. Moreover, participants are empowered to withdraw their votes at any time.
Voting periods can go through many stages before the final votes are computed. Also, when voters make use of the delegation option, the delegators are able to see what happened to their vote, ensuring the accountability of the system. The fact that delegators can revoke their votes from their representative is another significant aspect of how liquid democracy can refine contemporary concepts of representative democracy.
By allowing votes to be revoked at any time, society can replace representatives who are not delivering the desired results with more promising ones. In this way, voters can effectively choose the most appropriate or competent topic-specific representatives, and members of a community or electorate can shape the well-being of their commons in real time by excluding undesired decision-makers and promoting desired ones. The voting software LiquidFeedback, for instance, through its connotation of liquidity, speaks to this real-time aspect, potentially providing a constantly changing representation of the voting community's current opinion.
Regarding the technological elements of liquid democracy software, it is worth noting that such tools were not originally developed to replace the firmly established decision-making processes of political parties or local governments. Academic research suggests, rather, that liquid democracy software serves to add alternative value to traditional elections, channels of communication and discussion, and public consultation.
Direct democracy is a form of democracy in which all collective decisions are made by the direct votes of individual citizens.[19]Though often perceived to be truly direct (i.e. self-representation only), direct democracies of the past, most notably Athens, have utilized some form of representation.[20]Thus, the distinction from direct democracy lies not in liquid democracy's representative nature, but rather in its transitory method of delegation.[2]Liquid democracy is a sort of voluntary direct democracy in that participants can be included in decisions (and are usually expected to be, by default) but can opt out by abstaining or delegating their vote to someone else when they lack the time, interest, or expertise to vote on the matter.[21]By contrast, in direct democracy all eligible voters are expected to stay knowledgeable on all events and political issues, since voters make every decision on them. Liquid democracy is then said to provide a more modern and flexible alternative to the systems of ancient Greece.[20]The Greek systems' ongoing vulnerability to other, less democratic city-states, culminating in their resounding defeat in the Peloponnesian War, may be explained along such lines.
Liquid democracy may naturally evolve into a type of meritocracy, with decisions delegated to those with knowledge of or personal experience in a specific subject.[5]Nonetheless, for the admittedly few issues where there exists a clear "ground truth" or "correct answer", Caragiannis and Micha concluded that a subset of supposedly better-informed voters within a larger populace would be less adept at identifying the ground truth than if every voter had voted directly or if all votes had been delegated to one supreme dictator.[22]
Bryan Ford explains that some of the current challenges to liquid democracy include the unintended concentration of delegated votes as large numbers of people participate in platforms and decision-making; building more secure and decentralized implementations of online platforms in order to avoid unscrupulous administrators or hackers; and striking the right balance between voter privacy and delegate accountability.[23]
Similar to electoral political systems, the concept of "distinction" is of central importance.[5]Rather than empowering the general public, liquid democracy could concentrate power in the hands of a socially prominent, politically strategic, and wealthy few.[2]Helene Landemore, a political science professor at Yale University, describes this phenomenon as "star-voting".[5]The normative ideal of meritocracy may thus devolve into mere oligarchy. She argues in this vein that individuals should have a right of permanent recall, whereby voters who have delegated their vote to another individual may, at any time, retract their delegation and vote autonomously.[5]However, the ability to recall one's vote on any policy decision leads to an issue of policy inconsistency, as different policies are voted on by different subsets of society.[2]
In large nation states with millions of voting citizens, it is likely the body of "liquid representatives" (those who have been delegated other citizens' votes) will be significant. Consequently, deliberation and representation become pertinent concerns. To achieve meaningful deliberation, the liquid representatives would have to be split into numerous groups to attain a somewhat manageable discussion group size.[5]As for representation, liquid democracy suffers from an issue similar to that facing electoral representative democracies, where a single individual embodies the will of millions.[5]Some may argue that this is universally undesirable; most of the aforementioned moral intuitions may equally serve as a reductio against today's more coarse-grained representative democracies. The diagram on the right illustrates one possible implementation of liquid democracy working at a national scale.
In most developing countries, not every citizen has access to a smartphone, computer, or internet connection. The same is true in some developed countries; in the United States, for example, as of 2021, 85% of American adults owned a smartphone, leaving 15% without access.[24]This disparity in technological access and knowledge would result in even more unbalanced participation than already exists.[25]
Google experimented with liquid democracy through an internal social network system known as Google Votes.[26]This liquid-democratic experiment constitutes one of the less common corporate examples. Users of the existing Google+ platform were the voters, and built-in discussion functions provided the deliberative element.[26]In this instance, Google Votes was used to select meal offerings.[26]Nonetheless, researchers came away with a number of recommendations for future implementations of liquid democracy on online platforms, including delegation recommendations based on prior choices, issue recommendations based on prior participation, and delegation notifications to inform voters about their relative power.[26]
Pirate Parties, which focus on reducing online censorship and increasing transparency, first emerged in Sweden in 2006.[2]Pirate Parties in Germany,[27]Italy, Austria, Norway, France and the Netherlands[28]use liquid democracy with the open-source software LiquidFeedback.[29]
Specifically, in the case of the Pirate Party in Germany, communication with citizens uses tools and platforms similar to those of conventional parties – including Facebook, Twitter, and online sites – but the party also developed the "piratewiki" project, an open platform for collaborative contributions to the political deliberative process.[30]LiquidFeedback has been used by the German Pirate Party since 2006, allowing users to become part of the inner-party decision-making process.[29][31]
In the 2010s, virtual platforms were created in Argentina. Democracia en Red is a group of Latin Americans who seek a redistribution of political power and a more inclusive discussion.[32]They created Democracy OS, a platform which allows internet users to propose, debate and vote on different topics. Pia Mancini argues that the platform opens up democratic conversation and upgrades democratic decision-making to the internet era.
The first example of liquid democracy using a software program in a real political setting involved the local political party Demoex in Vallentuna, a suburb of Stockholm: the teacher Per Norbäck and the entrepreneur Mikael Nordfors used software called NetConference Plus. This software is no longer supported following the bankruptcy of its manufacturer, Vivarto AB. The party held a seat in the local parliament between 2002 and 2014, where members decided how their representative should vote with the help of online votes.[33]Since then, Demoex and two other parties have formed Direktdemokraterna.[34]
An experimental form of liquid democracy called Civicracy was designed at the Vienna University of Technology in 2012.[35]It would have created a council of representatives based on a continuous vote of confidence from participants, similar to modern parliaments, with the relative liquidity of votes dampened by an algorithm intended to ensure stable representation.[35]Despite extensive planning, the real-world experiment was not conducted due to a lack of support.[35]
The district of Friesland in Germany has implemented some usage of a platform called LiquidFriesland, but it has not succeeded in radically changing the mode of governance there. The platform, designed as a form of liquid democracy, has achieved mixed results.
The implementation and the use of the LiquidFriesland platform was clearly dominated by the bureaucratic style of communication and working procedures. The citizen participation on the platform was inscribed in the hierarchical structure, where suggestions on the platform were regarded as inputs for the bureaucratic black box, but by no means as part of the decision-making process inside it. The communication with main stakeholders – the users of the platform – was being structured according to the same logic and was not rebuilt in the course of the project.
No regulation was initially adapted to allow local politicians to conduct the process of legal drafting on the LiquidFriesland platform. As for the delegation aspect of LiquidFriesland, it has never been specified in any regulatory documents. No more than 500 citizens registered on LiquidFriesland and activated their accounts; only 20% of the activated users ever logged in to the platform, and only 10% showed any activity.
Software implementations:
|
https://en.wikipedia.org/wiki/Delegative_Democracy
|
Enterprise bookmarking is a method for Web 2.0 users to tag, organize, store, and search bookmarks of both web pages on the Internet and data resources stored in a distributed database or file server. This is done collectively and collaboratively in a process by which users add tags (metadata) and knowledge tags.[1]
In early versions of the software, these tags are applied as non-hierarchical keywords, or terms assigned by a user to a web page, and are collected in tag clouds.[2]Examples of this software are Connectbeam and Dogear. Newer versions of the software, such as Jumper 2.0 and Knowledge Plaza, expand tag metadata in the form of knowledge tags that provide additional information about the data; these are applied to structured and semi-structured data and are collected in tag profiles.[3]
Enterprise bookmarking is derived from social bookmarking, which got its modern start with the launch of the website del.icio.us in 2003. The first major announcement of an enterprise bookmarking platform was the IBM Dogear project, developed in the summer of 2006.[4]Version 1.0 of the Dogear software was announced at Lotusphere 2007,[5]and shipped later that year on June 27 as part of IBM Lotus Connections. The second significant commercial release was Cogenz in September 2007.[6]
Since these early releases, enterprise bookmarking platforms have diverged considerably. The most significant new release was the Jumper 2.0 platform, with expanded and customizable knowledge-tagging fields.[7]
In a social bookmarking system, individuals create personal collections of bookmarks and share their bookmarks with others. These centrally stored collections of Internet resources can be accessed by other users to find useful resources. Often these lists are publicly accessible, so that other people with similar interests can view the links by category or by the tags themselves. Most social bookmarking sites allow users to search for bookmarks associated with given "tags" and rank the resources by the number of users who have bookmarked them.[8]
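The tag-search-and-rank behaviour described above can be sketched briefly. This is an illustrative Python sketch only, not any real platform's implementation; the class name, method names, and sample data are invented.

```python
from collections import defaultdict

class BookmarkIndex:
    """Toy social-bookmarking index: resources are ranked under a tag
    by the number of distinct users who bookmarked them."""

    def __init__(self):
        # tag -> url -> set of users who applied that tag to that url
        self.index = defaultdict(lambda: defaultdict(set))

    def add(self, user, url, tags):
        for tag in tags:
            self.index[tag][url].add(user)

    def search(self, tag):
        """Return (url, bookmark_count) pairs, most-bookmarked first."""
        hits = self.index.get(tag, {})
        return sorted(((url, len(users)) for url, users in hits.items()),
                      key=lambda pair: -pair[1])

idx = BookmarkIndex()
idx.add("ann", "https://example.org/rdf", ["semantic-web", "rdf"])
idx.add("bob", "https://example.org/rdf", ["semantic-web"])
idx.add("ann", "https://example.org/owl", ["semantic-web"])
print(idx.search("semantic-web"))
```

Counting distinct users per URL (a set, not a raw tally) is what makes popularity ranking meaningful: repeated bookmarks by one user do not inflate a resource's rank.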
Enterprise bookmarking is a method of tagging and linking any information using an expanded set of tags to capture knowledge about data.[9]It collects and indexes these tags in a web-infrastructure knowledge base server residing behind the firewall. Users can share knowledge tags with specified people or groups, shared only inside specific networks, typically within an organization. Enterprise bookmarking is a knowledge management discipline that embraces Enterprise 2.0 methodologies to capture specific knowledge and information that organizations consider proprietary and do not share on the public Internet.
Enterprise bookmarking tools also differ from social bookmarking tools in that they often must accommodate an existing taxonomy. Some of these tools have evolved to provide tag management, which combines uphill abilities (e.g. faceted classification, predefined tags) and downhill gardening abilities (e.g. tag renaming, moving, merging) to better manage the bottom-up folksonomy generated from user tagging.
|
https://en.wikipedia.org/wiki/Enterprise_bookmarking
|
Firefly.com (1995–1999) was a community website featuring collaborative filtering.[citation needed]
The Firefly website was created by Firefly Network, Inc. (originally known as Agents Inc.).[1]The company was founded in March 1995 by a group of engineers from the MIT Media Lab and some business people from Harvard Business School, including Pattie Maes (Media Lab professor), Upendra Shardanand, Nick Grouf, Max Metral, David Waxman and Yezdi Lashkari.[2]At the Media Lab, under the supervision of Maes, some of the engineers built a music recommendation system called HOMR (Helpful Online Music Recommendation service; preceded by RINGO, an email-based system) which used collaborative filtering to help navigate the music domain and find other artists and albums that a user might like.[3]With Matt Bruck and Khinlei Myint-U, the team wrote a business plan, and Agents Inc took second place in the 1995 MIT 10K student business plan competition.[4]Firefly's core technology was based on the work done on HOMR.[5]
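The source does not describe HOMR's exact algorithm, but user-based collaborative filtering of the general kind it pioneered can be sketched as follows. This is an illustrative Python sketch under stated assumptions (cosine similarity over co-rated items, similarity-weighted scoring); the rating data and function names are invented, not Firefly's actual code.

```python
import math

def similarity(a, b):
    """Cosine similarity over the items both users have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(a[i] ** 2 for i in common))
    nb = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (na * nb)

def recommend(user, ratings, top_n=3):
    """Score items the user hasn't rated, weighting other users'
    ratings by how similar their taste is to the target user's."""
    scores, weights = {}, {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], theirs)
        if sim <= 0:
            continue
        for item, score in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * score
                weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(((s / weights[i], i) for i, s in scores.items()),
                    reverse=True)
    return [item for _, item in ranked[:top_n]]

ratings = {
    "u1": {"Beatles": 5, "Stones": 4},
    "u2": {"Beatles": 5, "Stones": 4, "Kinks": 5},
    "u3": {"Beatles": 1, "Kinks": 2},
}
print(recommend("u1", ratings))  # u2's taste matches u1, so Kinks surfaces
```

The key idea, visible in `recommend`, is that an item is never scored on its own merits: it is scored only through the opinions of users whose past ratings resemble the target user's.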
The Firefly website was launched in October 1995.[6]It went through several iterations but remained acommunitythroughout. It was initially created as a community for users to navigate and discover new musical artists and albums. Later it was changed to allow users to discover movies, websites, and communities as well.
Firefly technology was adopted by a number of well-known businesses, including the recommendation engine for barnesandnoble.com, ZDnet, launch.com (later purchased by Yahoo) and MyYahoo.[7]
Since Firefly was amassing large amounts of profile data from its users, privacy became a major concern for the company.[7]It worked with the federal government to help define consumer privacy protection in the digital age. It was also a key contributor to the Open Profiling Standard (OPS), a recommendation to the W3C (along with Netscape and VeriSign) for what eventually became known as P3P (the Platform for Privacy Preferences).
In April 1998, Microsoft purchased Firefly, presumably because of its innovations in privacy and its long-term goal of creating a safe marketplace for consumer-controlled profile data.[8]The Firefly team at Microsoft was largely responsible for the first versions of Microsoft Passport.
Microsoft shut down the website in August 1999.[9]
The Firefly website had distinctive design and graphics. Early designs featured bright colors and a fun and eclectic look. Later redesigns reflected the company's push towards corporate customers and desire to de-emphasize the Firefly community website.
Some screenshots of Firefly are available.[10]
|
https://en.wikipedia.org/wiki/Firefly_(website)
|
A filter bubble or ideological frame is a state of intellectual isolation[1]that can result from personalized searches, recommendation systems, and algorithmic curation. The search results are based on information about the user, such as their location, past click behavior, and search history.[2]Consequently, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles, resulting in a limited and customized view of the world.[3]The choices made by these algorithms are only sometimes transparent.[4]Prime examples include Google Personalized Search results and Facebook's personalized news stream.
However, there are conflicting reports about the extent to which personalized filtering happens and whether such activity is beneficial or harmful, with various studies producing inconclusive results.
The term filter bubble was coined by internet activist Eli Pariser circa 2010. In Pariser's influential book of the same name, The Filter Bubble (2011), it was predicted that individualized personalization by algorithmic filtering would lead to intellectual isolation and social fragmentation.[5]The bubble effect may have negative implications for civic discourse, according to Pariser, but contrasting views regard the effect as minimal[6]and addressable.[7]According to Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their informational bubble.[8]He related an example in which one user searched Google for "BP" and got investment news about BP, while another searcher got information about the Deepwater Horizon oil spill, noting that the two search results pages were "strikingly different" despite use of the same key words.[8][9][10][6]The results of the U.S. presidential election in 2016 have been associated with the influence of social media platforms such as Twitter and Facebook,[11]and have as a result called into question the effects of the "filter bubble" phenomenon on user exposure to fake news and echo chambers,[12]spurring new interest in the term,[13]with many concerned that the phenomenon may harm democracy and well-being by making the effects of misinformation worse.[14][15][13][16][17][18]
Pariser defined his concept of a filter bubble in more formal terms as "that personal ecosystem of information that's been catered by these algorithms."[8]An internet user's past browsing and search history is built up over time when they indicate interest in topics by "clicking links, viewing friends, putting movies in [their] queue, reading news stories," and so forth.[19]An internet firm then uses this information to target advertising to the user, or to make certain types of information appear more prominently in search results pages.[19]
This process is not random, as it operates under a three-step process, per Pariser, who states, "First, you figure out who people are and what they like. Then, you provide them with content and services that best fit them. Finally, you tune in to get the fit just right. Your identity shapes your media."[20]Pariser also reports:
According to one Wall Street Journal study, the top fifty Internet sites, from CNN to Yahoo to MSN, install an average of 64 data-laden cookies and personal tracking beacons. Search for a word like "depression" on Dictionary.com, and the site installs up to 223 tracking cookies and beacons on your computer so that other Web sites can target you with antidepressants. Share an article about cooking on ABC News, and you may be chased around the Web by ads for Teflon-coated pots. Open—even for an instant—a page listing signs that your spouse may be cheating and prepare to be haunted by DNA paternity-test ads.[21]
Analyzing link-click data from site traffic measurements shows that filter bubbles can be collective or individual.[22]
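Pariser's three-step loop, profiling a user, fitting content to the profile, and tightening the fit as new clicks feed back in, can be illustrated with a minimal sketch. All function names, topics, and catalog items here are hypothetical, not any platform's actual code:

```python
from collections import Counter

def personalize(click_history, catalog, k=3):
    """Toy pre-selected personalization: rank catalog items by overlap
    with topics the user has clicked before. Step 1 builds the profile,
    step 2 fits content to it; step 3 is the feedback loop that occurs
    as new clicks are appended to click_history over time."""
    profile = Counter(click_history)  # who the user is / what they like
    def score(item):
        return sum(profile[t] for t in item["topics"])
    return sorted(catalog, key=score, reverse=True)[:k]

catalog = [
    {"title": "Oil spill coverage", "topics": ["environment", "news"]},
    {"title": "BP stock rallies",   "topics": ["investing", "markets"]},
    {"title": "Travel deals",       "topics": ["travel"]},
]
# A user with an investing-heavy click history gets the investing story
# first — mirroring Pariser's "BP" search anecdote.
feed = personalize(["investing", "markets", "investing"], catalog, k=1)
```

The sketch shows why two users issuing the same query can see "strikingly different" results: the ranking depends on the accumulated profile, not just the query.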
As of 2011, one engineer had told Pariser that Google looked at 57 different pieces of data to personally tailor a user's search results, including non-cookie data such as the type of computer being used and the user's physical location.[23]
Pariser's idea of the filter bubble was popularized after his May 2011 TED talk, in which he gave examples of how filter bubbles work and where they can be seen. In a test seeking to demonstrate the filter bubble effect, Pariser asked several friends to search for the word "Egypt" on Google and send him the results. Comparing two of the friends' first pages of results, while there was overlap between them on topics like news and travel, one friend's results prominently included links to information on the then-ongoing Egyptian revolution of 2011, while the other friend's first page of results did not include such links.[24]
In The Filter Bubble, Pariser warns that a potential downside to filtered searching is that it "closes us off to new ideas, subjects, and important information,"[25] and "creates the impression that our narrow self-interest is all that exists."[9] In his view, filter bubbles are potentially harmful to both individuals and society. He criticized Google and Facebook for offering users "too much candy and not enough carrots."[26] He warned that "invisible algorithmic editing of the web" may limit our exposure to new information and narrow our outlook.[26] According to Pariser, the detrimental effects of filter bubbles include harm to the general society in the sense that they have the possibility of "undermining civic discourse" and making people more vulnerable to "propaganda and manipulation."[9] He wrote:
A world constructed from the familiar is a world in which there's nothing to learn ... (since there is) invisible autopropaganda, indoctrinating us with our own ideas.
Many people are unaware that filter bubbles even exist. This can be seen in an article in The Guardian, which mentioned the fact that "more than 60% of Facebook users are entirely unaware of any curation on Facebook at all, believing instead that every single story from their friends and followed pages appeared in their news feed."[28] In brief, Facebook decides what goes on a user's news feed through an algorithm that takes into account "how you have interacted with similar posts in the past."[28]
A filter bubble has been described as exacerbating a phenomenon called splinternet or cyberbalkanization,[Note 1] which happens when the internet becomes divided into sub-groups of like-minded people who become insulated within their own online community and fail to get exposure to different views. This concern dates back to the early days of the publicly accessible internet, with the term "cyberbalkanization" being coined in 1996.[29][30][31] Other terms have been used to describe this phenomenon, including "ideological frames"[9] and "the figurative sphere surrounding you as you search the internet."[19]
The concept of a filter bubble has been extended into other areas to describe societies that self-segregate not only according to political views but also by economic, social, and cultural situation.[32] That bubbling results in a loss of the broader community and creates the sense that, for example, children do not belong at social events unless those events were especially planned to be appealing to children and unappealing to adults without children.[32]
Barack Obama's farewell address identified a similar concept to filter bubbles as a "threat to [Americans'] democracy," i.e., the "retreat into our own bubbles, ...especially our social media feeds, surrounded by people who look like us and share the same political outlook and never challenge our assumptions... And increasingly, we become so secure in our bubbles that we start accepting only information, whether it's true or not, that fits our opinions, instead of basing our opinions on the evidence that is out there."[33]
Both "echo chambers" and "filter bubbles" describe situations where individuals are exposed to a narrow range of opinions and perspectives that reinforce their existing beliefs and biases, but there are some subtle differences between the two, especially in practices surrounding social media.[34][35]
Specific to news media, an echo chamber is a metaphorical description of a situation in which beliefs are amplified or reinforced by communication and repetition inside a closed system.[36][37] Based on the sociological concept of selective exposure theory, the term is a metaphor based on the acoustic echo chamber, where sounds reverberate in a hollow enclosure. With regard to social media, this sort of situation feeds into explicit mechanisms of self-selected personalization, which describes all processes in which users of a given platform can actively opt in and out of information consumption, such as a user's ability to follow other users or select into groups.[38]
In an echo chamber, people are able to seek out information that reinforces their existing views, potentially as an unconscious exercise of confirmation bias. This sort of feedback regulation may increase political and social polarization and extremism. This can lead to users aggregating into homophilic clusters within social networks, which contributes to group polarization.[39] "Echo chambers" reinforce an individual's beliefs without factual support. Individuals are surrounded by those who acknowledge and follow the same viewpoints, but they also possess the agency to break outside of the echo chambers.[40]
Filter bubbles, on the other hand, are implicit mechanisms of pre-selected personalization, where a user's media consumption is created by personalized algorithms; the content a user sees is filtered through an AI-driven algorithm that reinforces their existing beliefs and preferences, potentially excluding contrary or diverse perspectives. In this case, users have a more passive role and are perceived as victims of a technology that automatically limits their exposure to information that would challenge their world view.[38] Some researchers argue, however, that because users still play an active role in selectively curating their own newsfeeds and information sources through their interactions with search engines and social media networks, they directly assist in the filtering process performed by AI-driven algorithms, thus effectively engaging in self-segregating filter bubbles.[41]
Despite their differences, the usage of these terms goes hand-in-hand in both academic and platform studies. It is often hard to distinguish between the two concepts in social network studies due to limitations in the accessibility of the filtering algorithms, which, if accessible, could enable researchers to compare and contrast the agency of the two concepts.[42] This type of research will continue to grow more difficult to conduct, as many social media networks have also begun to limit the API access needed for academic research.[43]
There are conflicting reports about the extent to which personalized filtering happens and whether such activity is beneficial or harmful. Analyst Jacob Weisberg, writing in June 2011 for Slate, did a small non-scientific experiment to test Pariser's theory which involved five associates with different ideological backgrounds conducting a series of searches, "John Boehner," "Barney Frank," "Ryan plan," and "Obamacare," and sending Weisberg screenshots of their results. The results varied only in minor respects from person to person, and any differences did not appear to be ideology-related, leading Weisberg to conclude that a filter bubble was not in effect, and to write that the idea that most internet users were "feeding at the trough of a Daily Me" was overblown.[9] Weisberg asked Google to comment, and a spokesperson stated that algorithms were in place to deliberately "limit personalization and promote variety."[9] Book reviewer Paul Boutin did an experiment similar to Weisberg's among people with differing search histories and again found that the different searchers received nearly identical search results.[6] Interviewing programmers at Google off the record, journalist Per Grankvist found that user data used to play a bigger role in determining search results, but that Google, through testing, found that the search query is by far the best determinant of what results to display.[44]
There are reports that Google and other sites maintain vast "dossiers" of information on their users, which might enable them to personalize individual internet experiences further if they chose to do so. For instance, the technology exists for Google to keep track of users' histories even if they don't have a personal Google account or are not logged into one.[6] One report stated that Google had collected "10 years' worth" of information amassed from varying sources, such as Gmail, Google Maps, and other services besides its search engine,[10][failed verification] although a contrary report was that trying to personalize the internet for each user was technically challenging for an internet firm to achieve despite the huge amounts of available data.[citation needed] Analyst Doug Gross of CNN suggested that filtered searching seemed to be more helpful for consumers than for citizens, and would help a consumer looking for "pizza" find local delivery options based on a personalized search and appropriately filter out distant pizza stores.[10][failed verification] Organizations such as The Washington Post, The New York Times, and others have experimented with creating new personalized information services, with the aim of tailoring search results to those that users are likely to like or agree with.[9]
A scientific study from Wharton that analyzed personalized recommendations also found that these filters can create commonality, not fragmentation, in online music taste.[45] Consumers reportedly use the filters to expand their taste rather than to limit it.[45] Harvard law professor Jonathan Zittrain disputed the extent to which personalization filters distort Google search results, saying that "the effects of search personalization have been light."[9] Further, Google provides the ability for users to shut off personalization features if they choose[46] by deleting Google's record of their search history and setting Google not to remember their search keywords and visited links in the future.[6]
A study from Internet Policy Review addressed the lack of a clear and testable definition for filter bubbles across disciplines; this often results in researchers defining and studying filter bubbles in different ways.[47] Subsequently, the study explained a lack of empirical data for the existence of filter bubbles across disciplines[12] and suggested that the effects attributed to them may stem more from preexisting ideological biases than from algorithms. Similar views can be found in other academic projects, which also address concerns with the definitions of filter bubbles and the relationships between ideological and technological factors associated with them.[48] A critical review of filter bubbles suggested that "the filter bubble thesis often posits a special kind of political human who has opinions that are strong, but at the same time highly malleable" and that it is a "paradox that people have an active agency when they select content but are passive receivers once they are exposed to the algorithmically curated content recommended to them."[49]
A study by Oxford, Stanford, and Microsoft researchers examined the browsing histories of 1.2 million U.S. users of the Bing Toolbar add-on for Internet Explorer between March and May 2013. They selected 50,000 of those users who were active news consumers, then classified whether the news outlets they visited were left- or right-leaning, based on whether the majority of voters in the counties associated with user IP addresses voted for Obama or Romney in the 2012 presidential election. They then identified whether news stories were read after accessing the publisher's site directly, via the Google News aggregation service, via web searches, or via social media. The researchers found that while web searches and social media do contribute to ideological segregation, the vast majority of online news consumption consisted of users directly visiting left- or right-leaning mainstream news sites and consequently being exposed almost exclusively to views from a single side of the political spectrum. Limitations of the study included selection issues such as Internet Explorer users skewing higher in age than the general internet population; Bing Toolbar usage and the voluntary (or unknowing) sharing of browsing history selecting for users who are less concerned about privacy; the assumption that all stories in left-leaning publications are left-leaning, and the same for right-leaning; and the possibility that users who are not active news consumers may get most of their news via social media, and thus experience stronger effects of social or algorithmic bias than those users who essentially self-select their bias through their choice of news publications (assuming they are aware of the publications' biases).[50]
A study by Princeton University and New York University researchers aimed to study the impact of filter bubbles and algorithmic filtering on social media polarization. They used a mathematical model called the "stochastic block model" to test their hypothesis on the environments of Reddit and Twitter. The researchers gauged changes in polarization in regularized and non-regularized social media networks, specifically measuring the percent changes in polarization and disagreement on Reddit and Twitter. They found that polarization increased significantly, by 400%, in non-regularized networks, while in regularized networks polarization increased by 4% and disagreement by 5%.[51]
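The stochastic block model used in studies like this one assigns each node to a block (e.g. an opinion group) and connects within-block pairs with a higher probability than cross-block pairs. The sketch below is a generic illustration of that model, not the researchers' actual code, and the parameter values are purely illustrative:

```python
import random

def stochastic_block_model(sizes, p_in, p_out, seed=0):
    """Generate an undirected graph: nodes in the same block connect
    with probability p_in, nodes in different blocks with probability
    p_out. With p_in >> p_out the graph splits into dense clusters —
    a simple structural proxy for echo-chamber-like communities."""
    rng = random.Random(seed)
    block = [b for b, n in enumerate(sizes) for _ in range(n)]
    edges = []
    for i in range(len(block)):
        for j in range(i + 1, len(block)):
            p = p_in if block[i] == block[j] else p_out
            if rng.random() < p:
                edges.append((i, j))
    return block, edges

# Two groups of 50 nodes; ties inside a group are 20x more likely.
block, edges = stochastic_block_model([50, 50], p_in=0.2, p_out=0.01)
within = sum(block[i] == block[j] for i, j in edges)
# Most edges stay inside a block, so information mostly circulates
# within each group rather than between them.
```

Varying `p_out` relative to `p_in` is one way such models operationalize how "regularized" (cross-connected) a network is when measuring changes in polarization.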
While algorithms do limit political diversity, some of the filter bubble is the result of user choice.[52] A study by data scientists at Facebook found that users have one friend with contrasting views for every four Facebook friends that share an ideology.[53][54] No matter what Facebook's algorithm for its News Feed is, people are more likely to befriend/follow people who share similar beliefs.[53] The nature of the algorithm is that it ranks stories based on a user's history, resulting in a reduction of "politically cross-cutting content by 5 percent for conservatives and 8 percent for liberals."[53] However, even when people are given the option to click on a link offering contrasting views, they still default to their most viewed sources.[53] "[U]ser choice decreases the likelihood of clicking on a cross-cutting link by 17 percent for conservatives and 6 percent for liberals."[53] A cross-cutting link is one that introduces a different point of view than the user's presumed point of view, or what the website has pegged as the user's beliefs.[55] A study by Levi Boxell, Matthew Gentzkow, and Jesse M. Shapiro suggests that online media is not the driving force of political polarization.[56] The paper argues that polarization has been driven by the demographic groups that spend the least time online. The greatest ideological divide is experienced among Americans older than 75, of whom only 20% reported using social media as of 2012. In contrast, 80% of Americans aged 18–39 reported using social media as of 2012. The data suggest that the younger demographic was no more polarized in 2012 than it had been when online media barely existed in 1996. The study highlights differences between age groups and how news consumption remains polarized as people seek information that appeals to their preconceptions.
Older Americans usually remain stagnant in their political views as traditional media outlets continue to be their primary source of news, while online media is the leading source for the younger demographic. Although algorithms and filter bubbles weaken content diversity, this study reveals that political polarization trends are driven primarily by pre-existing views and failure to recognize outside sources. A 2020 study from Germany used the Big Five psychology model to test the effects of individual personality, demographics, and ideologies on user news consumption.[57] Basing their study on the notion that the number of news sources users consume impacts their likelihood of being caught in a filter bubble, with higher media diversity lessening the chances, their results suggest that certain demographics (higher age and male) along with certain personality traits (high openness) correlate positively with the number of news sources consumed by individuals. The study also found a negative ideological association between media diversity and the degree to which users align with right-wing authoritarianism. Beyond identifying individual user factors that may influence the role of user choice, this study also raises questions about the relationship between the likelihood of users being caught in filter bubbles and user voting behavior.[57]
The Facebook study found that it was "inconclusive" whether or not the algorithm played as big a role in filtering News Feeds as people assumed.[58] The study also found that "individual choice," or confirmation bias, likewise affected what gets filtered out of News Feeds.[58] Some social scientists criticized this conclusion, because the point of protesting the filter bubble is that the algorithms and individual choice work together to filter out News Feeds.[59] They also criticized Facebook's small sample size, about "9% of actual Facebook users," and the fact that the study results are "not reproducible" because the study was conducted by "Facebook scientists" who had access to data that Facebook does not make available to outside researchers.[60]
Though the study found that only about 15–20% of the average user's Facebook friends subscribe to the opposite side of the political spectrum, Julia Kaman from Vox theorized that this could have potentially positive implications for viewpoint diversity. These "friends" are often acquaintances with whom we would not likely share our politics without the internet. Facebook may foster a unique environment where a user sees and possibly interacts with content posted or re-posted by these "second-tier" friends. The study found that "24 percent of the news items liberals saw were conservative-leaning and 38 percent of the news conservatives saw was liberal-leaning."[61] "Liberals tend to be connected to fewer friends who share information from the other side, compared with their conservative counterparts."[62] This interplay has the ability to provide diverse information and sources that could moderate users' views.
Similarly, a study of Twitter's filter bubbles by New York University concluded that "Individuals now have access to a wider span of viewpoints about news events, and most of this information is not coming through the traditional channels, but either directly from political actors or through their friends and relatives. Furthermore, the interactive nature of social media creates opportunities for individuals to discuss political events with their peers, including those with whom they have weak social ties."[63] According to these studies, social media may be diversifying the information and opinions users come into contact with, though there is much speculation around filter bubbles and their ability to create deeper political polarization.
One driver of, and possible solution to, the problem is the role of emotions in online content. A 2018 study showed that the emotional content of messages can lead to polarization or convergence: joy is prevalent in emotional polarization, while sadness and fear play significant roles in emotional convergence.[64] Since it is relatively easy to detect the emotional content of messages, these findings can help in designing more socially responsible algorithms that take the emotional content of algorithmic recommendations into account.
Social bots have been utilized by different researchers to test polarization and related effects that are attributed to filter bubbles and echo chambers.[65][66] A 2018 study used social bots on Twitter to test deliberate user exposure to partisan viewpoints.[65] The study claimed it demonstrated partisan differences between exposure to differing views, although it warned that the findings should be limited to party-registered American Twitter users. One of the main findings was that after exposure to differing views (provided by the bots), self-registered Republicans became more conservative, whereas self-registered liberals showed less ideological change, if any at all. A different study, from the People's Republic of China, utilized social bots on Weibo, the largest social media platform in China, to examine the structure of filter bubbles with regard to their effects on polarization.[66] The study draws a distinction between two conceptions of polarization: one where people with similar views form groups, share similar opinions, and block themselves from differing viewpoints (opinion polarization), and one where people do not access diverse content and sources of information (information polarization). By utilizing social bots instead of human volunteers and focusing more on information polarization than on opinion polarization, the researchers concluded that there are two essential elements of a filter bubble: a large concentration of users around a single topic and a uni-directional, star-like structure that impacts key information flows.
In June 2018, the platform DuckDuckGo conducted a research study of personalization in Google search results. For this study, 87 adults in various locations around the continental United States googled three keywords at the exact same time: immigration, gun control, and vaccinations. Even in private browsing mode, most people saw results unique to them. Google included certain links for some participants that it did not include for others, and the News and Videos infoboxes showed significant variation. Google publicly disputed these results, saying that Search Engine Results Page (SERP) personalization is mostly a myth. Google Search Liaison Danny Sullivan stated that "Over the years, a myth has developed that Google Search personalizes so much that for the same query, different people might get significantly different results from each other. This isn't the case. Results can differ, but usually for non-personalized reasons."[67]
When filter bubbles are in place, they can create specific moments that scientists call "whoa" moments. A "whoa" moment is when an article, ad, post, etc. appears on your computer that relates to a current action or the current use of an object. Scientists discovered this term after a young woman performing her daily routine, which included drinking coffee, opened her computer and noticed an advertisement for the same brand of coffee that she was drinking: "Sat down and opened up Facebook this morning while having my coffee, and there they were two ads for Nespresso. Kind of a 'whoa' moment when the product you're drinking pops up on the screen in front of you."[68] "Whoa" moments occur when people are "found," meaning that advertisement algorithms target specific users based on their "click behavior" to increase their sales revenue.
Several designers have developed tools to counteract the effects of filter bubbles (see § Countermeasures).[69] Swiss radio station SRF voted the word Filterblase (the German translation of filter bubble) word of the year 2016.[70]
In The Filter Bubble: What the Internet Is Hiding from You,[71] internet activist Eli Pariser highlights how the increasing occurrence of filter bubbles further emphasizes the value of one's bridging social capital as defined by Robert Putnam. Pariser argues that filter bubbles reinforce a sense of social homogeneity, which weakens ties between people with potentially diverging interests and viewpoints.[72] In that sense, high bridging capital may promote social inclusion by increasing our exposure to a space that goes beyond self-interests. Fostering one's bridging capital, such as by connecting with more people in an informal setting, may be an effective way to reduce the filter bubble phenomenon.
Users can take many actions to burst through their filter bubbles, for example by making a conscious effort to evaluate what information they are exposing themselves to, and by thinking critically about whether they are engaging with a broad range of content.[73] Users can consciously avoid news sources that are unverifiable or weak. Chris Glushko, the VP of Marketing at IAB, advocates using fact-checking sites to identify fake news.[74] Technology can also play a valuable role in combating filter bubbles.[75]
Some browser plug-ins aim to help people step out of their filter bubbles and make them aware of their personal perspectives; these tools show content that contradicts the user's beliefs and opinions. In addition to plug-ins, there are apps created with the mission of encouraging users to open up their echo chambers. News apps such as Read Across the Aisle nudge users to read different perspectives if their reading pattern is biased towards one side/ideology.[76] Although apps and plug-ins are tools humans can use, Eli Pariser stated "certainly, there is some individual responsibility here to really seek out new sources and people who aren't like you."[52]
Since web-based advertising can further the effect of the filter bubbles by exposing users to more of the same content, users can block much advertising by deleting their search history, turning off targeted ads, and downloading browser extensions. Some use anonymous or non-personalized search engines such as YaCy, DuckDuckGo, Qwant, Startpage.com, Disconnect, and Searx in order to prevent companies from gathering their web-search data. Swiss daily Neue Zürcher Zeitung is beta-testing a personalized news engine app which uses machine learning to guess what content a user is interested in, while "always including an element of surprise"; the idea is to mix in stories which a user is unlikely to have followed in the past.[77]
The European Union is taking measures to lessen the effect of the filter bubble. The European Parliament is sponsoring inquiries into how filter bubbles affect people's ability to access diverse news.[78] Additionally, it introduced a program aimed at educating citizens about social media.[79] In the U.S., the CSCW panel suggests the use of news aggregator apps to broaden media consumers' news intake. News aggregator apps scan all current news articles and direct readers to different viewpoints regarding a certain topic. Users can also use a diversity-aware news balancer which visually shows the media consumer whether they are leaning left or right when reading the news, indicating a right lean with a bigger red bar or a left lean with a bigger blue bar. A study evaluating this news balancer found "a small but noticeable change in reading behavior, toward more balanced exposure, among users seeing the feedback, as compared to a control group".[80]
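The core of such a news balancer can be reduced to a simple lean score computed over a labeled reading history. This is a generic illustration under assumed conventions (binary "left"/"right" labels, score in [-1, 1]), not the evaluated tool's actual implementation:

```python
def lean_score(read_items):
    """Return a value in [-1, 1]: -1 means all reads were left-leaning,
    +1 means all were right-leaning, 0 means balanced (or no data).
    A balancer UI would render this as proportionally sized blue
    (left) and red (right) bars to feed exposure back to the reader."""
    left = sum(1 for label in read_items if label == "left")
    right = sum(1 for label in read_items if label == "right")
    total = left + right
    return 0.0 if total == 0 else (right - left) / total

# Three left-leaning reads and one right-leaning read:
score = lean_score(["left", "left", "right", "left"])  # (1 - 3) / 4 = -0.5
```

Showing users such a score is the feedback mechanism the cited study evaluated: awareness of one's own lean, rather than forced content changes, nudged reading toward more balanced exposure.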
In light of recent concerns about information filtering on social media, Facebook acknowledged the presence of filter bubbles and has taken strides toward removing them.[81] In January 2017, Facebook removed personalization from its Trending Topics list in response to problems with some users not seeing highly talked-about events there.[82] Facebook's strategy is to reverse the Related Articles feature that it had implemented in 2013, which would post related news stories after the user read a shared article. Now, the revamped strategy would flip this process and post articles from different perspectives on the same topic. Facebook is also attempting to go through a vetting process whereby only articles from reputable sources will be shown. Along with the founder of Craigslist and a few others, Facebook has invested $14 million into efforts "to increase trust in journalism around the world, and to better inform the public conversation".[81] The idea is that even if people are only reading posts shared from their friends, at least these posts will be credible.
Similarly, Google, as of January 30, 2018, has also acknowledged the existence of filter bubble difficulties within its platform. Because current Google searches pull algorithmically ranked results based upon "authoritativeness" and "relevancy," which show and hide certain search results, Google is seeking to combat this. By training its search engine to recognize the intent of a search query rather than the literal syntax of the question, Google is attempting to limit the size of filter bubbles. The initial phase of this training was to be introduced in the second quarter of 2018. Questions that involve bias and/or controversial opinions will not be addressed until a later time, prompting a larger problem that still exists: whether the search engine acts as an arbiter of truth or as a knowledgeable guide by which to make decisions.[83]
In April 2017, news surfaced that Facebook, Mozilla, and Craigslist had contributed the majority of a $14M donation to CUNY's "News Integrity Initiative," aimed at eliminating fake news and creating more honest news media.[84]
Later, in August, Mozilla, maker of the Firefox web browser, announced the formation of the Mozilla Information Trust Initiative (MITI). MITI would serve as a collective effort to develop products, research, and community-based solutions to combat the effects of filter bubbles and the proliferation of fake news. Mozilla's Open Innovation team leads the initiative, striving to combat misinformation with a specific focus on products relating to literacy, research, and creative interventions.[85]
As the popularity of cloud services increases, personalized algorithms used to construct filter bubbles are expected to become more widespread.[86] Scholars have begun considering the effect of filter bubbles on the users of social media from an ethical standpoint, particularly concerning the areas of personal freedom, security, and information bias.[87] Filter bubbles in popular social media and personalized search sites can determine the particular content seen by users, often without their direct consent or cognizance,[86] due to the algorithms used to curate that content. Self-created content manifested from behavior patterns can lead to partial information blindness.[88] Critics of the use of filter bubbles speculate that individuals may lose autonomy over their own social media experience and have their identities socially constructed as a result of the pervasiveness of filter bubbles.[86]
Technologists, social media engineers, and computer specialists have also examined the prevalence of filter bubbles.[89] Mark Zuckerberg, founder of Facebook, and Eli Pariser, author of The Filter Bubble, have expressed concerns regarding the risks of privacy and information polarization.[90][91] The information of the users of personalized search engines and social media platforms is not private, though some people believe it should be.[90] The concern over privacy has resulted in a debate as to whether or not it is moral for information technologists to take users' online activity and manipulate future exposure to related information.[91]
Some scholars have expressed concerns regarding the effects of filter bubbles on individual and social well-being, i.e. the dissemination of health information to the general public and the potential effects of internet search engines on health-related behavior.[16][17][18][92] A 2019 multi-disciplinary book reported research and perspectives on the roles filter bubbles play in regard to health misinformation.[18] Drawing from various fields such as journalism, law, medicine, and health psychology, the book addresses different controversial health beliefs (e.g. alternative medicine and pseudoscience) as well as potential remedies to the negative effects of filter bubbles and echo chambers on different topics in health discourse. A 2016 study on the potential effects of filter bubbles on search-engine results related to suicide found that algorithms play an important role in whether or not helplines and similar search results are displayed to users, and discussed the implications its findings may have for health policies.[17] Another 2016 study, from the Croatian Medical Journal, proposed strategies for mitigating the potentially harmful effects of filter bubbles on health information, such as informing the public more about filter bubbles and their associated effects, users choosing to try alternative [to Google] search engines, and more explanation of the processes search engines use to determine their displayed results.[16]
Since the content seen by individual social media users is influenced by algorithms that produce filter bubbles, users of social media platforms are more susceptible to confirmation bias,[93] and may be exposed to biased, misleading information.[94] Social sorting and other unintentional discriminatory practices are also anticipated as a result of personalized filtering.[95]
In light of the 2016 U.S. presidential election, scholars have likewise expressed concerns about the effect of filter bubbles on democracy and democratic processes, as well as the rise of "ideological media".[11] These scholars fear that users will be unable to "[think] beyond [their] narrow self-interest" as filter bubbles create personalized social feeds, isolating them from diverse points of view and their surrounding communities.[96] For this reason, an increasingly discussed possibility is to design social media with more serendipity, that is, to proactively recommend content that lies outside one's filter bubble, including challenging political information, and, eventually, to provide empowering filters and tools to users.[97][98][99] A related concern is how filter bubbles contribute to the proliferation of "fake news" and how this may influence political leaning, including how users vote.[11][100][101]
Revelations in March 2018 of Cambridge Analytica's harvesting and use of user data from at least 87 million Facebook profiles during the 2016 presidential election highlight the ethical implications of filter bubbles.[102] Christopher Wylie, co-founder and whistleblower of Cambridge Analytica, detailed how the firm had the ability to develop "psychographic" profiles of those users and use the information to shape their voting behavior.[103] Access to user data by third parties such as Cambridge Analytica can exacerbate and amplify the filter bubbles users have created, artificially increasing existing biases and further dividing societies.
Filter bubbles have stemmed from a surge in media personalization, which can trap users. The use of AI to personalize offerings can lead to users viewing only content that reinforces their own viewpoints without challenging them. Social media websites like Facebook may also present content in a way that makes it difficult for users to determine its source, leaving them to decide for themselves whether the source is reliable or fake.[104] That can lead to people becoming used to hearing what they want to hear, which can cause them to react more radically when they see an opposing viewpoint. The filter bubble may cause the person to see any opposing viewpoints as incorrect, and so could allow the media to force views onto consumers.[105][104][106]
Researchers explain that the filter bubble reinforces what one is already thinking,[107] which is why it is important to draw on resources that offer various points of view.[107]
|
https://en.wikipedia.org/wiki/Filter_bubble
|
PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google:
PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.[1]
Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known.[2][3]As of September 24, 2019, all patents associated with PageRank have expired.[4]
PageRank is a link analysis algorithm and it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by {\displaystyle PR(E)}.
A PageRank results from a mathematical algorithm based on the Webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or mayoclinic.org. The rank value indicates the importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself.
Numerous academic papers concerning PageRank have been published since Page and Brin's original paper.[5]In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank.[6]
Other link-based ranking algorithms for Web pages include the HITS algorithm invented by Jon Kleinberg (used by Teoma and now Ask.com), the IBM CLEVER project, the TrustRank algorithm, the Hummingbird algorithm,[7] and the SALSA algorithm.[8]
The eigenvalue problem behind PageRank's algorithm was independently rediscovered and reused in many scoring problems. In 1895, Edmund Landau suggested using it for determining the winner of a chess tournament.[9][10] The eigenvalue problem was also suggested in 1976 by Gabriel Pinski and Francis Narin, who worked on scientometrics ranking scientific journals,[11] in 1977 by Thomas Saaty in his concept of the Analytic Hierarchy Process, which weighted alternative choices,[12] and in 1995 by Bradley Love and Steven Sloman as a cognitive model for concepts, the centrality algorithm.[13][14]
A search engine called "RankDex" from IDD Information Services, designed by Robin Li in 1996, developed a strategy for site-scoring and page-ranking.[15] Li referred to his search mechanism as "link analysis," which involved ranking the popularity of a web site based on how many other sites had linked to it.[16] RankDex, the first search engine with page-ranking and site-scoring algorithms, was launched in 1996.[17] Li filed a patent for the technology in RankDex in 1997; it was granted in 1999.[18] He later used it when he founded Baidu in China in 2000.[19][20] Google founder Larry Page referenced Li's work as a citation in some of his U.S. patents for PageRank.[21][17][22]
Larry Page and Sergey Brin developed PageRank at Stanford University in 1996 as part of a research project about a new kind of search engine. An interview with Héctor García-Molina, Stanford Computer Science professor and advisor to Sergey,[23] provides background into the development of the page-rank algorithm.[24] Sergey Brin had the idea that information on the web could be ordered in a hierarchy by "link popularity": a page ranks higher as there are more links to it.[25] The system was developed with the help of Scott Hassan and Alan Steremberg, both of whom were cited by Page and Brin as being critical to the development of Google.[5] Rajeev Motwani and Terry Winograd co-authored with Page and Brin the first paper about the project, describing PageRank and the initial prototype of the Google search engine, published in 1998.[5] Shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors that determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web-search tools.[26]
The name "PageRank" plays on the name of developer Larry Page, as well as on the concept of a web page.[27][28] The word is a trademark of Google, and the PageRank process has been patented (U.S. patent 6,285,999). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; it sold the shares in 2005 for $336 million.[29][30]
PageRank was influenced by citation analysis, developed by Eugene Garfield in the 1950s at the University of Pennsylvania, and by Hyper Search, developed by Massimo Marchiori at the University of Padua. In the same year PageRank was introduced (1998), Jon Kleinberg published his work on HITS. Google's founders cite Garfield, Marchiori, and Kleinberg in their original papers.[5][31]
The PageRank algorithm outputs a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.
A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a document with a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to said document.
Assume a small universe of four web pages: A, B, C, and D. Links from a page to itself are ignored. Multiple outbound links from one page to another page are treated as a single link. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial value of 1. However, later versions of PageRank, and the remainder of this section, assume a probability distribution between 0 and 1. Hence the initial value for each page in this example is 0.25.
The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links.
If the only links in the system were from pages B, C, and D to A, each link would transfer 0.25 PageRank to A upon the next iteration, for a total of 0.75.
Suppose instead that page B had a link to pages C and A, page C had a link to page A, and page D had links to all three pages. Thus, upon the first iteration, page B would transfer half of its existing value (0.125) to page A and the other half (0.125) to page C. Page C would transfer all of its existing value (0.25) to the only page it links to, A. Since D had three outbound links, it would transfer one third of its existing value, or approximately 0.083, to A. At the completion of this iteration, page A will have a PageRank of approximately 0.458.
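This hand iteration can be checked with a short script (a sketch; the link structure and 0.25 initialization follow the example above):

```python
# Links in the example: B -> {A, C}, C -> {A}, D -> {A, B, C}; A has no outbound links.
links = {"A": [], "B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}
rank = {page: 0.25 for page in links}  # initial probability distribution

# One iteration: each page splits its PageRank equally among its outbound links.
new_rank = {page: 0.0 for page in links}
for page, outgoing in links.items():
    for target in outgoing:
        new_rank[target] += rank[page] / len(outgoing)

print(round(new_rank["A"], 3))  # 0.125 + 0.25 + 0.083... = 0.458
```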
In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the number of its outbound links L.
In the general case, the PageRank value for any page u can be expressed as:
{\displaystyle PR(u)=\sum _{v\in B_{u}}{\frac {PR(v)}{L(v)}},}
i.e. the PageRank value for a page u is dependent on the PageRank values for each page v contained in the set Bu (the set containing all pages linking to page u), divided by the number L(v) of links from page v.
The PageRank theory holds that an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue following links is a damping factor d. The probability that they instead jump to any random page is 1 − d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[5]
The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents (N) in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores. That is,
{\displaystyle PR(A)={\frac {1-d}{N}}+d\left({\frac {PR(B)}{L(B)}}+{\frac {PR(C)}{L(C)}}+{\frac {PR(D)}{L(D)}}+\cdots \right).}
So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The original paper, however, gave the following formula, which has led to some confusion:
{\displaystyle PR(A)=1-d+d\left({\frac {PR(B)}{L(B)}}+{\frac {PR(C)}{L(C)}}+{\frac {PR(D)}{L(D)}}+\cdots \right).}
The difference between them is that the PageRank values in the first formula sum to one, while in the second formula each PageRank is multiplied by N and the sum becomes N. A statement in Page and Brin's paper that "the sum of all PageRanks is one"[5] and claims by other Google employees[32] support the first variant of the formula above.
Page and Brin confused the two formulas in their most popular paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine", where they mistakenly claimed that the latter formula formed a probability distribution over web pages.[5]
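The relationship between the two variants is easy to verify numerically: with matching initializations, the original-paper (1 − d) variant yields exactly N times the probability-normalized variant at every iteration. A sketch (the three-page graph is made up for illustration):

```python
def pagerank(links, d=0.85, normalized=True, iters=100):
    """Iterative PageRank (sketch; assumes every page has outbound links).

    normalized=True  -> (1-d)/N variant: values sum to 1.
    normalized=False -> original-paper (1-d) variant: values sum to N.
    """
    n = len(links)
    base = (1 - d) / n if normalized else (1 - d)
    rank = {p: (1.0 / n if normalized else 1.0) for p in links}
    for _ in range(iters):
        rank = {p: base + d * sum(rank[q] / len(links[q])
                                  for q in links if p in links[q])
                for p in links}
    return rank

links = {"A": ["B"], "B": ["A", "C"], "C": ["A"]}  # hypothetical three-page web
r1 = pagerank(links, normalized=True)   # values sum to 1
rN = pagerank(links, normalized=False)  # values sum to N; each value is N * r1
```

Because the update rule is linear and the initial vectors differ by exactly the factor N, the two runs stay in that ratio throughout.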
Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents.
The formula uses a model of a random surfer who reaches their target site after several clicks, then switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions are the links between pages, all of which are equally probable.
If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. If the random surfer arrives at a sink page, it picks another URL at random and continues surfing again.
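The random-surfer process itself can be simulated directly. A Monte-Carlo sketch on a made-up four-page graph, where a surfer at a sink page (or a surfer who stops clicking) jumps to a random URL:

```python
import random

# Hypothetical four-page web; D is a sink with no outbound links.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": []}
pages = list(links)
d = 0.85  # probability of continuing to follow links

random.seed(42)
visits = {p: 0 for p in pages}
page = random.choice(pages)
for _ in range(200_000):
    visits[page] += 1
    if links[page] and random.random() < d:
        page = random.choice(links[page])  # follow a random outbound link
    else:
        page = random.choice(pages)        # sink or stopped: jump to a random page
rank = {p: n / 200_000 for p, n in visits.items()}
```

The visit frequencies approximate the PageRank values; with this graph, D (reachable only via the random jump) ends up ranked far below the linked pages.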
When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web. The damping factor d is usually set to 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature. So, the equation is as follows:
{\displaystyle PR(p_{i})={\frac {1-d}{N}}+d\sum _{p_{j}\in M(p_{i})}{\frac {PR(p_{j})}{L(p_{j})}},}
where {\displaystyle p_{1},p_{2},\ldots ,p_{N}} are the pages under consideration, {\displaystyle M(p_{i})} is the set of pages that link to {\displaystyle p_{i}}, {\displaystyle L(p_{j})} is the number of outbound links on page {\displaystyle p_{j}}, and {\displaystyle N} is the total number of pages.
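This equation can be implemented directly. A sketch in which sink pages are treated as linking to every page, as described earlier (the graph is hypothetical):

```python
def pagerank(links, d=0.85, tol=1e-8, max_iters=200):
    """Iterative PageRank over {page: [outbound links]} (sketch).
    Sink pages spread their rank evenly over every page."""
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for iteration in range(1, max_iters + 1):
        sink_mass = sum(rank[p] for p in links if not links[p])
        new = {p: (1 - d) / n + d * (sink_mass / n +
                   sum(rank[q] / len(links[q]) for q in links if p in links[q]))
               for p in links}
        if sum(abs(new[p] - rank[p]) for p in links) < tol:
            return new, iteration
        rank = new
    return rank, max_iters

# Hypothetical graph; D is a sink.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": []}
rank, iters = pagerank(links)
```

Redistributing the sink mass keeps the values a probability distribution (they sum to one at every iteration), and convergence to the stated tolerance takes well under the iteration cap on a graph this small.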
The PageRank values are the entries of the dominant right eigenvector of the modified adjacency matrix rescaled so that each column adds up to one. This makes PageRank a particularly elegant metric: the eigenvector is
{\displaystyle \mathbf {R} ={\begin{bmatrix}PR(p_{1})\\PR(p_{2})\\\vdots \\PR(p_{N})\end{bmatrix}},}
where R is the solution of the equation
{\displaystyle \mathbf {R} ={\begin{bmatrix}(1-d)/N\\(1-d)/N\\\vdots \\(1-d)/N\end{bmatrix}}+d{\begin{bmatrix}\ell (p_{1},p_{1})&\ell (p_{1},p_{2})&\cdots &\ell (p_{1},p_{N})\\\ell (p_{2},p_{1})&\ddots &&\vdots \\\vdots &&\ell (p_{i},p_{j})&\\\ell (p_{N},p_{1})&\cdots &&\ell (p_{N},p_{N})\end{bmatrix}}\mathbf {R} ,}
where the adjacency function {\displaystyle \ell (p_{i},p_{j})} is the ratio of the number of links outbound from page j to page i to the total number of outbound links of page j. The adjacency function is 0 if page {\displaystyle p_{j}} does not link to {\displaystyle p_{i}}, and normalized such that, for each j,
{\displaystyle \sum _{i=1}^{N}\ell (p_{i},p_{j})=1,}
i.e. the elements of each column sum up to 1, so the matrix is astochastic matrix(for more details see thecomputationsection below). Thus this is a variant of theeigenvector centralitymeasure used commonly innetwork analysis.
Because of the large eigengap of the modified adjacency matrix above,[33] the values of the PageRank eigenvector can be approximated to within a high degree of accuracy within only a few iterations.
Google's founders, in their original paper,[31] reported that the PageRank algorithm for a network consisting of 322 million links (in-edges and out-edges) converges to within a tolerable limit in 52 iterations. The convergence in a network of half the above size took approximately 45 iterations. Through this data, they concluded the algorithm can be scaled very well and that the scaling factor for extremely large networks would be roughly linear in {\displaystyle \log n}, where n is the size of the network.
As a result of Markov theory, it can be shown that the PageRank of a page is the probability of arriving at that page after a large number of clicks. This happens to equal {\displaystyle t^{-1}} where {\displaystyle t} is the expectation of the number of clicks (or random jumps) required to get from the page back to itself.
One main disadvantage of PageRank is that it favors older pages. A new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such as Wikipedia).
Several strategies have been proposed to accelerate the computation of PageRank.[34]
Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept,[citation needed] which purports to determine which documents are actually highly valued by the Web community.
Since December 2007, when it started actively penalizing sites selling paid text links, Google has combatted link farms and other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets.
PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as the power iteration method[35][36] or the power method. The basic mathematical operations performed are identical.
At {\displaystyle t=0}, an initial probability distribution is assumed, usually
{\displaystyle PR(p_{i};0)={\frac {1}{N}},}
where N is the total number of pages, and {\displaystyle PR(p_{i};0)} is the PageRank of page i at time 0.
At each time step, the computation, as detailed above, yields
{\displaystyle PR(p_{i};t+1)={\frac {1-d}{N}}+d\sum _{p_{j}\in M(p_{i})}{\frac {PR(p_{j};t)}{L(p_{j})}},}
where d is the damping factor,
or in matrix notation
{\displaystyle \mathbf {R} (t+1)=d{\mathcal {M}}\mathbf {R} (t)+{\frac {1-d}{N}}\mathbf {1} ,\qquad (1)}
where {\displaystyle \mathbf {R} _{i}(t)=PR(p_{i};t)} and {\displaystyle \mathbf {1} } is the column vector of length {\displaystyle N} containing only ones.
The matrix {\displaystyle {\mathcal {M}}} is defined as
{\displaystyle {\mathcal {M}}_{ij}={\begin{cases}1/L(p_{j}),&{\mbox{if }}p_{j}{\mbox{ links to }}p_{i}\\0,&{\mbox{otherwise}}\end{cases}}}
i.e.,
{\displaystyle {\mathcal {M}}:=(K^{-1}A)^{T},}
where {\displaystyle A} denotes the adjacency matrix of the graph and {\displaystyle K} is the diagonal matrix with the outdegrees in the diagonal.
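Constructing {\displaystyle {\mathcal {M}}} from an adjacency matrix is mechanical; a pure-Python sketch on a hypothetical three-page graph, with a check that the result is column-stochastic:

```python
# Adjacency matrix A: A[j][i] = 1 if page j links to page i.
A = [
    [0, 1, 1],  # page 0 links to pages 1 and 2
    [0, 0, 1],  # page 1 links to page 2
    [1, 0, 0],  # page 2 links to page 0
]
n = len(A)
outdeg = [sum(row) for row in A]  # the diagonal of K

# M = (K^-1 A)^T: M[i][j] = 1/L(p_j) if page j links to page i, else 0.
M = [[A[j][i] / outdeg[j] if outdeg[j] else 0.0 for j in range(n)]
     for i in range(n)]

# Each column of M sums to 1 (every page here has outbound links).
col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
```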
The probability calculation is made for each page at a time point, then repeated for the next time point. The computation ends when for some small {\displaystyle \epsilon }
{\displaystyle |\mathbf {R} (t+1)-\mathbf {R} (t)|<\epsilon ,\qquad (2)}
i.e., when convergence is assumed.
If the matrix {\displaystyle {\mathcal {M}}} is a transition probability, i.e., column-stochastic, and {\displaystyle \mathbf {R} } is a probability distribution (i.e., {\displaystyle |\mathbf {R} |=1} and {\displaystyle \mathbf {E} \mathbf {R} =\mathbf {1} }, where {\displaystyle \mathbf {E} } is the matrix of all ones), then equation (2) is equivalent to
{\displaystyle \mathbf {R} =\left(d{\mathcal {M}}+{\frac {1-d}{N}}\mathbf {E} \right)\mathbf {R} =:{\widehat {\mathcal {M}}}\mathbf {R} .\qquad (3)}
Hence PageRank {\displaystyle \mathbf {R} } is the principal eigenvector of {\displaystyle {\widehat {\mathcal {M}}}}. A fast and easy way to compute this is using the power method: starting with an arbitrary vector {\displaystyle x(0)}, the operator {\displaystyle {\widehat {\mathcal {M}}}} is applied in succession, i.e.,
{\displaystyle x(t+1)={\widehat {\mathcal {M}}}x(t),}
until
{\displaystyle |x(t+1)-x(t)|<\epsilon .}
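A minimal power-method sketch (the column-stochastic matrix M is invented for illustration), repeatedly applying {\displaystyle {\widehat {\mathcal {M}}}} to a starting probability vector:

```python
d, n = 0.85, 3
# Hypothetical column-stochastic link matrix: page 0 -> 1, 2; page 1 -> 2; page 2 -> 0.
M = [[0.0, 0.0, 1.0],
     [0.5, 0.0, 0.0],
     [0.5, 1.0, 0.0]]

def apply_m_hat(x):
    # M_hat x = d*M x + ((1-d)/n)*E x; E x is the all-ones vector when sum(x) == 1.
    return [d * sum(M[i][j] * x[j] for j in range(n)) + (1 - d) / n
            for i in range(n)]

x = [1.0 / n] * n  # arbitrary starting probability vector
for _ in range(200):
    x = apply_m_hat(x)
```

After enough applications, x is (numerically) a fixed point of the operator, i.e. the principal eigenvector.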
Note that in equation (3) the matrix on the right-hand side in the parenthesis can be interpreted as
{\displaystyle {\frac {1-d}{N}}\mathbf {E} =(1-d)\mathbf {P} \mathbf {1} ^{t},}
where {\displaystyle \mathbf {P} } is an initial probability distribution. In the current case
{\displaystyle \mathbf {P} :={\frac {1}{N}}\mathbf {1} .}
Finally, if {\displaystyle {\mathcal {M}}} has columns with only zero values, they should be replaced with the initial probability vector {\displaystyle \mathbf {P} }. In other words,
{\displaystyle {\mathcal {M}}^{\prime }:={\mathcal {M}}+{\mathcal {D}},}
where the matrix {\displaystyle {\mathcal {D}}} is defined as
{\displaystyle {\mathcal {D}}:=\mathbf {P} \mathbf {D} ^{t},}
with
{\displaystyle \mathbf {D} _{i}={\begin{cases}1,&{\mbox{if }}L(p_{i})=0\\0,&{\mbox{otherwise}}\end{cases}}}
In this case, the above two computations using {\displaystyle {\mathcal {M}}} only give the same PageRank if their results are normalized:
{\displaystyle \mathbf {R} _{\textrm {power}}={\frac {\mathbf {R} _{\textrm {iterative}}}{|\mathbf {R} _{\textrm {iterative}}|}}={\frac {\mathbf {R} _{\textrm {algebraic}}}{|\mathbf {R} _{\textrm {algebraic}}|}}.}
The PageRank of an undirected graph {\displaystyle G} is statistically close to the degree distribution of the graph {\displaystyle G},[37] but they are generally not identical: If {\displaystyle R} is the PageRank vector defined above, and {\displaystyle D} is the degree distribution vector
{\displaystyle D={1 \over 2|E|}{\begin{bmatrix}\deg(p_{1})\\\deg(p_{2})\\\vdots \\\deg(p_{N})\end{bmatrix}},}
where {\displaystyle \deg(p_{i})} denotes the degree of vertex {\displaystyle p_{i}} and {\displaystyle E} is the edge-set of the graph, then, with {\displaystyle Y={1 \over N}\mathbf {1} }, reference [38] shows that:
1−d1+d‖Y−D‖1≤‖R−D‖1≤‖Y−D‖1,{\displaystyle {1-d \over 1+d}\|Y-D\|_{1}\leq \|R-D\|_{1}\leq \|Y-D\|_{1},}
that is, the PageRank of an undirected graph equals the degree distribution vector if and only if the graph is regular, i.e., every vertex has the same degree.
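The regular-graph case is easy to check numerically: on an undirected 4-cycle, every vertex has degree 2, so the degree distribution is uniform and iterating PageRank reproduces it exactly. A sketch (the helper below is a minimal iterative PageRank, not a library function):

```python
def pagerank(links, d=0.85, iters=100):
    # Minimal iterative PageRank for a graph with no sinks (sketch).
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for _ in range(iters):
        rank = {p: (1 - d) / n + d * sum(rank[q] / len(links[q])
                                         for q in links if p in links[q])
                for p in links}
    return rank

# Undirected 4-cycle 0-1-2-3-0, encoded as symmetric directed edges.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rank = pagerank(cycle)
# Degree distribution D = (2, 2, 2, 2) / (2 * 4 edges) = 0.25 per vertex; PageRank agrees.
```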
A generalization of PageRank for the case of ranking two interacting groups of objects was described by Daugulis.[39] In applications it may be necessary to model systems having objects of two kinds where a weighted relation is defined on object pairs. This leads to considering bipartite graphs. For such graphs two related positive or nonnegative irreducible matrices corresponding to vertex partition sets can be defined. One can compute rankings of objects in both groups as eigenvectors corresponding to the maximal positive eigenvalues of these matrices. Normed eigenvectors exist and are unique by the Perron or Perron–Frobenius theorem. Example: consumers and products. The relation weight is the product consumption rate.
Sarma et al. describe two random-walk-based distributed algorithms for computing the PageRank of nodes in a network.[40] One algorithm takes {\displaystyle O(\log n/\epsilon )} rounds with high probability on any graph (directed or undirected), where n is the network size and {\displaystyle \epsilon } is the reset probability ({\displaystyle 1-\epsilon } is called the damping factor) used in the PageRank computation. They also present a faster algorithm that takes {\displaystyle O({\sqrt {\log n}}/\epsilon )} rounds in undirected graphs. In both algorithms, each node processes and sends a number of bits per round that are polylogarithmic in n, the network size.
The Google Toolbar long had a PageRank feature which displayed a visited page's PageRank as a whole number between 0 (least popular) and 10 (most popular). Google had not disclosed the specific method for determining a Toolbar PageRank value, which was to be considered only a rough indication of the value of a website. The "Toolbar Pagerank" was available for verified site maintainers through the Google Webmaster Tools interface. However, on October 15, 2009, a Google employee confirmed that the company had removed PageRank from its Webmaster Tools section, saying that "We've been telling people for a long time that they shouldn't focus on PageRank so much. Many site owners seem to think it's the most important metric for them to track, which is simply not true."[41]
The "Toolbar Pagerank" was updated very infrequently. It was last updated in November 2013. In October 2014 Matt Cutts announced that another visible PageRank update would not be coming.[42] In March 2016 Google announced it would no longer support this feature, and that the underlying API would soon cease to operate.[43] On April 15, 2016, Google turned off display of PageRank data in the Google Toolbar,[44] though PageRank continued to be used internally to rank content in search results.[45]
The search engine results page (SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets, paid ads, featured snippets, and Q&A. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200).[46][unreliable source?] Search engine optimization (SEO) is aimed at influencing the SERP rank for a website or a set of web pages.
Positioning of a webpage on Google SERPs for a keyword depends on relevance and reputation, also known as authority and popularity. PageRank is Google's indication of its assessment of the reputation of a webpage: It is non-keyword specific. Google uses a combination of webpage and website authority to determine the overall authority of a webpage competing for a keyword.[47]The PageRank of the HomePage of a website is the best indication Google offers for website authority.[48]
After the introduction of Google Places into the mainstream organic SERP, numerous other factors in addition to PageRank affect ranking a business in Local Business Results.[49] When Google elaborated on the reasons for PageRank deprecation at Q&A #March 2016, they announced Links and Content as the top ranking factors. RankBrain had earlier, in October 2015, been announced as the #3 ranking factor, so the top three factors have been confirmed officially by Google.[50]
The Google Directory PageRank was an 8-unit measurement. Unlike the Google Toolbar, which shows a numeric PageRank value upon mouseover of the green bar, the Google Directory only displayed the bar, never the numeric values. Google Directory was closed on July 20, 2011.[51]
It was known that the PageRank shown in the Toolbar could easily be spoofed. Redirection from one page to another, either via an HTTP 302 response or a "Refresh" meta tag, caused the source page to acquire the PageRank of the destination page. Hence, a new page with PR 0 and no incoming links could have acquired PR 10 by redirecting to the Google home page. Spoofing can usually be detected by performing a Google search for a source URL; if the URL of an entirely different site is displayed in the results, the latter URL may represent the destination of a redirection.
For search engine optimization purposes, some companies offer to sell high-PageRank links to webmasters.[52] As links from higher-PR pages are believed to be more valuable, they tend to be more expensive. It can be an effective and viable marketing strategy to buy link advertisements on content pages of quality and relevant sites to drive traffic and increase a webmaster's link popularity. However, Google has publicly warned webmasters that if they are or were discovered to be selling links for the purpose of conferring PageRank and reputation, their links will be devalued (ignored in the calculation of other pages' PageRanks). The practice of buying and selling links[53] is intensely debated across the webmaster community. Google advised webmasters to use the nofollow HTML attribute value on paid links. According to Matt Cutts, Google is concerned about webmasters who try to game the system, and thereby reduce the quality and relevance of Google search results.[52]
In 2019, Google announced two additional link attributes providing hints about which links to consider or exclude within Search: rel="ugc" as a tag for user-generated content, such as comments; and rel="sponsored" as a tag for advertisements or other types of sponsored content. Multiple rel values are also allowed; for example, rel="ugc sponsored" can be used to hint that the link came from user-generated content and is sponsored.[54]
Even though PageRank has become less important for SEO purposes, the existence of back-links from more popular websites continues to push a webpage higher up in search rankings.[55]
The "intelligent surfer" model posits a surfer who probabilistically hops from page to page depending on the content of the pages and the query terms the surfer is looking for. This model is based on a query-dependent PageRank score of a page which, as the name suggests, is also a function of the query. When given a multiple-term query {\displaystyle Q=\{q_{1},q_{2},\cdots \}}, the surfer selects a term {\displaystyle q} according to some probability distribution {\displaystyle P(q)} and uses that term to guide its behavior for a large number of steps. It then selects another term according to the distribution to determine its behavior, and so on. The resulting distribution over visited web pages is QD-PageRank.[56]
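A toy version of this idea can be sketched by mixing "term-restricted" personalized PageRanks under P(q). This is a simplified sketch, not the QD-PageRank formulation of the cited work: the pages, the terms, and the uniform P(q) below are all invented for illustration.

```python
def term_pagerank(links, relevant, d=0.85, iters=200):
    """Personalized PageRank: random jumps land only on `relevant` pages (sketch)."""
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for _ in range(iters):
        rank = {p: (1 - d) / len(relevant) * (p in relevant) +
                   d * sum(rank[q] / len(links[q]) for q in links if p in links[q])
                for p in links}
    return rank

# Toy corpus: a link graph plus an index of which pages contain which terms.
links = {"A": ["B"], "B": ["A", "C"], "C": ["A"]}
contains = {"apple": {"A", "B"}, "pie": {"B", "C"}}

# Uniform P(q) over the query's terms; the query-dependent score mixes the
# per-term rankings weighted by P(q).
query = ["apple", "pie"]
per_term = {q: term_pagerank(links, contains[q]) for q in query}
qd = {p: sum(per_term[q][p] for q in query) / len(query) for p in links}
```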
The mathematics of PageRank are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It is used for systems analysis of road networks, and in biology, chemistry, neuroscience, and physics.[57]
PageRank has been used to quantify the scientific impact of researchers. The underlying citation and collaboration networks are used in conjunction with the PageRank algorithm to construct a ranking system for individual publications, which propagates to individual authors. The new index, known as the pagerank-index (Pi), is demonstrated to be fairer than the h-index, which exhibits several known drawbacks.[58]
PageRank is also a useful tool for the analysis of protein networks in biology.[59][60]
In any ecosystem, a modified version of PageRank may be used to determine species that are essential to the continuing health of the environment.[61]
A similar newer use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves).[62]
A version of PageRank has recently been proposed as a replacement for the traditional Institute for Scientific Information (ISI) impact factor,[63] and implemented at Eigenfactor as well as at SCImago. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion.
In neuroscience, the PageRank of a neuron in a neural network has been found to correlate with its relative firing rate.[64]
Personalized PageRank is used by Twitter to present users with other accounts they may wish to follow.[65]
Swiftype's site search product builds a "PageRank that's specific to individual websites" by looking at each website's signals of importance and prioritizing content based on factors such as number of links from the home page.[66]
AWeb crawlermay use PageRank as one of a number of importance metrics it uses to determine which URL to visit during a crawl of the web. One of the early working papers[67]that were used in the creation of Google isEfficient crawling through URL ordering,[68]which discusses the use of a number of different importance metrics to determine how deeply, and how much of a site Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed such as the number of inbound and outbound links for a URL, and the distance from the root directory on a site to the URL.
The PageRank may also be used as a methodology to measure the apparent impact of a community like theBlogosphereon the overall Web itself. This approach uses therefore the PageRank to measure the distribution of attention in reflection of theScale-free networkparadigm.[citation needed]
In 2005, in a pilot study in Pakistan, Structural Deep Democracy (SD2)[69][70] was used for leadership selection in a sustainable agriculture group called Contact Youth. SD2 uses PageRank to process transitive proxy votes, with the additional constraints that each voter must name at least two initial proxies and that all voters are proxy candidates. More complex variants can be built on top of SD2, such as adding specialist proxies and direct votes for specific issues, but SD2, as the underlying umbrella system, mandates that generalist proxies should always be used.
In sport the PageRank algorithm has been used to rank the performance of: teams in the National Football League (NFL) in the USA;[71] individual soccer players;[72] and athletes in the Diamond League.[73]
PageRank has been used to rank spaces or streets to predict how many people (pedestrians or vehicles) come to the individual spaces or streets.[74][75] In lexical semantics it has been used to perform Word Sense Disambiguation,[76] Semantic similarity,[77] and also to automatically rank WordNet synsets according to how strongly they possess a given semantic property, such as positivity or negativity.[78]
How a traffic system changes its operational mode can be described by transitions between quasi-stationary states in correlation structures of traffic flow. PageRank has been used to identify and explore the dominant states among these quasi-stationary states in traffic systems.[79]
In early 2005, Google implemented a new value, "nofollow",[80] for the rel attribute of HTML link and anchor elements, so that website developers and bloggers can make links that Google will not consider for the purposes of PageRank—they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combat spamdexing.
As an example, people could previously create many message-board posts with links to their website to artificially inflate their PageRank. With the nofollow value, message-board administrators can modify their code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts. This method of avoidance, however, also has various drawbacks, such as reducing the link value of legitimate comments. (See: Spam in blogs#nofollow)
In an effort to manually control the flow of PageRank among pages within a website, many webmasters practice what is known as PageRank Sculpting[81]—which is the act of strategically placing the nofollow attribute on certain internal links of a website in order to funnel PageRank towards those pages the webmaster deemed most important. This tactic had been used since the inception of the nofollow attribute, but may no longer be effective since Google announced that blocking PageRank transfer with nofollow does not redirect that PageRank to other links.[82]
|
https://en.wikipedia.org/wiki/Page_rank
|
Preference elicitation refers to the problem of developing a decision support system capable of generating recommendations to a user, thus assisting in decision making. It is important for such a system to model the user's preferences accurately, find hidden preferences, and avoid redundancy. This problem is sometimes studied as a computational learning theory problem. Another approach is to formulate it as a partially observable Markov decision process. The formulation of the problem also depends upon the context of the area in which it is studied.
The explosion of on-line information has generated new opportunities for finding and using electronic data, and these changes have also brought the task of eliciting useful information to the forefront. Researchers as well as major online catalog companies have come up with algorithms and prototypes of systems that can help a user navigate a complex and huge information space, using information from the user in the form of answers to certain queries, ratings of certain items, and so on, depending upon the domain of the information space.
|
https://en.wikipedia.org/wiki/Preference_elicitation
|
Psychographic filtering is located within a branch of collaborative filtering (user-based) which anticipates preferences based upon information received from a statistical survey, a questionnaire, or other forms of social research.[1] The term Psychographic is derived from Psychography, the study of associating and classifying people according to their psychological characteristics.[2] In marketing or social research, information received from a participant's response is compared with other participants' responses, and the comparison is designed to predict preferences based upon similarities or differences in perception.[3] The participant should be inclined to share perceptions with people who have similar preferences. Suggestions are then provided to the participant based on their predicted preferences. Psychographic filtering differs from collaborative filtering in that it classifies similar people into a specific psychographic profile, where predictions of preferences are based upon that psychographic profile type.[3] Examples of psychological characteristics which determine a psychographic profile are personality, lifestyle, value system, behavior, experience and attitude.
Research data is collected and analyzed through quantitative methods, yet the manner in which the questions are presented shares similarities with qualitative methods. Participants respond to questions offering perceived choice. The participants' choice is reflective of their psychological characteristics. This perceived choice (presented throughout the research method) is designed to score a participant and categorize that participant according to their respective score. The categories (psychographic profiles) used to assign people reflect personality characteristics which the researchers can analyze and use for their particular purposes.
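The score-then-categorize step described above can be sketched minimally as follows. The profiles, answer weights, thresholds, and suggestions are entirely invented for illustration; a real instrument would use validated scales across several psychological dimensions.

```python
# Hypothetical psychographic scoring: survey answers are scored, the total
# assigns the participant to a profile, and suggestions come from that profile.

PROFILES = {
    "thrill_seeker": {"min_score": 7, "suggests": ["horror novel", "skydiving class"]},
    "homebody":      {"min_score": 0, "suggests": ["cozy mystery", "baking kit"]},
}

# Each answer option carries an illustrative weight toward one dimension.
ANSWER_WEIGHTS = {"strongly_agree": 3, "agree": 2, "neutral": 1, "disagree": 0}

def classify(answers):
    score = sum(ANSWER_WEIGHTS[a] for a in answers)
    # Pick the most demanding profile whose threshold the score meets.
    name = max((p for p, spec in PROFILES.items() if score >= spec["min_score"]),
               key=lambda p: PROFILES[p]["min_score"])
    return name, PROFILES[name]["suggests"]

profile, suggestions = classify(["strongly_agree", "agree", "strongly_agree"])
print(profile, suggestions)
```

Predictions are then made from the profile type rather than from the individual's raw answers, which is exactly how psychographic filtering differs from plain collaborative filtering.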
Psychographic filtering and collaborative filtering are still at an experimental stage and therefore have not been extensively used.[3] The techniques are most effective when they are used to indicate preference for a single, constant item (e.g. a horror book written by one author) rather than recommending a composition of characteristics (e.g. a newspaper article on war) which varies in perspective from publisher to publisher.[3] For an item to be perceived in accordance with the psychographic profile, it must be defined within a specific category, as opposed to encompassing many categories (where many preferences overlap).[3] Major problems with this type of research are whether it can be applied to items which are constantly changing in scope and updated regularly, and whether people will participate sufficiently to create psychographic profiles.
|
https://en.wikipedia.org/wiki/Psychographic_filtering
|
A recommender system (RecSys), or a recommendation system (sometimes replacing system with terms such as platform, engine, or algorithm), sometimes called simply "the algorithm",[1] is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user.[2][3][4] Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer.[2][5] Modern recommendation systems, such as those used on large social media sites, make extensive use of AI, machine learning, and related techniques to learn the behavior and preferences of each user and tailor their feed accordingly.[6]
Typically, the suggestions refer to various decision-making processes, such as what product to purchase, what music to listen to, or what online news to read.[2] Recommender systems are used in a variety of areas, with commonly recognised examples taking the form of playlist generators for video and music services, product recommenders for online stores, or content recommenders for social media platforms and open web content recommenders.[7][8] These systems can operate using a single type of input, like music, or multiple inputs within and across platforms like news, books and search queries. There are also popular recommender systems for specific topics like restaurants and online dating. Recommender systems have also been developed to explore research articles and experts,[9] collaborators,[10] and financial services.[11]
A content discovery platform is an implemented software recommendation platform which uses recommender system tools. It utilizes user metadata in order to discover and recommend appropriate content, whilst reducing ongoing maintenance and development costs. A content discovery platform delivers personalized content to websites, mobile devices and set-top boxes. A large range of content discovery platforms currently exists for various forms of content ranging from news articles and academic journal articles[12] to television.[13] As operators compete to be the gateway to home entertainment, personalized television is a key service differentiator. Academic content discovery has recently become another area of interest, with several companies being established to help academic researchers keep up to date with relevant academic content and serendipitously discover new content.[12]
Recommender systems usually make use of either or both collaborative filtering and content-based filtering, as well as other systems such as knowledge-based systems. Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in.[14] Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item in order to recommend additional items with similar properties.[15]
The differences between collaborative and content-based filtering can be demonstrated by comparing two early music recommender systems, Last.fm and Pandora Radio.
Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user to make accurate recommendations. This is an example of the cold start problem, and is common in collaborative filtering systems.[17][18][19][20][21][22] Whereas Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed).
Recommender systems are a useful alternative to search algorithms since they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data.
Recommender systems have been the focus of several granted patents,[23][24][25][26][27] and there are more than 50 software libraries[28] that support the development of recommender systems, including LensKit,[29][30] RecBole,[31] ReChorus[32] and RecPack.[33]
Elaine Rich created the first recommender system in 1979, called Grundy.[34][35] She looked for a way to recommend books that users might like. Her idea was to create a system that asks users specific questions and classifies them into classes of preferences, or "stereotypes", depending on their answers. Depending on users' stereotype membership, they would then get recommendations for books they might like.
Another early recommender system, called a "digital bookshelf", was described in a 1990 technical report by Jussi Karlgren at Columbia University,[36] and implemented at scale and worked through in technical reports and publications from 1994 onwards by Jussi Karlgren, then at SICS,[37][38] and research groups led by Pattie Maes at MIT,[39] Will Hill at Bellcore,[40] and Paul Resnick, also at MIT,[41][5] whose work with GroupLens was awarded the 2010 ACM Software Systems Award.
Montaner provided the first overview of recommender systems from an intelligent agent perspective.[42] Adomavicius provided a new, alternate overview of recommender systems.[43] Herlocker provides an additional overview of evaluation techniques for recommender systems,[44] and Beel et al. discussed the problems of offline evaluations.[45] Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.[46][47]
One approach to the design of recommender systems that has wide use is collaborative filtering.[48] Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items to those they liked in the past. The system generates recommendations using only information about rating profiles for different users or items. By locating peer users/items with a rating history similar to the current user or item, they generate recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of memory-based approaches is the user-based algorithm,[49] while a well-known model-based approach is matrix factorization.[50]
A key advantage of the collaborative filtering approach is that it does not rely on machine analyzable content and therefore it is capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used in measuring user similarity or item similarity in recommender systems. For example, the k-nearest neighbor (k-NN) approach[51] and the Pearson correlation as first implemented by Allen.[52]
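A minimal sketch of memory-based, user-based collaborative filtering with Pearson similarity, using a toy rating dictionary. The mean-centered weighted prediction shown is one common variant of the neighborhood method, not the specific formulation of any cited system, and all names and ratings are invented.

```python
# User-based collaborative filtering: Pearson similarity over co-rated items,
# then a mean-centered weighted average to predict an unseen rating.
from math import sqrt

ratings = {
    "ann":  {"m1": 5, "m2": 3, "m3": 4},
    "ben":  {"m1": 4, "m2": 2, "m3": 5, "m4": 4},
    "cara": {"m1": 1, "m2": 5, "m4": 2},
}

def pearson(u, v):
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    mu = sum(ratings[u][i] for i in common) / len(common)
    mv = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu) * (ratings[v][i] - mv) for i in common)
    du = sqrt(sum((ratings[u][i] - mu) ** 2 for i in common))
    dv = sqrt(sum((ratings[v][i] - mv) ** 2 for i in common))
    return num / (du * dv) if du and dv else 0.0

def mean(u):
    return sum(ratings[u].values()) / len(ratings[u])

def predict(user, item):
    # Neighbors' mean-centered ratings, weighted by similarity.
    pairs = [(pearson(user, v), ratings[v][item] - mean(v))
             for v in ratings if v != user and item in ratings[v]]
    norm = sum(abs(s) for s, _ in pairs)
    return mean(user) + sum(s * d for s, d in pairs) / norm if norm else None

print(round(pearson("ann", "ben"), 3))  # positive: similar taste
print(round(predict("ann", "m4"), 2))
```

Note how the prediction never inspects item content: only the rating profiles matter, which is the advantage (and the cold-start weakness) described above.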
When building a model from a user's behavior, a distinction is often made between explicit and implicit forms of data collection.
Examples of explicit data collection include the following:
Examples of implicit data collection include the following:
Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity.[54]
One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.[56]
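The item-to-item idea can be sketched as simple co-purchase counting over shopping baskets. The baskets are invented, and Amazon's actual algorithm involves much more (similarity normalization, scalability engineering); this only illustrates the "bought x, also bought y" core.

```python
# Item-to-item collaborative filtering as co-purchase counting.
from collections import Counter
from itertools import combinations

baskets = [
    {"book", "lamp"},
    {"book", "lamp", "desk"},
    {"book", "desk"},
    {"lamp", "mug"},
]

# Count how often each ordered pair of items appears together in a basket.
co = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co[(a, b)] += 1
        co[(b, a)] += 1

def also_bought(item, k=2):
    pairs = [(other, n) for (x, other), n in co.items() if x == item]
    return [other for other, _ in sorted(pairs, key=lambda p: (-p[1], p[0]))][:k]

print(also_bought("book"))
```

Because the table is keyed by item pairs rather than by users, recommendations for an item can be precomputed once and served cheaply, which is why the item-to-item variant scales well.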
Many social networks originally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends.[2] Collaborative filtering is still used as part of hybrid systems.
Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences.[57][58] These methods are best suited to situations where there is known data on an item (name, location, description, etc.), but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on an item's features.
In this system, keywords are used to describe the items, and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items similar to those that a user liked in the past or is examining in the present. It does not rely on a user sign-in mechanism to generate this often temporary profile. In particular, various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research.
To create a user profile, the system mostly focuses on two types of information:
Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is the tf–idf representation (also called vector space representation).[59] The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vector, while other sophisticated methods use machine learning techniques such as Bayesian classifiers, cluster analysis, decision trees, and artificial neural networks in order to estimate the probability that the user is going to like the item.[60]
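A compact sketch of the simple approach described above: tf–idf item vectors, a user profile built as the average of liked-item vectors, and cosine similarity for matching. The documents are invented, and real systems add tokenization, stemming, stop-word removal, and learned weighting.

```python
# Content-based filtering with tf-idf item profiles and cosine similarity.
from math import log, sqrt

items = {
    "a1": "space probe orbit",
    "a2": "orbit launch rocket",
    "a3": "election vote poll",
}

docs = {i: t.split() for i, t in items.items()}
vocab = sorted({w for ws in docs.values() for w in ws})
df = {w: sum(w in ws for ws in docs.values()) for w in vocab}  # document freq.
N = len(docs)

def tfidf(words):
    # Term frequency times inverse document frequency, over a fixed vocabulary.
    return [words.count(w) / len(words) * log(N / df[w]) for w in vocab]

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

vecs = {i: tfidf(ws) for i, ws in docs.items()}
liked = ["a1"]  # items the user rated highly
profile = [sum(vecs[i][k] for i in liked) / len(liked) for k in range(len(vocab))]
ranked = sorted((i for i in vecs if i not in liked),
                key=lambda i: cosine(profile, vecs[i]), reverse=True)
print(ranked[0])  # the space article "a2", which shares "orbit" with "a1"
```

The shared term "orbit" gives "a2" a positive cosine with the profile, while the politics article "a3" shares nothing and scores zero.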
A key issue with content-based filtering is whether the system can learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on news browsing is useful. Still, it would be much more useful when music, videos, products, discussions, etc., from different services, can be recommended based on news browsing. To overcome this, most content-based recommender systems now use some form of the hybrid system.
Content-based recommender systems can also include opinion-based recommender systems. In some cases, users are allowed to leave text reviews or feedback on the items. These user-generated texts are implicit data for the recommender system because they are potentially rich resources of both features/aspects of the item and users' evaluation of, or sentiment toward, the item. Features extracted from user-generated reviews serve as improved metadata for items: like metadata, they reflect aspects of the item, and the extracted features tend to be the ones users care about most. Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender systems utilize various techniques including text mining, information retrieval, sentiment analysis (see also Multimodal sentiment analysis) and deep learning.[61]
Most recommender systems now use a hybrid approach, combining collaborative filtering, content-based filtering, and other approaches. There is no reason why several different techniques of the same type could not be hybridized. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model.[43] Several studies that empirically compared the performance of hybrid methods with pure collaborative and content-based methods demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems, such as cold start and the sparsity problem, as well as the knowledge engineering bottleneck in knowledge-based approaches.[62]
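The first hybridization route (combining separately computed predictions) can be sketched as a weighted sum of component scores. The scorers and weights here are placeholders; in practice the weights are tuned, or the combination itself is learned.

```python
# Weighted hybrid: combine collaborative and content-based scores per item.
# The component scores and the 0.7/0.3 split are illustrative stand-ins.

def hybrid_scores(collab, content, w_collab=0.7, w_content=0.3):
    items = set(collab) | set(content)
    return {i: w_collab * collab.get(i, 0.0) + w_content * content.get(i, 0.0)
            for i in items}

collab = {"m1": 0.9, "m2": 0.2}   # e.g. from a neighborhood CF model
content = {"m2": 0.8, "m3": 0.6}  # e.g. from tf-idf similarity
scores = hybrid_scores(collab, content)
best = max(scores, key=scores.get)
print(best)
```

Notice that the hybrid can still score "m3", which the collaborative component has never seen: the content side covers the cold-start gap, which is exactly the complementarity the text describes.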
Netflix is a good example of the use of hybrid recommender systems.[63] The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering).
Some hybridization techniques include:
These recommender systems use the interactions of a user within a session[65] to generate recommendations. Session-based recommender systems are used at YouTube[66] and Amazon.[67] These are particularly useful when the history (such as past clicks or purchases) of a user is not available or not relevant in the current user session. Domains where session-based recommendations are particularly relevant include video, e-commerce, travel, music and more. Most instances of session-based recommender systems rely on the sequence of recent interactions within a session without requiring any additional details (historical, demographic) of the user. Techniques for session-based recommendations are mainly based on generative sequential models such as recurrent neural networks,[65][68] transformers,[69] and other deep-learning-based approaches.[70][71]
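As a toy stand-in for the sequence models named above, the sketch below builds a first-order transition table over past sessions and recommends the most frequent next item. Real session-based systems use RNNs or transformers rather than this Markov-style counting, and the session data is invented.

```python
# Minimal session-based recommendation: first-order transition counts
# over past sessions predict the next item from the current one.
from collections import Counter, defaultdict

sessions = [
    ["home", "shoes", "socks"],
    ["home", "shoes", "laces"],
    ["home", "hats"],
]

transitions = defaultdict(Counter)
for s in sessions:
    for cur, nxt in zip(s, s[1:]):  # consecutive pairs within one session
        transitions[cur][nxt] += 1

def next_item(current):
    counts = transitions.get(current)
    return counts.most_common(1)[0][0] if counts else None

print(next_item("home"))  # "shoes": seen twice after "home", vs "hats" once
```

Like the deep models it imitates, this predictor uses only the within-session sequence, with no user identity or long-term history.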
The recommendation problem can be seen as a special instance of a reinforcement learning problem, whereby the user is the environment on which the agent (the recommendation system) acts in order to receive a reward, for instance a click or engagement by the user.[66][72][73] One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. In contrast to traditional learning techniques, which rely on less flexible supervised learning approaches, reinforcement learning recommendation techniques make it possible to train models that are optimized directly on metrics of engagement and user interest.[74]
Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information upon multiple criteria. Instead of developing recommendation techniques based on a single criterion value, the overall preference of user u for the item i, these systems try to predict a rating for unexplored items of u by exploiting preference information on multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems.[75] See this chapter[76] for an extended introduction.
The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, yet do not take into account the risk of disturbing the user with unwanted notifications. It is important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated the risk into the recommendation process. One option to manage this issue is DRARS, a system which models context-aware recommendation as a bandit problem. This system combines a content-based technique and a contextual bandit algorithm.[77]
Mobile recommender systems make use of internet-accessing smartphones to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research, as mobile data is more complex than the data recommender systems often have to deal with: it is heterogeneous and noisy, exhibits spatial and temporal auto-correlation, and has validation and generality problems.[78]
There are three factors that could affect the mobile recommender systems and the accuracy of prediction results: the context, the recommendation method and privacy.[79] Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where all of the ingredients may not be available).
One example of a mobile recommender system is the approach taken by companies such as Uber and Lyft to generate driving routes for taxi drivers in a city.[78] This system uses GPS data of the routes that taxi drivers take while working, which includes location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits.
Generative recommenders (GR) represent an approach that transforms recommendation tasks into sequential transduction problems, where user actions are treated like tokens in a generative modeling framework. In one method, known as HSTU (Hierarchical Sequential Transduction Units),[80] high-cardinality, non-stationary, and streaming datasets are efficiently processed as sequences, enabling the model to learn from trillions of parameters and to handle user action histories orders of magnitude longer than before. By turning all of the system's varied data into a single stream of tokens and using a custom self-attention approach instead of traditional neural network layers, generative recommenders make the model much simpler and less memory-hungry. As a result, it can improve recommendation quality in test simulations and in real-world tests, while being faster than previous Transformer-based systems when handling long lists of user actions. Ultimately, this approach allows the model's performance to grow steadily as more computing power is used, laying a foundation for efficient and scalable "foundation models" for recommendations.
One of the events that energized research in recommender systems was the Netflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was given to the BellKor's Pragmatic Chaos team using tiebreaking rules.[81]
The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction. As stated by the winners, Bell et al.:[82]
Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods.
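The quoted advice (blend substantially different predictors) can be illustrated with a tiny synthetic example in which two predictors with opposite errors blend to a lower error than either achieves alone. The data and the grid-searched convex blend are illustrative only, not the Netflix winners' method.

```python
# Blending two predictors: pick the convex weight that minimizes RMSE
# on a held-out set. All numbers are synthetic.
from math import sqrt

truth = [4.0, 3.0, 5.0, 2.0]
pred_a = [4.5, 2.5, 4.5, 2.5]   # one model: errors +0.5, -0.5, -0.5, +0.5
pred_b = [3.5, 3.5, 5.5, 1.5]   # another model: exactly opposite errors

def rmse(pred):
    return sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

def blend(w):
    return [w * a + (1 - w) * b for a, b in zip(pred_a, pred_b)]

# Grid-search the blending weight on the held-out truth.
best_w = min((w / 100 for w in range(101)), key=lambda w: rmse(blend(w)))
print(best_w, round(rmse(blend(best_w)), 3), round(rmse(pred_a), 3))
```

Because the two models' errors are anti-correlated, the even blend cancels them entirely; this is the extreme case of why "substantially different approaches" blend better than refinements of one technique.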
Many benefits accrued to the web due to the Netflix project. Some teams have taken their technology and applied it to other markets. Some members from the team that finished second place founded Gravity R&D, a recommendation engine that's active in the RecSys community.[81][83] 4-Tell, Inc. created a Netflix project–derived solution for ecommerce websites.
A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database (IMDb).[84] As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and the Video Privacy Protection Act by releasing the datasets.[85] This, as well as concerns from the Federal Trade Commission, led to the cancellation of a second Netflix Prize competition in 2010.[86]
Evaluation is important in assessing the effectiveness of recommendation algorithms. To measure the effectiveness of recommender systems, and compare different approaches, three types of evaluations are available: user studies, online evaluations (A/B tests), and offline evaluations.[45]
The commonly used metrics are the mean squared error and root mean squared error, the latter having been used in the Netflix Prize. Information retrieval metrics such as precision and recall or DCG are useful to assess the quality of a recommendation method. Diversity, novelty, and coverage are also considered important aspects in evaluation.[87] However, many of the classic evaluation measures are highly criticized.[88]
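The metrics named above can be computed directly. The sketch below evaluates synthetic rating predictions with RMSE, and a synthetic ranked list with precision@k and DCG (binary relevance with a log2 position discount); all data is invented.

```python
# Common offline evaluation metrics: RMSE, precision@k, and DCG.
from math import log2, sqrt

# Rating prediction: compare predicted ratings against held-out actuals.
actual = [4.0, 3.0, 5.0]
predicted = [3.5, 3.0, 4.0]
rmse = sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Ranking quality: compare a ranked list against ground-truth relevant items.
ranked = ["i1", "i2", "i3", "i4"]   # system's ranking, best first
relevant = {"i1", "i3"}             # ground-truth relevant items

k = 3
precision_at_k = sum(i in relevant for i in ranked[:k]) / k
# DCG: each relevant item contributes 1 / log2(position + 2).
dcg = sum((i in relevant) / log2(pos + 2) for pos, i in enumerate(ranked))

print(round(rmse, 3), round(precision_at_k, 3), round(dcg, 3))
```

RMSE suits explicit-rating datasets like the Netflix Prize data, while precision@k and DCG suit top-k recommendation, where only the ranking of a short list matters.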
Evaluating the performance of a recommendation algorithm on a fixed test dataset will always be extremely challenging as it is impossible to accurately predict the reactions of real users to the recommendations. Hence any metric that computes the effectiveness of an algorithm in offline data will be imprecise.
User studies are rather small-scale: a few dozen or hundreds of users are presented with recommendations created by different recommendation approaches, and then the users judge which recommendations are best.
In A/B tests, recommendations are typically shown to thousands of users of a real product, and the recommender system randomly picks at least two different recommendation approaches to generate recommendations. The effectiveness is measured with implicit measures of effectiveness such as conversion rate or click-through rate.
Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies.[89]
The effectiveness of recommendation approaches is then measured based on how well a recommendation approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness. For instance, it may be assumed that an effective recommender system is one that recommends as many articles as possible that are contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers.[90][91][92][45] For instance, it has been shown that results of offline evaluations have low correlation with results from user studies or A/B tests.[92][93] A dataset popular for offline evaluation has been shown to contain duplicate data and thus to lead to wrong conclusions in the evaluation of algorithms.[94] Often, results of so-called offline evaluations do not correlate with actually assessed user satisfaction.[95] This is probably because offline training is highly biased toward the highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module.[90][96] Researchers have concluded that the results of offline evaluations should be viewed critically.[97]
Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of factors that are also important.
Recommender systems are notoriously difficult to evaluate offline, with some researchers claiming that this has led to a reproducibility crisis in recommender systems publications. The topic of reproducibility seems to be a recurrent issue in some machine learning publication venues, but does not have a considerable effect beyond the world of scientific publication. In the context of recommender systems, a 2019 paper surveyed a small number of hand-picked publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), and showed that on average less than 40% of the articles could be reproduced by the authors of the survey, with as little as 14% in some conferences. The article considers a number of potential problems in today's research scholarship and suggests improved scientific practices in that area.[110][111][112] More recent work on benchmarking a set of the same methods came to qualitatively very different results,[113] whereby neural methods were found to be among the best performing methods. Deep learning and neural methods for recommender systems have been used in the winning solutions in several recent recommender system challenges, such as WSDM[114] and the RecSys Challenge.[115] Moreover, neural and deep learning methods are widely used in industry where they are extensively tested.[116][66][67] The topic of reproducibility is not new in recommender systems. By 2011, Ekstrand, Konstan, et al. criticized that "it is currently difficult to reproduce and extend recommender systems research results," and that evaluations are "not handled consistently".[117] Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge [...] often because the research lacks the [...]
evaluation to be properly judged and, hence, to provide meaningful contributions."[118]As a consequence, much research about recommender systems can be considered as not reproducible.[119]Hence, operators of recommender systems find little guidance in the current research for answering the question, which recommendation approaches to use in a recommender systems.SaidandBellogínconducted a study of papers published in the field, as well as benchmarked some of the most popular frameworks for recommendation and found large inconsistencies in results, even when the same algorithms and data sets were used.[120]Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation:[119]"(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research."
Artificial intelligence (AI) applications in recommendation systems are advanced methodologies that leverage AI technologies to enhance the performance of recommendation engines. AI-based recommenders can analyze complex data sets, learning from user behavior, preferences, and interactions to generate highly accurate and personalized content or product suggestions.[121] The integration of AI in recommendation systems has marked a significant evolution from traditional recommendation methods. Traditional methods often relied on inflexible algorithms that could suggest items based on general user trends or apparent similarities in content. In comparison, AI-powered systems can detect patterns and subtle distinctions that traditional methods may overlook.[122] These systems can adapt to specific individual preferences, thereby offering recommendations that are more aligned with individual user needs. This approach marks a shift towards more personalized, user-centric suggestions.
Recommendation systems widely adopt AI techniques such as machine learning, deep learning, and natural language processing.[123] These advanced methods enhance system capabilities to predict user preferences and deliver personalized content more accurately. Each technique contributes uniquely. The following sections introduce specific AI models utilized by recommendation systems, illustrating their theories and functionalities.[citation needed]
Collaborative filtering (CF) is one of the most commonly used recommendation system algorithms. It generates personalized suggestions by forming predictions from users' explicit or implicit behavioral patterns.[124] Specifically, it relies on external feedback such as star ratings and purchasing history to make judgments. CF makes predictions about users' preferences based on similarity measurements. Essentially, the underlying theory is: "if user A is similar to user B, and if A likes item C, then it is likely that B also likes item C."
There are many models available for collaborative filtering. For AI-applied collaborative filtering, a common model is calledK-nearest neighbors. The ideas are as follows:
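The idea can be sketched with a tiny user-based k-nearest-neighbors predictor (a minimal illustration with made-up ratings, not any particular production algorithm; the matrix, `k`, and the choice of cosine similarity are all assumptions):

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = items, 0 = unrated (invented data).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)

def cosine(u, v):
    """Cosine similarity over the items both users rated."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(R, user, item, k=2):
    """Predict R[user, item] from the k most similar users who rated the item."""
    sims = [(cosine(R[user], R[v]), v) for v in range(len(R))
            if v != user and R[v, item] > 0]
    sims.sort(reverse=True)
    top = sims[:k]
    num = sum(s * R[v, item] for s, v in top)
    den = sum(abs(s) for s, v in top)
    return num / den if den else 0.0

pred = predict(R, user=1, item=1)  # estimate user 1's rating of item 1
```

The prediction is simply a similarity-weighted average of the neighbors' ratings; replacing cosine with Pearson correlation, or mean-centering the ratings first, gives common variants of the same scheme.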
An artificial neural network (ANN) is a deep learning model structure that aims to mimic a human brain. ANNs comprise a series of neurons, each responsible for receiving and processing information transmitted from other interconnected neurons.[125] Similar to a human brain, these neurons change activation state based on incoming signals (training input and backpropagated output), allowing the system to adjust activation weights during the network learning phase. An ANN is usually designed to be a black-box model. Unlike regular machine learning, where the underlying theoretical components are formal and rigid, the collaborative effects of neurons are not entirely clear, but modern experiments have shown the predictive power of ANNs.
ANNs are widely used in recommendation systems for their ability to exploit diverse data. Beyond feedback data, an ANN can incorporate non-feedback data that is too intricate for collaborative filtering to learn, and its structure allows it to identify extra signals from non-feedback data to boost user experience.[123] Following are some examples:
The Two-Tower model is a neural architecture[126] commonly employed in large-scale recommendation systems, particularly for candidate retrieval tasks.[127] It consists of two neural networks:
The outputs of the two towers are fixed-length embeddings that represent users and items in a shared vector space. A similarity metric, such as dot product or cosine similarity, is used to measure relevance between a user and an item.
This model is highly efficient for large datasets as embeddings can be pre-computed for items, allowing rapid retrieval during inference. It is often used in conjunction with ranking models for end-to-end recommendation pipelines.
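A minimal sketch of the idea, with two tiny NumPy "towers" (the layer sizes, random weights, and feature dimensions are invented for illustration; a real system would train the weights on interaction data):

```python
import numpy as np

rng = np.random.default_rng(0)

def tower(x, W1, W2):
    """A tiny two-layer 'tower': feature vector -> fixed-length embedding."""
    h = np.maximum(0.0, x @ W1)            # ReLU hidden layer
    e = h @ W2
    return e / (np.linalg.norm(e) + 1e-9)  # L2-normalize the embedding

# Hypothetical sizes: 8 user features, 6 item features, 4-d shared embedding space.
Wu1, Wu2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
Wi1, Wi2 = rng.normal(size=(6, 16)), rng.normal(size=(16, 4))

# Item embeddings can be precomputed once for the whole catalogue...
items = rng.normal(size=(100, 6))
item_emb = np.stack([tower(x, Wi1, Wi2) for x in items])

# ...so retrieval for a user is a single matrix-vector product.
user_emb = tower(rng.normal(size=8), Wu1, Wu2)
scores = item_emb @ user_emb           # cosine similarity (both sides normalized)
top10 = np.argsort(scores)[::-1][:10]  # candidate set handed to the ranking stage
```

The key efficiency property is visible in the last four lines: only the user tower runs at query time, while the item side reduces to a precomputed matrix (or an approximate nearest-neighbor index over it).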
Natural language processing is a family of AI algorithms that make natural human language accessible and analyzable to machines.[128] It is a fairly modern technique inspired by the growing amount of textual information. A common application in recommendation systems is the Amazon customer review: Amazon analyzes the feedback comments from each customer and reports relevant data to other customers for reference. Recent years have witnessed the development of various text analysis models, including latent semantic analysis (LSA), singular value decomposition (SVD), latent Dirichlet allocation (LDA), etc. Their uses have consistently aimed to provide customers with more precise and tailored recommendations.
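As a toy illustration of one of these models, the following sketch applies LSA, i.e. a truncated SVD of a term-document count matrix, to an invented corpus (the vocabulary and documents are made up):

```python
import numpy as np

# Toy term-document count matrix (rows = terms, columns = documents).
# Documents 0-1 are about sea travel, documents 2-3 about reviews (invented data).
terms = ["boat", "ocean", "voyage", "rating", "review"]
A = np.array([
    [2, 1, 0, 0],   # boat
    [1, 2, 0, 0],   # ocean
    [1, 1, 0, 1],   # voyage
    [0, 0, 2, 1],   # rating
    [0, 0, 1, 2],   # review
], dtype=float)

# Rank-2 truncated SVD: each document becomes a point in a 2-d latent "topic" space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_vecs = (np.diag(s[:2]) @ Vt[:2]).T   # one 2-d vector per document

def sim(i, j):
    """Cosine similarity of two documents in the latent space."""
    a, b = doc_vecs[i], doc_vecs[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Documents about the same topic end up close together in the latent space, which is what lets an LSA-based recommender match a user's text to related items even without exact word overlap.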
An emerging market for content discovery platforms is academic content.[129][130] Approximately 6,000 academic journal articles are published daily, making it increasingly difficult for researchers to balance time management with staying up to date with relevant research.[12] Though traditional academic search tools such as Google Scholar or PubMed provide a readily accessible database of journal articles, content recommendation in these cases is performed in a 'linear' fashion, with users setting 'alarms' for new publications based on keywords, journals or particular authors.
Google Scholar provides an 'Updates' tool that suggests articles by using a statistical model that takes a researcher's authored papers and citations as input.[12] Whilst these recommendations have been noted to be extremely good, this poses a problem for early-career researchers, who may lack a sufficient body of work to produce accurate recommendations.[12]
In contrast to an engagement-based ranking system employed by social media and other digital platforms, a bridging-based ranking optimizes for content that is unifying instead of polarizing.[131][132] Examples include Polis and Remesh, which have been used around the world to help find more consensus around specific political issues.[132] Twitter has also used this approach for managing its community notes,[133] which YouTube planned to pilot in 2024.[134][135] Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empowering deliberative groups that are representative of the platform's users to control the design and implementation of the algorithm.[136]
As the connected television landscape continues to evolve, search and recommendation are seen as having an even more pivotal role in the discovery of content.[137] With broadband-connected devices, consumers are projected to have access to content from linear broadcast sources as well as internet television. Therefore, there is a risk that the market could become fragmented, leaving viewers to visit various locations to find what they want to watch, in a way that is time-consuming and complicated for them. By using a search and recommendation engine, viewers are provided with a central 'portal' from which to discover content from several sources in just one location.
|
https://en.wikipedia.org/wiki/Recommendation_system
|
A reputation system is a program or algorithm that allows users of an online community to rate each other in order to build trust through reputation. Some common uses of these systems can be found on e-commerce websites such as eBay, Amazon.com, and Etsy, as well as on online advice communities such as Stack Exchange.[1] These reputation systems represent a significant trend in "decision support for Internet mediated service provisions".[2] With the popularity of online communities for shopping, advice, and exchange of other important information, reputation systems are becoming vitally important to the online experience. The idea of reputation systems is that even if consumers can't physically try a product or service, or see the person providing information, they can be confident in the outcome of the exchange through trust built by recommender systems.[2]
Collaborative filtering, used most commonly in recommender systems, is related to reputation systems in that both collect ratings from members of a community.[2] The core difference between reputation systems and collaborative filtering is the way in which they use user feedback. In collaborative filtering, the goal is to find similarities between users in order to recommend products to customers. The role of reputation systems, in contrast, is to gather a collective opinion in order to build trust between users of an online community.
Howard Rheingold states that online reputation systems are "computer-based technologies that make it possible to manipulate in new and powerful ways an old and essential human trait".[3] Rheingold says that these systems arose as a result of the need for Internet users to gain trust in the individuals they transact with online. The trait he notes in human groups is that social functions such as gossip "keeps us up to date on who to trust, who other people trust, who is important, and who decides who is important". Internet sites such as eBay and Amazon, he argues, seek to make use of this social trait and are "built around the contributions of millions of customers, enhanced by reputation systems that police the quality of the content and transactions exchanged through the site".
The emerging sharing economy increases the importance of trust in peer-to-peer marketplaces and services.[4] Users can build up reputation and trust in individual systems but usually don't have the ability to carry those reputations to other systems. Rachel Botsman and Roo Rogers argue in their book What's Mine is Yours (2010)[5] that "it is only a matter of time before there is some form of network that aggregates reputation capital across multiple forms of Collaborative Consumption". These systems, often referred to as reputation banks, try to give users a platform to manage their reputation capital across multiple systems.
The main function of reputation systems is to build a sense of trust among users of online communities. As with brick-and-mortar stores, trust and reputation can be built through customer feedback. Paul Resnick of the Association for Computing Machinery describes three properties that are necessary for reputation systems to operate effectively.[2]
These three properties are critically important in building reliable reputations, and all revolve around one important element: user feedback. User feedback in reputation systems, whether it be in the form of comments, ratings, or recommendations, is a valuable piece of information. Without user feedback, reputation systems cannot sustain an environment of trust.
Eliciting user feedback can have three related problems.
Other pitfalls to effective reputation systems described by A. Josang et al. include change of identities and discrimination. Again these ideas tie back to the idea of regulating user actions in order to gain accurate and consistent user feedback. When analyzing different types of reputation systems it is important to look at these specific features in order to determine the effectiveness of each system.
The IETF proposed a protocol to exchange reputation data.[6] It was originally aimed at email applications, but it was subsequently developed as a general architecture for a reputation-based service, followed by an email-specific part.[7] However, the workhorse of email reputation remains DNSxLs, which do not follow that protocol.[8] Those specifications don't say how to collect feedback (in fact, the granularity of email-sending entities makes it impractical to collect feedback directly from recipients); they are only concerned with reputation query/response methods.
High reputation capital often confers benefits upon the holder. For example, a wide range of studies have found a positive correlation between seller rating and selling price on eBay,[10] indicating that a high reputation can help users obtain more money for their items. High product reviews on online marketplaces can also help drive higher sales volumes.
Abstract reputation can be used as a kind of resource, to be traded away for short-term gains or built up by investing effort. For example, a company with a good reputation may sell lower-quality products for higher profit until its reputation falls, or it may sell higher-quality products to increase its reputation.[11] Some reputation systems go further, making it explicitly possible to spend reputation within the system to derive a benefit. For example, on the Stack Overflow community, reputation points can be spent on question "bounties" to incentivize other users to answer the question.[12]
Even without an explicit spending mechanism in place, reputation systems often make it easier for users to spend their reputation without harming it excessively. For example, a ridesharing company driver with a high ride-acceptance score (a metric often used for driver reputation) may opt to be more selective about their clientele, decreasing their acceptance score but improving their driving experience. With the explicit feedback provided by the service, drivers can carefully manage their selectivity to avoid being penalized too heavily.
Reputation systems are in general vulnerable to attacks, and many types of attacks are possible.[13] Because a reputation system tries to generate an accurate assessment in the face of various factors, including an unpredictable user base and potentially adversarial environments, attacks and defense mechanisms play an important role in reputation systems.[14]
Attacks on reputation systems are classified by identifying which system components and design choices they target, while defense mechanisms are derived from existing reputation systems.
The capability of the attacker is determined by several characteristics, e.g., the location of the attacker related to the system (insider attacker vs. outsider attacker). An insider is an entity who has legitimate access to the system and can participate according to the system specifications, while an outsider is any unauthorized entity in the system who may or may not be identifiable.
Because outsider attacks resemble attacks in other computer system environments, insider attacks receive more attention in reputation-system research. Usually, there are some common assumptions: the attackers are motivated either by selfish or malicious intent, and the attackers can work either alone or in coalitions.
Attacks against reputation systems are classified based on the goals and methods of the attacker.
Here are some strategies to prevent the above attacks.[17]
|
https://en.wikipedia.org/wiki/Reputation_system
|
Robust collaborative filtering, or attack-resistant collaborative filtering, refers to algorithms or techniques that aim to make collaborative filtering more robust against efforts of manipulation, while hopefully maintaining recommendation quality. These efforts of manipulation usually take the form of shilling attacks, also called profile injection attacks. Collaborative filtering predicts a user's ratings of items by finding similar users and looking at their ratings, and because it is possible to create an almost unlimited number of user profiles in an online system, collaborative filtering becomes vulnerable when multiple fake profiles are introduced to the system. Several approaches have been suggested to improve the robustness of both model-based and memory-based collaborative filtering. However, robust collaborative filtering techniques are still an active research field, and major applications of them are yet to come.
One of the biggest challenges to collaborative filtering is shilling attacks: malicious users or a competitor may deliberately inject a certain number of fake profiles into the system (typically 1-5%) in such a way that they affect the recommendation quality, or even bias the predicted ratings to their advantage. Some of the main shilling attack strategies are random attacks, average attacks, bandwagon attacks, and segment-focused attacks.
Random attacks insert profiles that give random ratings to a subset of items; average attacks give the mean rating of each item.[1] Bandwagon and segment-focused attacks are newer and more sophisticated attack models. Bandwagon attack profiles give random ratings to a subset of items and the maximum rating to very popular items, in an effort to increase the chance that these fake profiles have many neighbors. A segment-focused attack is similar to a bandwagon attack, but it gives the maximum rating to items that are expected to be highly rated by the target user group, instead of to frequently rated items.[2]
In general, item-based collaborative filtering is known to be more robust than user-based collaborative filtering. However, item-based collaborative filtering is still not completely immune to bandwagon and segment attacks.
Robust collaborative filtering typically works as follows:
This is a detection method suggested by Gao et al. to make memory-based collaborative filtering more robust.[3] Some popular metrics used in collaborative filtering to measure user similarity are the Pearson correlation coefficient, interest similarity, and cosine distance (refer to Memory-based CF for definitions). A recommender system can detect attacks by exploiting the fact that the distributions of these metrics differ when there are spam users in the system. Because shilling attacks inject not just a single fake profile but a large number of similar fake profiles, these spam users will have unusually high similarity compared to normal users.
The entire system works like this: given a rating matrix, it runs a density-based clustering algorithm on the user relationship metrics to detect spam users, then gives a weight of 0 to spam users and a weight of 1 to normal users. That is, the system only considers ratings from normal users when computing predictions. The rest of the algorithm works exactly the same as normal item-based collaborative filtering.
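A simplified sketch of this weighting scheme, using a cluster of near-duplicate profiles as a stand-in for the density-based clustering step (the thresholds, the helper name, and the toy data below are illustrative assumptions, not taken from the cited paper):

```python
import numpy as np

def spam_weights(R, sim_thresh=0.95, min_neighbors=2):
    """Assign weight 0 to suspected spam profiles and 1 to normal users.
    Flags users sitting in a dense cluster of near-identical profiles
    (illustrative thresholds; a real system would run proper
    density-based clustering on the similarity metrics)."""
    C = R - R.mean(axis=1, keepdims=True)                  # mean-center each user
    C /= (np.linalg.norm(C, axis=1, keepdims=True) + 1e-9)
    S = C @ C.T                                            # Pearson correlation matrix
    np.fill_diagonal(S, 0.0)
    dense = (S >= sim_thresh).sum(axis=1) >= min_neighbors
    return np.where(dense, 0.0, 1.0)

# Five distinct organic users plus four identical injected "shill" profiles.
organic = np.array([
    [5, 3, 1, 2, 4, 1, 3, 5],
    [1, 4, 5, 2, 3, 5, 2, 1],
    [3, 3, 4, 5, 1, 2, 4, 2],
    [2, 5, 2, 4, 4, 3, 1, 3],
    [4, 1, 3, 1, 5, 4, 5, 4],
], dtype=float)
shill = np.tile([5.0, 5, 5, 5, 1, 1, 1, 1], (4, 1))
R = np.vstack([organic, shill])
weights = spam_weights(R)   # ratings from weight-0 users are ignored downstream
```

The near-identical injected profiles correlate almost perfectly with each other, so each has several neighbors above the threshold and gets weight 0, while the organic users keep weight 1.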
According to experimental results on MovieLens data, this robust CF approach preserves accuracy compared to normal item-based CF, but is more stable: prediction results for normal CF shift by 30-40% when spam user profiles are injected, whereas with this robust approach they shift only by about 5-10%.
|
https://en.wikipedia.org/wiki/Robust_collaborative_filtering
|
Similarity search is the most general term used for a range of mechanisms which share the principle of searching (typically very large) spaces of objects where the only available comparator is the similarity between any pair of objects. This is becoming increasingly important in an age of large information repositories where the objects contained do not possess any natural order, for example large collections of images, sounds and other sophisticated digital objects.
Nearest neighbor search and range queries are important subclasses of similarity search, and a number of solutions exist. Research in similarity search is dominated by the inherent problems of searching over complex objects. Such objects cause most known techniques to lose traction over large collections, due to a manifestation of the so-called curse of dimensionality, and there are still many unsolved problems. Unfortunately, in many cases where similarity search is necessary, the objects are inherently complex.
The most general approach to similarity search relies upon the mathematical notion of metric space, which allows the construction of efficient index structures in order to achieve scalability in the search domain.
Similarity search evolved independently in a number of different scientific and computing contexts, according to various needs. In 2008 a few leading researchers in the field felt strongly that the subject should be a research topic in its own right, to allow focus on the general issues applicable across the many diverse domains of its use. This resulted in the formation of the SISAP foundation, whose main activity is a series of annual international conferences on the generic topic.
Metric search is similarity search which takes place within metric spaces. While the semimetric properties are more or less necessary for any kind of search to be meaningful, the further property of the triangle inequality is useful for engineering, rather than conceptual, purposes.
A simple corollary of the triangle inequality is that, if any two objects within the space are far apart, then no third object can be close to both. This observation allows data structures to be built, based on distances measured within the data collection, which allow subsets of the data to be excluded when a query is executed. As a simple example, a reference object can be chosen from the data set, and the remainder of the set divided into two parts based on distance to this object: those close to the reference object in set A, and those far from the object in set B. If, when the set is later queried, the distance from the query to the reference object is large, then none of the objects within set A can be very close to the query; if it is very small, then no object within set B can be close to the query.
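This exclusion argument can be sketched as a small range query with one precomputed pivot (a minimal illustration; real metric indexes use many pivots and tree structures, and the grid data here is invented):

```python
import math

def dist(a, b):
    """Euclidean distance; any metric obeying the triangle inequality works."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def range_query(data, pivot_dists, pivot, query, radius):
    """Range search that skips objects excluded by the triangle inequality:
    if |d(q, pivot) - d(pivot, x)| > radius, then d(q, x) > radius."""
    dq = dist(query, pivot)
    results, dist_calls = [], 0
    for obj, dp in zip(data, pivot_dists):
        if abs(dq - dp) > radius:    # excluded without computing dist(query, obj)
            continue
        dist_calls += 1
        if dist(query, obj) <= radius:
            results.append(obj)
    return results, dist_calls

# Toy data set: a 10x10 grid of points; pivot distances are precomputed at
# index time, so only the surviving candidates cost a distance computation.
data = [(float(x), float(y)) for x in range(10) for y in range(10)]
pivot = (0.0, 0.0)
pivot_dists = [dist(pivot, o) for o in data]

hits, calls = range_query(data, pivot_dists, pivot, query=(8.0, 8.0), radius=1.0)
```

On this grid the query touches only a thin annulus of candidates around the pivot distance of the query, so most of the 100 objects are excluded without any distance computation.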
Once such situations are quantified and studied, many different metric indexing structures can be designed, variously suitable for different types of collections. The research domain of metric search can thus be characterised as the study of pre-processing algorithms over large and relatively static collections of data which, using the properties of metric spaces, allow efficient similarity search to be performed.
A popular approach for similarity search is locality-sensitive hashing (LSH).[1] It hashes input items so that similar items map to the same "buckets" in memory with high probability (the number of buckets being much smaller than the universe of possible input items). It is often applied in nearest neighbor search on large-scale high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases.[2]
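A minimal sketch of one classic LSH family, random-hyperplane hashing for cosine similarity (the dimensions, number of planes, and test vectors are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def hyperplane_hash(v, planes):
    """Random-hyperplane LSH for cosine similarity: each bit records which
    side of one random hyperplane the vector falls on, so vectors at a
    small angle agree on most bits."""
    return tuple(int(b) for b in (planes @ v > 0))

dim, n_planes = 16, 8
planes = rng.normal(size=(n_planes, dim))       # one random hyperplane per bit

base = rng.normal(size=dim)
near = base + rng.normal(scale=0.05, size=dim)  # a small perturbation of base
far = -base                                     # the exactly opposite direction

h_base = hyperplane_hash(base, planes)
h_near = hyperplane_hash(near, planes)
h_far = hyperplane_hash(far, planes)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))
```

`h_near` shares (almost) all bits with `h_base`, so the two vectors land in the same or a nearby bucket, while the opposite direction hashes to the complementary bucket; concatenating and repeating such hashes gives the usual bucket tables used for approximate nearest-neighbor lookup.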
|
https://en.wikipedia.org/wiki/Similarity_search
|
Slope One is a family of algorithms used for collaborative filtering, introduced in a 2005 paper by Daniel Lemire and Anna Maclachlan.[1] Arguably, it is the simplest form of non-trivial item-based collaborative filtering based on ratings. Their simplicity makes it especially easy to implement them efficiently, while their accuracy is often on par with more complicated and computationally expensive algorithms.[1][2] They have also been used as building blocks to improve other algorithms.[3][4][5][6][7][8][9] They are part of major open-source libraries such as Apache Mahout and Easyrec.
When ratings of items are available, such as is the case when people are given the option of rating resources (between 1 and 5, for example), collaborative filtering aims to predict the ratings of one individual based on their past ratings and on a (large) database of ratings contributed by other users.
Example: Can we predict the rating an individual would give to the new Celine Dion album given that he gave the Beatles 5 out of 5?
In this context, item-based collaborative filtering[10][11] predicts the ratings on one item based on the ratings on another item, typically using linear regression (f(x) = ax + b). Hence, if there are 1,000 items, there could be up to 1,000,000 linear regressions to be learned, and so, up to 2,000,000 regressors. This approach may suffer from severe overfitting[1] unless we select only the pairs of items for which several users have rated both items.
A better alternative may be to learn a simpler predictor such as f(x) = x + b: experiments show that this simpler predictor (called Slope One) sometimes outperforms[1] linear regression while having half the number of regressors. This simplified approach also reduces storage requirements and latency.
Item-based collaborative filtering is just one form of collaborative filtering. Other alternatives include user-based collaborative filtering, where relationships between users are of interest instead. However, item-based collaborative filtering is especially scalable with respect to the number of users.
We are not always given ratings: when users provide only binary data (the item was purchased or not), Slope One and other rating-based algorithms do not apply[citation needed].
Examples of binary item-based collaborative filtering include Amazon's patented item-to-item algorithm,[12] which computes the cosine between binary vectors representing the purchases in a user-item matrix.
Being arguably simpler than even Slope One, the Item-to-Item algorithm offers an interesting point of reference. Consider an example.
In this case, the cosine between items 1 and 2 is:

(1,0,0) · (0,1,1) / (‖(1,0,0)‖ ‖(0,1,1)‖) = 0,

the cosine between items 1 and 3 is:

(1,0,0) · (1,1,0) / (‖(1,0,0)‖ ‖(1,1,0)‖) = 1/√2,

whereas the cosine between items 2 and 3 is:

(0,1,1) · (1,1,0) / (‖(0,1,1)‖ ‖(1,1,0)‖) = 1/2.
Hence, a user visiting item 1 would receive item 3 as a recommendation, a user visiting item 2 would receive item 3 as a recommendation, and finally, a user visiting item 3 would receive item 1 (and then item 2) as a recommendation. The model uses a single parameter per pair of items (the cosine) to make the recommendation. Hence, if there are n items, up to n(n-1)/2 cosines need to be computed and stored.
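The three cosines above can be checked directly (a minimal sketch using the binary purchase vectors from the example):

```python
import math

# Binary purchase vectors from the example above: one component per user.
item1, item2, item3 = (1, 0, 0), (0, 1, 1), (1, 1, 0)

def cosine(a, b):
    """Cosine between two binary purchase vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

c12 = cosine(item1, item2)   # no user bought both items 1 and 2
c13 = cosine(item1, item3)   # 1/sqrt(2), the largest similarity to item 1
c23 = cosine(item2, item3)   # 1/2
```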
To drastically reduce overfitting, improve performance and ease implementation, the Slope One family of easily implemented item-based, rating-based collaborative filtering algorithms was proposed. Essentially, instead of using linear regression from one item's ratings to another item's ratings (f(x) = ax + b), it uses a simpler form of regression with a single free parameter (f(x) = x + b). The free parameter is then simply the average difference between the two items' ratings. It was shown to be much more accurate than linear regression in some instances,[1] and it takes half the storage or less.
Example:
For a more realistic example, consider the following table (a dash marks an unrated item).

    Rater   Item A   Item B   Item C
    John      5        3        2
    Mark      3        4        -
    Lucy      -        2        5
In this case, the average difference in ratings between item B and A is (-2+1)/2 = -0.5. Hence, on average, item A is rated above item B by 0.5. Similarly, the average difference between item C and A is -3. Hence, if we attempt to predict the rating of Lucy for item A using her rating for item B, we get 2+0.5 = 2.5. Similarly, if we try to predict her rating for item A using her rating of item C, we get 5+3 = 8.
If a user rated several items, the predictions are simply combined using a weighted average, where a good choice for the weight is the number of users having rated both items. In the above example, both John and Mark rated items A and B, hence a weight of 2; only John rated both items A and C, hence a weight of 1. We would therefore predict the following rating for Lucy on item A:
(2 × 2.5 + 1 × 8) / (2 + 1) = 13/3 ≈ 4.33
Hence, given n items, to implement Slope One, all that is needed is to compute and store the average differences and the number of common ratings for each of the n² pairs of items.
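The whole scheme fits in a few lines. A minimal sketch of weighted Slope One that reproduces the worked example above (the function and variable names are ours, not from the paper):

```python
from collections import defaultdict

def slope_one(ratings):
    """Weighted Slope One training. `ratings` maps user -> {item: rating}.
    Returns (diffs, counts): the average rating difference and the number
    of co-rating users for each ordered pair of items."""
    diffs = defaultdict(float)    # (i, j) -> sum of (r_i - r_j)
    counts = defaultdict(int)     # (i, j) -> number of users rating both
    for user_ratings in ratings.values():
        for i, ri in user_ratings.items():
            for j, rj in user_ratings.items():
                if i != j:
                    diffs[(i, j)] += ri - rj
                    counts[(i, j)] += 1
    for key in diffs:
        diffs[key] /= counts[key]
    return diffs, counts

def predict(diffs, counts, user_ratings, target):
    """Predict the rating of `target` as a co-rating-weighted average."""
    num = den = 0.0
    for j, rj in user_ratings.items():
        if (target, j) in diffs:
            num += (rj + diffs[(target, j)]) * counts[(target, j)]
            den += counts[(target, j)]
    return num / den if den else None

# The worked example above: John, Mark and Lucy's ratings of items A, B, C.
ratings = {
    "John": {"A": 5, "B": 3, "C": 2},
    "Mark": {"A": 3, "B": 4},
    "Lucy": {"B": 2, "C": 5},
}
diffs, counts = slope_one(ratings)
lucy_a = predict(diffs, counts, ratings["Lucy"], "A")   # 13/3, i.e. ~4.33
```

Training is a single pass over the ratings, and prediction only needs the two small tables, which is why the scheme supports incremental updates when a user adds a rating.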
Suppose there are n items, m users, and N ratings. Computing the average rating differences for each pair of items requires up to n(n-1)/2 units of storage, and up to mn² time steps. This computational bound may be pessimistic: if we assume that users have rated up to y items, then it is possible to compute the differences in no more than n² + my² steps. If a user has entered x ratings, predicting a single rating requires x time steps, and predicting all of their missing ratings requires up to (n-x)x time steps. Updating the database when a user has already entered x ratings, and enters a new one, requires x time steps.
It is possible to reduce storage requirements by partitioning the data (see Partition (database)) or by using sparse storage: pairs of items having no (or few) corating users can be omitted.
|
https://en.wikipedia.org/wiki/Slope_One
|
Social translucence (also referred to as social awareness) is a term proposed by Thomas Erickson and Wendy Kellogg to refer to the design of digital systems that "support coherent behavior by making participants and their activities visible to one another".
Social translucence represents a tool for transparency in socio-technical systems, whose function is to
Social translucence is, in particular, a core element in online social networking sites such as Facebook or LinkedIn, where it shapes both people's ability to expose their online identity and the creation of awareness of other people's activities, for instance through the activity feeds that these systems make available.
Social translucence mechanisms have been made available in many web 2.0 systems such as:
Participation of people in online communities, in general, differs from their participatory behavior in real-world collective contexts. Humans in daily life are used to making use of "social cues" to guide their decisions and actions, e.g. if a group of people is looking for a good restaurant for lunch, it is very likely that they will choose one that has some customers inside rather than one that is empty (the busier restaurant may reflect its popularity and, in consequence, its quality of service). However, in online social environments, it is not straightforward to access these sources of information, which are normally logged in the systems but not disclosed to users.
There are some theories that explain how this social translucence can affect the behavior of people in real-life scenarios. The American philosopher George Herbert Mead states that humans are social creatures, in the sense that people's actions cannot be isolated from the behavior of the whole collective they are part of, because every individual's acts are influenced by larger social practices that act as a general framework for behavior.[2] In his performance framework, the Canadian sociologist Erving Goffman postulates that in everyday social interactions individuals perform their actions by collecting information from others first, to know in advance what they may expect from them and, in this way, to plan how to behave more effectively.[3]
According to Erickson et al., socially translucent systems should respect the principles of visibility (making significant social information available to users), awareness (bringing our social rules to bear to guide our actions based on external social cues) and accountability (being able to identify who did what and when) to effectively facilitate users' communication and collaboration in virtual environments.[4] Zolyomi et al. proposed the principle of identity as a fourth dimension of social translucence, arguing that the design of socio-technical systems should have a rich description of who is visible, to give people control over disclosure and mechanisms to advocate for their needs.[5] McDonald et al. proposed a system architecture for structuring the development of socially translucent systems, which comprises two dimensions: one describing types of user actions in the system, and a second describing the processing and interpretation done by the system. This framework can guide designers in determining what activities are important to social translucence and need to be reflected, and how interpretive levels of those actions might provide contextual salience to the users.[1]
As in the real world, providing social cues in virtual communities can help people better understand the situations they face in these environments, ease their decision-making by enabling more informed choices, persuade them to participate in the activities that take place there, and help them structure their own schedule of individual and group activities more efficiently.[6]
In this frame of reference, an approach called "social context displays" has been proposed for showing social information - drawn from either real or virtual environments - in digital settings. It is based on graphical representations that visualize the presence and activity traces of a group of people, providing users with a third-party view of what is happening within the community, i.e. who is actively participating, who is not contributing to the group's efforts, etc. This social-context-revealing approach has been studied in different scenarios (e.g. IBM video-conference software, and a large community display of social activity traces in a shared space called NOMATIC*VIZ), and it has been demonstrated that it can provide users with several benefits, such as more information on which to base decisions and motivation to take an active role in managing their self- and group representations within the display through their actions in real life.[6]
The sense of personal accountability before others that social translucence instills in users can be harnessed in the design of systems for supporting behavior change (e.g. weight loss, smoking cessation), if combined with the appropriate type of feedback.[7]
Making users' activity traces publicly available for others to access naturally raises concerns about what rights users have over the data they generate, who the end users with access to their information are, and how they can know and control the applicable privacy policies.[6] Several perspectives attempt to contextualize this privacy issue. One views privacy as a tradeoff between the degree of intrusion into personal space and the benefits the user perceives from the social system by disclosing their online activity traces.[8] Another examines the trade-off between people's visibility within the social system and their level of privacy, which can be managed at an individual or group level by establishing specific permissions for others to access their information. Other authors argue that instead of forcing users to set and control privacy settings, social systems might focus on raising users' awareness of who their audiences are, so they can manage their online behavior according to the reactions they expect from those different groups.[6]
|
https://en.wikipedia.org/wiki/Social_translucence
|
Algorithmic radicalization is the concept that recommender algorithms on popular social media sites such as YouTube and Facebook drive users toward progressively more extreme content over time, leading to them developing radicalized extremist political views. Algorithms record user interactions, from likes/dislikes to amount of time spent on posts, to generate endless media aimed to keep users engaged. Through echo chamber channels, the consumer is driven to be more polarized through preferences in media and self-confirmation.[1][2][3][4][5]
Algorithmic radicalization remains a controversial phenomenon as it is often not in the best interest of social media companies to remove echo chamber channels.[6][7]To what extent recommender algorithms are actually responsible for radicalization remains disputed; studies have found contradictory results as to whether algorithms have promoted extremist content.
Social media platforms learn the interests and likes of the user to modify their experiences in their feed to keep them engaged and scrolling, known as a filter bubble.[8] An echo chamber is formed when users come across beliefs that magnify or reinforce their thoughts and form a group of like-minded users in a closed system.[9] Echo chambers spread information without any opposing beliefs and can possibly lead to confirmation bias. According to group polarization theory, an echo chamber can potentially lead users and groups towards more extreme radicalized positions.[10] According to the National Library of Medicine, "Users online tend to prefer information adhering to their worldviews, ignore dissenting information, and form polarized groups around shared narratives. Furthermore, when polarization is high, misinformation quickly proliferates."[11]
On May 14, 2022, 18-year-old Payton S. Gendron carried out a mass shooting in Buffalo, New York. The shooter stated in his manifesto that the internet was the source of his radical beliefs: "There was little to no influence on my personal beliefs by people I met in person."[12]
Around March 19, 2024, a New York state judge ruled Reddit and YouTube must face lawsuits in connection with the mass shooting over accusations that they played a role in the radicalization of the shooter.[13]
Facebook's algorithm focuses on recommending content that makes the user want to interact. It ranks content by prioritizing popular posts by friends, viral content, and sometimes divisive content. Each feed is personalized to the user's specific interests, which can sometimes lead users towards an echo chamber of troublesome content.[14] Users can find the list of interests the algorithm uses by going to the "Your ad Preferences" page. According to a Pew Research study, 74% of Facebook users did not know that list existed until they were directed to that page in the study.[15] It is also relatively common for Facebook to assign political labels to its users. In recent years,[when?] Facebook has started using artificial intelligence to change the content users see in their feed and what is recommended to them. A document known as The Facebook Files has revealed that its AI system prioritizes user engagement over everything else. The Facebook Files have also demonstrated that controlling the AI systems has proven difficult to handle.[16]
In an August 2019 internal memo leaked in 2021, Facebook admitted that "the mechanics of our platforms are not neutral",[17][18] concluding that in order to reach maximum profits, optimization for engagement is necessary. In order to increase engagement, algorithms have found that hate, misinformation, and politics are instrumental for app activity.[19] As referenced in the memo, "The more incendiary the material, the more it keeps users engaged, the more it is boosted by the algorithm."[17] According to a 2018 study, "false rumors spread faster and wider than true information... They found falsehoods are 70% more likely to be retweeted on Twitter than the truth, and reach their first 1,500 people six times faster. This effect is more pronounced with political news than other categories."[20]
YouTube has been around since 2005 and has more than 2.5 billion monthly users. YouTube's content discovery system focuses on the user's personal activity (watched, favorites, likes) to direct them to recommended content. YouTube's algorithm is accountable for roughly 70% of users' recommended videos and is what drives people to watch certain content.[21] According to a 2022 study by the Mozilla Foundation, users have little power to keep unsolicited videos out of their suggested recommended content. This includes videos about hate speech, livestreams, etc.[22][21]
YouTube has been identified as an influential platform for spreading radicalized content. Al-Qaeda and similar extremist groups have been linked to using YouTube for recruitment videos and engaging with international media outlets. In a research study published in American Behavioral Scientist, the authors investigated "whether it is possible to identify a set of attributes that may help explain part of the YouTube algorithm's decision-making process".[23] The results of the study showed that YouTube's algorithm recommendations for extremist content factor in the presence of radical keywords in a video's title. In February 2023, in the case of Gonzalez v. Google, the question at hand was whether or not Google, the parent company of YouTube, is protected from lawsuits claiming that the site's algorithms aided terrorists by recommending ISIS videos to users. Section 230 is known to generally protect online platforms from civil liability for the content posted by their users.[24]
Multiple studies have found little to no evidence to suggest that YouTube's algorithms direct attention towards far-right content to those not already engaged with it.[25][26][27]
TikTok is an app that recommends videos to a user's 'For You Page' (FYP), making every user's page different. Given the nature of the algorithm behind the app, TikTok's FYP has been linked to showing more explicit and radical videos over time based on users' previous interactions on the app.[28] Since TikTok's inception, the app has been scrutinized for misinformation and hate speech, as those forms of media usually generate more interactions with the algorithm.[29]
Various extremist groups, including jihadist organizations, have utilized TikTok to disseminate propaganda, recruit followers, and incite violence. The platform's algorithm, which recommends content based on user engagement, can expose users to extremist content that aligns with their interests or interactions.[30]
As of 2022, TikTok's head of US Security has put out a statement that "81,518,334 videos were removed globally between April – June for violating our Community Guidelines or Terms of Service" to cut back on hate speech, harassment, and misinformation.[31]
Studies have noted instances where individuals were radicalized through content encountered on TikTok. For example, in early 2023, Austrian authorities thwarted a plot against an LGBTQ+ pride parade that involved two teenagers and a 20-year-old who were inspired by jihadist content on TikTok. The youngest suspect, 14 years old, had been exposed to videos created by Islamist influencers glorifying jihad. These videos led him to further engagement with similar content, eventually resulting in his involvement in planning an attack.[30]
Another case involved the arrest of several teenagers in Vienna, Austria, in 2024, who were planning to carry out a terrorist attack at a Taylor Swift concert. The investigation revealed that some of the suspects had been radicalized online, with TikTok being one of the platforms used to disseminate extremist content that influenced their beliefs and actions.[30]
The U.S. Department of Justice defines 'Lone-wolf' (self) terrorism as "someone who acts alone in a terrorist attack without the help or encouragement of a government or a terrorist organization".[32] Through social media outlets on the internet, 'Lone-wolf' terrorism has been on the rise and has been linked to algorithmic radicalization.[33] Through echo chambers on the internet, viewpoints typically seen as radical were accepted and quickly adopted by other extremists.[34] These viewpoints are reinforced by forums, group chats, and social media.[35]
The Social Dilemma is a 2020 docudrama about how the algorithms behind social media enable addiction while possessing the ability to manipulate people's views, emotions, and behavior to spread conspiracy theories and disinformation. The film repeatedly uses buzzwords such as 'echo chambers' and 'fake news' to illustrate psychological manipulation on social media, and in turn political manipulation. In the film, Ben falls deeper into a social media addiction as the algorithm finds that his social media page has a 62.3% chance of long-term engagement. This leads to more videos in Ben's recommended feed, and he eventually becomes more immersed in propaganda and conspiracy theories, becoming more polarized with each video.
In the Communications Decency Act, Section 230 states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider".[36] Section 230 protects platforms from liability or lawsuits over third-party content, such as illegal activity by a user.[36] However, critics argue that this approach reduces a company's incentive to remove harmful content or misinformation, and that this loophole has allowed social media companies to maximize profits by pushing radical content without legal risk.[37] This claim has itself been criticized by proponents of Section 230, since prior to its passing, courts had ruled in Stratton Oakmont, Inc. v. Prodigy Services Co. that moderation in any capacity introduces liability for content providers as "publishers" of the content they choose to leave up.[38]
Lawmakers have drafted legislation that would weaken or remove Section 230 protections over algorithmic content. House Democrats Anna Eshoo, Frank Pallone Jr., Mike Doyle, and Jan Schakowsky introduced the "Justice Against Malicious Algorithms Act" in October 2021 as H.R. 5596. The bill died in committee,[39] but it would have removed Section 230 protections for service providers related to personalized recommendation algorithms that present content to users if those algorithms knowingly or recklessly deliver content that contributes to physical or severe emotional injury.[40]
|
https://en.wikipedia.org/wiki/Algorithmic_radicalization
|
ACM Conference on Recommender Systems (ACM RecSys) is an A-ranked[1] peer-reviewed academic conference series about recommender systems. It is held annually in different locations,[2] and organized by different organizers, but a Steering Committee[3] supervises the organization. The conference proceedings are published by the Association for Computing Machinery.[4] Acceptance rates for full papers are typically below 20%.[5] This conference series focuses on issues such as algorithms, machine learning, human-computer interaction, and data science from a multi-disciplinary perspective. The conference community includes computer scientists, statisticians, social scientists, psychologists, and others.
The conference is sponsored every year by ten to twenty Big Tech companies such as Amazon, Netflix, Meta, Nvidia, Microsoft, Google, and Spotify.[6]
While an academic conference, RecSys attracts many practitioners and industry researchers, with industry attendance making up the majority of attendees;[7] this is also reflected in the authorship of research papers.[8] Many works published at the conference have had a direct impact on recommendation and personalization practice in industry,[9][10][11] affecting millions of users.
As recommender systems are pervasive in online systems, the conference provides opportunities for researchers and practitioners to address specific problems in various workshops held in conjunction with the conference; topics include responsible recommendation,[12] causal reasoning,[13] and others. The workshop themes follow recent developments in the broader machine learning and human-computer interaction fields.
The conference hosts the ACM RecSys Challenge, a yearly competition in the spirit of the Netflix Prize focusing on a specific recommendation problem. The Challenge has been organized by companies such as Twitter[14] and Spotify.[15] Participation in the challenge is open to everyone, and taking part has become a means of showcasing one's skills in recommendation,[16][17] similar to Kaggle competitions.
The Netflix Prize was a recommendation challenge organized by Netflix between 2006 and 2009. Shortly before ACM RecSys 2009, the winners of the Netflix Prize were announced.[18][19] At the 2009 conference, members of the winning team (BellKor's Pragmatic Chaos) as well as representatives from Netflix convened in a panel on the lessons learnt from the Netflix Prize.[20]
In 2022, at one of the workshops at the conference, a paper from ByteDance,[21] the company behind TikTok, described in detail how a recommendation algorithm for video worked.
While the paper did not identify the algorithm as the one that generates TikTok's recommendations, it received significant attention in technology-focused media.[22][23][24][25]
The ACM Recommender Systems Conference (RecSys) has experienced significant growth since its first event in 2007.[2][26] The number of paper submissions has steadily increased over the years, from an initial 35 submissions in 2007 to over 250 annually in recent years. While submissions have increased, the conference has become more selective: the acceptance rate declined from 46% in its inaugural year to a range of 17–24% in more recent editions.
This article about a computer conference is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/ACM_Conference_on_Recommender_Systems
|
Cold start is a potential problem in computer-based information systems which involve a degree of automated data modelling. Specifically, it concerns the issue that the system cannot draw any inferences for users or items about which it has not yet gathered sufficient information.
The cold start problem is a well-known and well-researched problem for recommender systems. Recommender systems form a specific type of information filtering (IF) technique that attempts to present information items (e-commerce, films, music, books, news, images, web pages) that are likely of interest to the user. Typically, a recommender system compares the user's profile to some reference characteristics. These characteristics may be related to item characteristics (content-based filtering) or to the user's social environment and past behavior (collaborative filtering).
Depending on the system, the user can be associated with various kinds of interactions: ratings, bookmarks, purchases, likes, number of page visits, etc.
There are three cases of cold start:[1]
The new community problem, or systemic bootstrapping, refers to the startup of the system, when virtually no information the recommender can rely upon is present.[2] This case combines the disadvantages of both the new user and the new item cases, as all items and users are new. For this reason, some of the techniques developed to deal with those two cases are not applicable to system bootstrapping.
The item cold-start problem arises when items added to the catalogue have either no interactions or very few. This is mainly a problem for collaborative filtering algorithms, because they rely on an item's interactions to make recommendations. If no interactions are available, a pure collaborative algorithm cannot recommend the item; if only a few interactions are available, a collaborative algorithm can recommend it, but the quality of those recommendations will be poor.[3] This raises a related issue, no longer tied to new items but rather to unpopular items.
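A minimal sketch (with invented interaction data) of why a pure collaborative algorithm fails here: a brand-new item has an all-zero interaction vector, so its similarity to every other item is zero and it can never surface in item-item recommendations.

```python
import math

# Toy interaction matrix: rows are users, columns are items.
# Item 3 is brand new -- no user has interacted with it yet.
R = [
    [5, 3, 0, 0],
    [4, 0, 4, 0],
    [0, 2, 5, 0],
]

def item_vector(R, j):
    """Column j of the interaction matrix: the item's interaction profile."""
    return [row[j] for row in R]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0 or nv == 0:       # cold item: no interactions at all
        return 0.0
    return dot / (nu * nv)

n_items = len(R[0])
# Similarity of the cold item (index 3) to every item, itself included,
# is zero: the collaborative model has no basis to ever recommend it.
sims_cold = [cosine(item_vector(R, 3), item_vector(R, j)) for j in range(n_items)]
print(sims_cold)  # -> [0.0, 0.0, 0.0, 0.0]
```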
In some cases (e.g. movie recommendations) it might happen that a handful of items receive an extremely high number of interactions, while most of the items receive only a fraction of them. This is referred to as popularity bias.[4]
In the context of cold-start items, popularity bias matters because many items, even after months in the catalogue, may have received only a few interactions. This creates a negative feedback loop in which unpopular items are poorly recommended, therefore receive much less visibility than popular ones, and struggle to accumulate interactions.[5] While some items are naturally less popular than others, this issue specifically refers to the recommender not having enough collaborative information to recommend them in a meaningful and reliable way.[6]
Content-based filtering algorithms, on the other hand, are in theory much less prone to the new item problem. Since content-based recommenders choose which items to recommend based on the features items possess, even if no interactions exist for a new item, its features still allow a recommendation to be made.[7] This of course assumes that a new item is already described by its attributes, which is not always the case. Consider so-called editorial features (e.g. director, cast, title, year): these are always known when the item, in this case a movie, is added to the catalogue. However, other kinds of attributes might not be, e.g. features extracted from user reviews and tags.[8] Content-based algorithms relying on user-provided features also suffer from the item cold-start problem, since for new items with no (or very few) interactions, no (or very few) user reviews and tags will be available either.
The new user case refers to when a new user enrolls in the system and, for a certain period of time, the recommender has to provide recommendations without relying on the user's past interactions, since none have occurred yet.[1] This problem is of particular importance when the recommender is part of the service offered to users, since a user who is faced with recommendations of poor quality might soon decide to stop using the system before providing enough interactions to allow the recommender to understand their interests.
The main strategy in dealing with new users is to ask them to provide some preferences in order to build an initial user profile. A balance has to be struck between the length of the user registration process, which if too long might induce too many users to abandon it, and the amount of initial data required for the recommender to work properly.[2]
Similarly to the new item case, not all recommender algorithms are affected in the same way. Item-item recommenders will be affected, as they rely on the user profile to weight how relevant other users' preferences are. Collaborative filtering algorithms are the most affected, since without interactions no inference can be made about the user's preferences. User-user recommender algorithms[9] behave slightly differently: a user-user content-based algorithm relies on users' features (e.g. age, gender, country) to find similar users and recommend the items they interacted with positively, and is therefore robust to the new user case. Note that all this information is acquired during the registration process, either by asking users to input the data themselves or by leveraging data already available, e.g. in their social media accounts.[10]
Due to the high number of recommender algorithms available as well as system type and characteristics, many strategies to mitigate the cold-start problem have been developed. The main approach is to rely on hybrid recommenders, in order to mitigate the disadvantages of one category or model by combining it with another.[11][12][13]
All three categories of cold start (new community, new item, and new user) share the lack of user interactions and present some commonalities in the strategies available to address them.
A common strategy when dealing with new items is to couple a collaborative filtering recommender, for warm items, with a content-based filtering recommender, for cold items. While the two algorithms can be combined in different ways, the main drawback of this method is the poor recommendation quality often exhibited by content-based recommenders in scenarios where it is difficult to provide a comprehensive description of the item characteristics.[14] In the case of new users, if no demographic features are present or their quality is too poor, a common strategy is to offer them non-personalized recommendations: they can simply be recommended the most popular items, either globally or for their specific geographical region or language.
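One simple way to combine the two algorithms is a switching hybrid; a minimal sketch follows, in which all item names, scores, and the interaction threshold are invented for illustration. Items with enough interactions are scored by a (here, precomputed) collaborative model, while cold items fall back to a content-based overlap score.

```python
# Minimal switching-hybrid sketch (hypothetical data and threshold):
# warm items are scored collaboratively, cold items fall back to a
# content-based score over their attributes.

MIN_INTERACTIONS = 3   # below this, an item is treated as "cold"

interaction_counts = {"item_a": 120, "item_b": 45, "item_c": 0}   # item_c is new
collab_scores = {"item_a": 0.91, "item_b": 0.74}                  # learned from interactions
item_features = {
    "item_a": {"action", "thriller"},
    "item_b": {"comedy"},
    "item_c": {"action", "comedy"},
}

def content_score(item, user_profile):
    """Jaccard overlap between item attributes and the user's liked attributes."""
    feats = item_features[item]
    return len(feats & user_profile) / len(feats | user_profile)

def hybrid_score(item, user_profile):
    if interaction_counts[item] >= MIN_INTERACTIONS:
        return collab_scores[item]            # warm item: trust the collaborative model
    return content_score(item, user_profile)  # cold item: content-based fallback

user_profile = {"action"}
ranking = sorted(item_features, key=lambda i: hybrid_score(i, user_profile), reverse=True)
print(ranking)  # -> ['item_a', 'item_b', 'item_c']
```

Note that the cold item still receives a nonzero score and can be ranked, which a pure collaborative model could not do.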
One of the available options when dealing with cold users or items is to rapidly acquire some preference data. There are various ways to do that depending on the amount of information required. These techniques are called preference elicitation strategies.[15][16] This may be done either explicitly (by querying the user) or implicitly (by observing the user's behaviour). In both cases, the cold start problem implies that the user has to dedicate an amount of effort to using the system in its 'dumb' state, contributing to the construction of their user profile, before the system can start providing any intelligent recommendations.[17]
For example MovieLens, a web-based recommender system for movies, asks the user to rate some movies as part of the registration.
While preference elicitation strategies are a simple and effective way to deal with new users, the additional requirements during registration make the process more time-consuming for the user. Moreover, the quality of the obtained preferences might not be ideal, as the user could rate items they saw months or years ago, or provide almost random ratings just to complete the registration quickly.
The construction of the user's profile may also be automated by integrating information from other user activities, such as browsing histories or social media platforms. If, for example, a user has been reading information about a particular music artist from a media portal, then the associated recommender system would automatically propose that artist's releases when the user visits the music store.[18]
A variation of the previous approach is to automatically assign ratings to new items, based on the ratings assigned by the community to other similar items. Item similarity would be determined according to the items' content-based characteristics.[17]
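This automatic rating assignment could be sketched as follows, with all feature sets and ratings invented for illustration: the new item receives a provisional rating equal to the similarity-weighted average of the community ratings of content-similar items.

```python
# Sketch: assign a provisional community rating to a new item as the
# similarity-weighted average of ratings of content-similar items.
# (All feature sets and ratings here are invented for illustration.)

avg_ratings = {"old_a": 4.2, "old_b": 2.5, "old_c": 3.8}
features = {
    "old_a": {"sci-fi", "space", "drama"},
    "old_b": {"romance", "drama"},
    "old_c": {"sci-fi", "space"},
    "new":   {"sci-fi", "space", "aliens"},
}

def jaccard(a, b):
    """Content-based similarity between two attribute sets."""
    return len(a & b) / len(a | b)

def impute_rating(new_item):
    weighted, total = 0.0, 0.0
    for item, rating in avg_ratings.items():
        w = jaccard(features[new_item], features[item])
        weighted += w * rating
        total += w
    return weighted / total if total > 0 else None

print(round(impute_rating("new"), 2))  # -> 3.97
```

The imputed value sits between the ratings of the two similar items ("old_a" and "old_c"), while the dissimilar item contributes nothing.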
It is also possible to create an initial profile of a user based on their personality characteristics and use that profile to generate personalized recommendations.[19][20] Personality characteristics can be identified using a personality model such as the five-factor model (FFM).
Another possible technique is to apply active learning (machine learning). The main goal of active learning is to guide the user in the preference elicitation process, asking them to rate only the items that, from the recommender's point of view, will be the most informative. This is done by analysing the available data and estimating the usefulness of the data points (e.g., ratings, interactions).[21] As an example, say that we want to build two clusters from a certain cloud of points. As soon as we have identified two points, each belonging to a different cluster, which is the next most informative point? If we take a point close to one we already know, we can expect that it will likely belong to the same cluster. If we choose a point in between the two clusters, knowing which cluster it belongs to will help us find the boundary, allowing many other points to be classified with just a few observations.
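The two-cluster example can be sketched as follows (all coordinates invented): among candidate points, the most informative one to query is the one whose distances to the two known cluster representatives are most nearly equal, i.e. the most ambiguous one.

```python
# Sketch of the two-cluster active-learning example (invented coordinates):
# query the unlabeled point that is most ambiguous between the two known
# cluster representatives, since its label locates the boundary.

known = {"cluster_1": (0.0, 0.0), "cluster_2": (10.0, 0.0)}

candidates = {
    "near_c1":  (1.0, 0.5),   # almost certainly cluster 1 -> low information
    "near_c2":  (9.0, 0.2),   # almost certainly cluster 2 -> low information
    "boundary": (5.0, 1.0),   # ambiguous -> labeling it locates the boundary
}

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def ambiguity(p):
    d1 = dist(p, known["cluster_1"])
    d2 = dist(p, known["cluster_2"])
    return -abs(d1 - d2)      # closer to 0 = more ambiguous = more informative

query = max(candidates, key=lambda name: ambiguity(candidates[name]))
print(query)  # -> 'boundary'
```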
The cold start problem is also exhibited by interface agents. Since such an agent typically learns the user's preferences implicitly by observing patterns in the user's behaviour – "watching over the shoulder" – it would take time before the agent can perform any adaptations personalised to the user. Even then, its assistance would be limited to activities it has formerly observed the user engaging in.[22] The cold start problem may be overcome by introducing an element of collaboration amongst agents assisting various users. This way, novel situations may be handled by requesting other agents to share what they have already learnt from their respective users.[22]
In recent years more advanced strategies have been proposed; they all rely on machine learning and attempt to merge content and collaborative information in a single model.
One example of these approaches is called attribute-to-feature mapping,[23] which is tailored to matrix factorization algorithms.[24] The basic idea is the following. A matrix factorization model represents the user-item interactions as the product of two rectangular matrices whose content is learned from the known interactions via machine learning. Each user is associated with a row of the first matrix and each item with a column of the second matrix. The row or column associated with a specific user or item is called its latent factors.[25] When a new item is added, it has no associated latent factors, and the lack of interactions does not allow them to be learned as was done for other items. If each item is associated with some features (e.g. author, year, publisher, actors), it is possible to define an embedding function which, given the item features, estimates the corresponding item latent factors. The embedding function can be designed in many ways and is trained with the data already available for warm items. Alternatively, one could apply a group-specific method.[26][27] A group-specific method further decomposes each latent factor into two additive parts: one part corresponds to each item (and/or each user), while the other is shared among items within each item group (e.g., a group of movies could be movies of the same genre). Then, once a new item arrives, we can assign a group label to it and approximate its latent factors by the group-specific part of the corresponding item group. Therefore, although the individual part of the new item is not available, the group-specific part provides an immediate and effective solution. The same applies to a new user: if some information is available for them (e.g. age, nationality, gender), their latent factors can be estimated via an embedding function or a group-specific latent factor.
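A toy sketch of the group-specific fallback described above (all factor values invented): a new item's latent factors are approximated by the average factors of warm items in its group, since its individual part cannot be learned without interactions.

```python
# Sketch of a group-specific cold-start fallback (invented numbers): the
# new item's latent factors are approximated by the average factors of
# warm items sharing its group (here, the same genre).

item_factors = {           # latent factors learned from interactions (warm items)
    "movie_a": [0.9, 0.1],
    "movie_b": [0.7, 0.3],
    "movie_c": [0.1, 0.8],
}
item_group = {"movie_a": "action", "movie_b": "action", "movie_c": "romance"}

def group_factor(group):
    """Average latent factors over warm items belonging to the group."""
    members = [f for i, f in item_factors.items() if item_group[i] == group]
    return [sum(vals) / len(members) for vals in zip(*members)]

# New action movie with no interactions: fall back on the group part.
new_item_factors = group_factor("action")
print([round(v, 2) for v in new_item_factors])  # -> [0.8, 0.2]

# A user's predicted affinity is then the usual dot product with their factors.
user_factors = [1.0, 0.5]
score = sum(u * v for u, v in zip(user_factors, new_item_factors))
print(round(score, 2))  # -> 0.9
```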
Another recent approach, which bears similarities with feature mapping, is building a hybrid content-based filtering recommender in which features, either of the items or of the users, are weighted according to the user's perception of their importance. In order to identify a movie that the user could like, different attributes (e.g. actors, director, country, title) will have different importance. As an example, consider the James Bond movie series: the main actor changed many times over the years, while some cast members did not, like Lois Maxwell. Therefore, her presence will probably be a better identifier of that kind of movie than the presence of one of the various main actors.[14][28] Although various techniques exist to apply feature weighting to user or item features in recommender systems, most of them come from the information retrieval domain, like tf–idf and Okapi BM25; only a few have been developed specifically for recommenders.[29]
Hybrid feature weighting techniques in particular are tailored to the recommender system domain. Some of them learn feature weights by directly exploiting the user's interactions with items, like FBSM.[28] Others rely on an intermediate collaborative model trained on warm items and attempt to learn the content feature weights that best approximate the collaborative model.[14]
Many of the hybrid methods can be considered special cases of factorization machines.[30][31]
The above methods rely on affiliated information from users or items. Recently, another approach mitigates the cold start problem by assigning lower constraints to the latent factors associated with items or users that reveal more information (i.e., popular items and active users), and higher constraints to the others (i.e., less popular items and inactive users).[32] It has been shown that various recommendation models benefit from this strategy.[33] Differentiating regularization weights can be integrated with the other cold-start mitigation strategies.
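A small sketch of the general idea of popularity-dependent regularization (numbers and shrinkage schedule invented, not the cited papers' exact formulation): an item bias is estimated as a regularized average deviation from the global mean, where rarer items receive a larger regularization term and are thus shrunk more strongly toward the mean.

```python
import math

# Sketch (invented numbers): popularity-dependent shrinkage. An item bias
# is the regularized average deviation from the global mean rating; cold
# or rare items get a larger regularization term, pulling them toward it.

global_mean = 3.5

def item_bias(ratings, lam):
    return sum(r - global_mean for r in ratings) / (lam + len(ratings))

def lam_for(n_interactions):
    """Hypothetical schedule: rarer items -> stronger regularization."""
    return 25.0 / math.log(2 + n_interactions)

popular = [5, 5, 4, 5, 4, 5, 4, 5]   # many consistent ratings
rare = [5]                            # a single enthusiastic rating

b_pop = item_bias(popular, lam_for(len(popular)))
b_rare = item_bias(rare, lam_for(len(rare)))

# The single 5-star rating moves the rare item's estimate far less.
print(b_rare < b_pop)  # -> True
```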
|
https://en.wikipedia.org/wiki/Cold_start_(recommender_systems)
|
Configurators, also known as choice boards, design systems, toolkits, or co-design platforms, are responsible for guiding the user[who?] through the configuration[clarification needed] process. Different variations are represented, visualized, assessed and priced, which starts a learning-by-doing process for the user. While the term "configurator" or "configuration system" is quoted rather often in the literature,[citation needed] it is used for the most part in a technical sense, addressing a software tool. The success of such an interaction system is, however, not only defined by its technological capabilities, but also by its integration into the whole sales environment, its ability to allow for learning by doing, to provide experience and process satisfaction, and its integration into the brand concept. (Franke & Piller (2003))
Configurators can be found in various forms and different industries (Felfernig et al. (2014)). They are employed in B2B (business to business), as well as B2C (business to consumer) markets and are operated either by trained staff or customers themselves. Whereas B2B configurators are primarily used to support sales and lift production efficiency, B2C configurators are often employed as design tools that allow customers to "co-design" their own products. This is reflected in different advantages according to usage:[1]
For B2B:
For B2C:
Configurators enable mass customization, which depends on a deep and efficient integration of customers into value creation. Salvador et al. identified three fundamental capabilities determining the ability of a company to mass-customize its offering: solution space development, robust process design and choice navigation (Salvador, Martin & Piller (2009)). Configurators serve as an important tool for choice navigation. Configurators have been widely used in e-commerce; examples can be found in industries as diverse as accessories, apparel, automobiles, food and industrial goods. The main challenge of choice navigation lies in the ability to support customers in identifying their own solutions while minimizing complexity and the burden of choice, i.e. improving the experience of customer needs elicitation and interaction in a configuration process. Many efforts have been made in this direction to enhance the efficiency of configurator design, such as adaptive configurators (Wang & Tseng (2011); Jalali & Leake (2012)), where prediction is integrated into the configurator to improve the quality and speed of the configuration process. Configurators may also be used to limit or eliminate mass customization if intended to do so, by limiting the allowable options in the data models.
According to Sabin & Weigel (1998), configurators can be classified as rule based, model based or case based, depending on the reasoning techniques used.
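Of the three, the rule-based approach is the simplest to illustrate: each rule is a predicate over a (partial) configuration, and a configuration is valid only if every chosen value is a known option and no rule is violated. The product, options, and rules below are invented for illustration.

```python
# Minimal rule-based configurator sketch (illustrative product data).
OPTIONS = {
    "engine": {"petrol", "electric"},
    "gearbox": {"manual", "automatic"},
    "towbar": {True, False},
}

RULES = [
    # An electric engine is only offered with an automatic gearbox.
    lambda c: not (c.get("engine") == "electric" and c.get("gearbox") == "manual"),
    # The towbar requires the petrol engine.
    lambda c: not (c.get("towbar") and c.get("engine") != "petrol"),
]

def valid(config):
    """True if every chosen value is a known option and no rule fires."""
    return (all(k in OPTIONS and v in OPTIONS[k] for k, v in config.items())
            and all(rule(config) for rule in RULES))

print(valid({"engine": "electric", "gearbox": "automatic", "towbar": False}))  # True
print(valid({"engine": "electric", "gearbox": "manual"}))                      # False
```

Model-based and case-based configurators replace the explicit rule list with, respectively, a constraint model of the product structure and a library of previously solved configurations, but the validity check plays the same role.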
|
https://en.wikipedia.org/wiki/Configurator
|
An information filtering system is a system that removes redundant or unwanted information from an information stream using (semi-)automated or computerized methods prior to presentation to a human user. Its main goal is the management of information overload and an increase in the semantic signal-to-noise ratio. To do this, the user's profile is compared to some reference characteristics. These characteristics may originate from the information item (the content-based approach) or from the user's social environment (the collaborative filtering approach).
Whereas in information transmission signal processing filters are used against syntax-disrupting noise at the bit level, the methods employed in information filtering act on the semantic level.
The range of machine methods employed builds on the same principles as those for information extraction. A notable application can be found in the field of email spam filters. Thus, it is not only the information explosion that necessitates some form of filters, but also inadvertently or maliciously introduced pseudo-information.
On the presentation level, information filtering takes the form of user-preference-based newsfeeds, etc.
Recommender systems and content discovery platforms are active information filtering systems that attempt to present to the user information items (film, television, music, books, news, web pages) the user is interested in. These systems add information items to the information flowing towards the user, as opposed to removing information items from that flow. Recommender systems typically use collaborative filtering approaches or a combination of collaborative filtering and content-based filtering, although purely content-based recommender systems also exist.
Before the advent of the Internet, there were already several methods of filtering information; for instance, governments could control and restrict the flow of information in a given country by means of formal or informal censorship.
We can also speak of information filters when referring to newspaper editors and journalists, who provide a service that selects the most valuable information for their clients: readers of books, magazines and newspapers, radio listeners and TV viewers. This filtering operation is also present in schools and universities, where information is selected on academic criteria for the customers of this service, the students. With the advent of the Internet, anyone can publish anything at low cost. This considerably increases the amount of less useful information, and consequently quality information becomes harder to find. To address this problem, new filtering methods were devised with which the information required on each specific topic can be obtained easily and efficiently.
A filtering system of this style consists of several tools that help people find the most valuable information, so that the limited time one can dedicate to reading, listening or viewing is directed to the most interesting and valuable documents. Such filters are also used to organize and structure information in a correct and understandable way, as well as to group the messages in a mailbox. These filters are essential to the results returned by search engines on the Internet, and the filtering functions keep improving to make the retrieval of Web documents and messages more efficient.
One of the criteria used in this step is whether the knowledge is harmful or not, whether the knowledge allows a better understanding with or without the concept. In this case, the task of information filtering is to reduce or eliminate the harmful information using that knowledge.
A content-learning system consists, as a general rule, of three basic stages:
Currently the problem is not finding the best way to filter information, but rather how these systems can learn the users' information needs independently, automating not only the filtering process but also the construction and adaptation of the filter. Branches such as statistics, machine learning, pattern recognition and data mining are the basis for developing information filters that adapt based on experience. To carry out the learning process, part of the information has to be pre-filtered, i.e. there are positive and negative examples, called training data, which can be generated by experts or via feedback from ordinary users.
As data is entered, the system induces new rules; if we consider that this data generalizes the training information, then we have to evaluate the system and measure its ability to correctly predict the categories of new information. This step is simplified by setting aside part of the training data as a separate series called "test data", which is used to measure the error rate. As a general rule, it is important to distinguish between the types of error (false positives and false negatives). For example, for an aggregator of content for children, letting through information unsuitable for them, such as violence or pornography, is far graver than mistakenly discarding some appropriate information.
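The held-out evaluation described above can be sketched in a few lines: given true labels and the filter's predictions on the test data, the mistakes are split into false positives and false negatives, since the two kinds of error rarely carry the same cost. Labels and predictions are invented for illustration.

```python
def error_rates(y_true, y_pred):
    """Split classification mistakes on held-out test data into
    false-positive and false-negative rates (1 = item should pass)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    pos = sum(y_true)
    neg = len(y_true) - pos
    return {"false_positive_rate": fp / neg if neg else 0.0,
            "false_negative_rate": fn / pos if pos else 0.0}

# Toy held-out "test data" (illustrative values).
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]
print(error_rates(y_true, y_pred))
```

In the children's-content example, one would tune the filter to push the false-positive rate (unsuitable content getting through) down, even at the cost of a higher false-negative rate.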
To lower error rates and give these systems learning capabilities similar to humans', we need to develop systems that simulate human cognitive abilities, such as natural-language understanding, common-sense meaning capture, and other forms of advanced processing, in order to get at the semantics of the information.
Nowadays there are numerous techniques for developing information filters, some of which reach error rates lower than 10% in various experiments. Among these techniques are decision trees, support vector machines, neural networks, Bayesian networks, linear discriminants, logistic regression, etc.
At present, these techniques are used in different applications, not only in the web context, but in thematic issues as varied as voice recognition, classification of telescopic astronomy or evaluation of financial risk.
|
https://en.wikipedia.org/wiki/Information_filtering_system
|
The information explosion is the rapid increase in the amount of published information or data and the effects of this abundance.[1] As the amount of available data grows, the problem of managing the information becomes more difficult, which can lead to information overload. The Online Oxford English Dictionary indicates use of the phrase in a March 1964 New Statesman article.[2] The New York Times first used the phrase in its editorial content in an article by Walter Sullivan on June 7, 1964, in which he described the phrase as "much discussed". (p. 11)[3] The earliest known use of the phrase was in a speech about television by NBC president Pat Weaver at the Institute of Practitioners of Advertising in London on September 27, 1955. The speech was rebroadcast on radio station WSUI in Iowa City and excerpted in the Daily Iowan newspaper two months later.[4]
Many sectors are seeing this rapid increase in the amount of information available such as healthcare, supermarkets, and governments.[5]Another sector that is being affected by this phenomenon is journalism. Such a profession, which in the past was responsible for the dissemination of information, may be suppressed by the overabundance of information today.[6]
Techniques to gather knowledge from an overabundance of electronic information (e.g., data fusion may help in data mining) have existed since the 1970s. Another common technique for dealing with such amounts of information is qualitative research.[7] Such approaches aim to organize the information, synthesizing, categorizing and systematizing it so that it becomes more usable and easier to search.
A metric used in an attempt to characterize the growth in person-specific information is disk storage per person (DSP), measured in megabytes per person (where a megabyte is 10^6 bytes, abbreviated MB). Global DSP (GDSP) is the total rigid disk drive space (in MB) of new units sold in a year divided by the world population in that year. The GDSP metric is a crude measure of how much disk storage could possibly be used to collect person-specific data on the world population.[5] In 1983, one million fixed drives with an estimated total of 90 terabytes were sold worldwide; 30 MB drives had the largest market segment.[9] In 1996, 105 million drives, totaling 160,623 terabytes, were sold, with 1 and 2 gigabyte drives leading the industry.[10] By the year 2000, with 20 GB drives leading the industry, rigid drives sold for the year were projected to total 2,829,288 terabytes.
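The GDSP definition above is a single division; as a worked example, the 1983 figure quoted in the text works out as follows. The 1983 world population (roughly 4.7 billion) and the decimal conversion 1 TB = 10^6 MB are assumptions made for illustration.

```python
def gdsp(total_drive_space_mb, world_population):
    """Global disk storage per person (GDSP), in MB/person: total rigid
    disk space (MB) of new units sold in a year, divided by the world
    population in that year."""
    return total_drive_space_mb / world_population

# 1983: ~90 terabytes of drives sold worldwide (figure from the text).
# Population and TB-to-MB conversion are illustrative assumptions.
mb_sold_1983 = 90 * 1_000_000
print(round(gdsp(mb_sold_1983, 4.7e9), 3))  # ~0.019 MB/person
```

Repeating the calculation with the 1996 and 2000 figures shows the roughly thousand-fold growth of the metric that the paragraph describes.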
According toLatanya Sweeney, there are three trends in data gathering today:
Type 1.Expansion of the number of fields being collected, known as the “collect more” trend.
Type 2.Replace an existing aggregate data collection with a person-specific one, known as the “collect specifically” trend.
Type 3.Gather information by starting a new person-specific data collection, known as the “collect it if you can” trend.[5]
Since "information" in electronic media is often used synonymously with "data", the term information explosion is closely related to the concept of data flood (also dubbed data deluge). Sometimes the term information flood is used as well. All of these basically boil down to the ever-increasing amount of electronic data exchanged per unit of time. A term that covers the potential negative effects of the information explosion is information inflation.[11] Awareness of non-manageable amounts of data grew along with the advent of ever more powerful data processing since the mid-1960s.[12]
Even though the abundance of information can be beneficial on several levels, some problems are of concern, such as privacy, legal and ethical guidelines, filtering and data accuracy.[13] Filtering refers to finding useful information in the middle of so much data, which relates to the job of data scientists. A typical example of the necessity of data filtering (data mining) is in healthcare, since in the coming years EHRs (electronic health records) of patients will be available. With so much information available, doctors will need to be able to identify patterns and select important data for the diagnosis of the patient.[13] On the other hand, according to some experts, having so much public data available makes it difficult to provide data that is actually anonymous.[5] Another point to take into account is the legal and ethical guidelines, which relate to who will be the owner of the data, how frequently they are obliged to release it, and for how long.[13] With so many sources of data, another problem is their accuracy: an untrusted source may be challenged by others ordering a new set of data, causing a repetition of the information.[13] According to Edward Huth, another concern is the accessibility and cost of such information.[14] The accessibility rate could be improved by either reducing the costs or increasing the utility of the information. The reduction of costs, according to the author, could be achieved by associations, which should assess which information is relevant and gather it in a more organized fashion.
As of August 2005, there were over 70 million web servers.[15] As of September 2007, there were over 135 million web servers.[16]
According to Technorati, the number of blogs doubles about every 6 months, with a total of 35.3 million blogs as of April 2006.[17] This is an example of the early stages of logistic growth, where growth is approximately exponential, since blogs are a recent innovation. As the number of blogs approaches the number of possible producers (humans), saturation occurs, growth declines, and the number of blogs eventually stabilizes.
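The logistic behavior described above is easy to see numerically: early values of a logistic curve roughly double at a fixed interval, while later values flatten out at the carrying capacity. The capacity, growth rate, and midpoint below are invented for illustration, not blog statistics.

```python
import math

def logistic(t, K, r, t0):
    """Logistic curve K / (1 + e^{-r (t - t0)}): approximately
    exponential while far below the carrying capacity K, then
    saturating as the pool of potential producers is exhausted."""
    return K / (1 + math.exp(-r * (t - t0)))

K = 1_000                # carrying capacity (illustrative)
r = math.log(2) / 0.5    # early doubling time of ~0.5 time units
t0 = 5                   # midpoint of the curve

early = [logistic(t, K, r, t0) for t in (0, 0.5, 1.0)]
print(early)                   # early values roughly double each step
print(logistic(100, K, r, t0)) # late value is essentially K: saturation
```

The same curve, with the number of humans as K, is the qualitative model the paragraph sketches for blog growth.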
|
https://en.wikipedia.org/wiki/Information_explosion
|
A media monitoring service, a press clipping service, clipping service, or, as it was known in earlier times, a clipping bureau, provides clients with copies of media content that is of specific interest to them and subject to changing demand; what it provides may include documentation, content, analysis, or editorial opinion, specifically or widely.
These services tend to specialize their coverage by subject, industry, size, geography, publication, journalist, or editor. The printed sources which could be readily monitored greatly expanded with the advent of telegraphy and submarine cables in the mid- to late 19th century; the types of media available proliferated in the 20th century, with the development of radio, television, the photocopier and the World Wide Web. Though media monitoring is generally used for capturing content or editorial opinion, it may also be used to capture advertising content.
Media monitoring services have been variously termed over time, as new players entered the market, new forms of media were created, and new uses for available content developed. Alternative terms for these monitoring services include information logistics, media intelligence, and media information services.
Since mass media was traditionally limited to print media, monitoring was naturally also limited to those media. The first press clipping agency in London was established in 1852 by Henry Romeike, partnering with the newsdealer Curtice.[1] An agency named "L'Argus de la presse" was established in Paris in 1879 by Alfred Cherie, who offered a press-clipping service to Parisian actors, enabling them to buy reviews of their work rather than purchasing the whole newspaper.[2]
The National Press Intelligence Company began in New York in 1885. More than a dozen clipping services were in operation by 1899. The services opening up across the United States formed a cooperative network to increase their range.[3]By 1932, the Romeike company andLuce's Press Clipping Bureaushared 80% of the clipping business in the United States.[1]
Initially, press clipping services primarily served "vanity" purposes: actors, tycoons, and socialites eager to read what newspapers had written about them. By the 1930s, the bulk of the clipping subscriptions were for big business.[1] Government agencies have been subscribers, as have other newspapers.[3][4]
Early clipping services employed women to scan periodicals for mentions of specific names or terms. The marked periodicals were then cut out by men and pasted to dated slips. Women would then sort those slips and clippings to be sent to the services' clients.[1]
As radio and later television broadcasting were introduced in the 20th century, press clipping agencies began to expand their services into the monitoring of these broadcast media, a task greatly facilitated by the development of commercial audio and video tape recording systems in the 1950s and 1960s.
With the growth of the Internet in the 1990s, media monitoring services extended their offering to the monitoring of online information sources, using new digital search and scan technologies to provide output of interest to their clients. For example, Universal Press Clipping Bureau, which began in 1908 in Omaha, Nebraska, changed its name in the 1990s to Universal Information Services as it expanded into digital technology.[4] In 1998, the now-defunct WebClipping website began monitoring Internet-based news media.[5] By 2012, Gartner estimated that there were more than 250 social media monitoring vendors.[6]
From a cut-and-clip service, media clipping today has expanded to combine technology with information. The idea behind clipping services, that information can be isolated from its original publication, influenced the interfaces of digital news sources such as LexisNexis, enabling users to search by keywords.[3] Online tools such as Google Alerts notify services and individual users of results for specific terms and names.[6]
Service delivery happens on three fronts. Clients may get their original hard-copy clips through traditional means (mail or overnight delivery) or may opt for digital delivery. Digital delivery allows the end user to receive via email all the relevant news about the company, the competition and the industry daily, with updates as they break. The same news may also be indexed (as allowed by copyright laws) in a searchable database to be accessed by subscribers. Another option of this service is auto-analysis, wherein the data can be viewed and compared in different formats.
Every organization that uses PR invariably uses news monitoring as well. In addition to tracking their own publicity, self-generated or otherwise, news monitoring clients also use the service to track competition or industry-specific trends or legislation; to build a contact base of reporters, experts and leaders for future reference; to audit the effectiveness of their PR campaigns; to verify that PR, marketing and sales messages are in sync; and to measure impact on their target market. City, state, and federal agencies use news monitoring services to stay informed about regions they otherwise would not be able to monitor themselves and to verify that the public information disseminated is accurate, accessible in multiple formats and available to the public. Some monitoring services specialize in one or more areas of press clipping, TV and radio monitoring, or internet tracking. Media analysis is also offered by most news monitoring services.
Television news monitoring companies, especially in the United States, capture and index closed captioning text and search it for client references. Some TV monitoring companies employ human monitors who review and abstract program content; other services rely on automated search programs to search and index stories.
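The automated approach amounts to scanning captured caption text for client reference terms and returning the matching segments. A minimal sketch, with invented caption data and client terms:

```python
import re

def find_mentions(captions, terms):
    """captions: list of (timestamp, text) pairs; returns the segments
    whose text contains any client term, matched case-insensitively
    on word boundaries."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b",
                         re.IGNORECASE)
    return [(ts, text) for ts, text in captions if pattern.search(text)]

# Illustrative captured caption segments (not real broadcast data).
captions = [
    ("00:01:10", "Acme Corp announced record earnings today."),
    ("00:02:35", "In sports, the local team won again."),
    ("00:04:02", "Analysts say ACME's rivals are struggling."),
]
print(find_mentions(captions, ["Acme", "Acme Corp"]))
```

A production service would add fuzzier matching and indexing, but the core of "search the captions for client references" is this kind of pattern scan keyed to timestamps.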
Online media monitoring services utilize automated software called spiders or robots (bots) to automatically monitor the content of free online news sources, including newspapers, magazines, trade journals, TV stations and news syndication services. Online services generally provide links but may also provide text versions of the articles. Results may or may not be verified for accuracy by the online monitoring service. Most newspapers do not include all of their print content online, and some have web content that does not appear in print.
In the United States, there are trade associations formed to share best practices which include the North American Conference of Press Clipping Services and the International Association of Broadcast Monitors.
Two parallel cases developed in 2012, one in the United States and one in the United Kingdom. In each case, the legality of temporary copies and of the online media monitoring service offered to clients was in dispute. Essentially the two cases covered the same issue (media clippings shown to clients online) and had the same defendant, Meltwater Group. The plaintiffs differed, being a UK copyright collection society (UK) rather than Associated Press (US), but the grounds were parallel.
The activity was ruled unlawful in the US (under the "fair use" doctrine). In the UK, under UK and EU copyright law, service providers need a licence, and users are also licensed. If users only viewed the original source, without getting a headline or snippet or printing the article, this would not be an infringement, and temporary copies made to enable a lawful purpose are themselves lawful, but in practice services for business do not work this way.
|
https://en.wikipedia.org/wiki/Media_monitoring_service
|
Personalized marketing, also known as one-to-one marketing or individual marketing,[1] is a marketing strategy by which companies use data analysis and digital technology to show adverts to individuals based on their perceived characteristics and interests. Marketers use methods from data collection, analytics, digital electronics, and digital economics, then use technology to analyze the data and show personalized ads based on algorithms that attempt to deduce people's interests.
Personalized marketing is dependent on many different types of technology for data collection, data classification, data analysis, data transfer, and data scalability. Technology enables marketing professionals to collect first-party data such as gender, age group, location, and income, and to connect them with third-party data such as click-through rates of online banner ads and social media participation.
Data management platforms: A data management platform (DMP)[2] is a centralized computing system for collecting, integrating and managing large sets of structured and unstructured data from disparate sources. Personalized marketing enabled by DMPs is sold to advertisers with the goal of having consumers receive relevant, timely, engaging, and personalized messaging and advertisements that resonate with their unique needs and wants.[2] A growing number of DMP software options are available, from Adobe Systems Audience Manager and Core Audience (Marketing Cloud) to Oracle-acquired BlueKai, Sitecore Experience Platform and X+1.[3]
Customer relationship management platforms: Customer relationship management (CRM) is used by companies to manage and analyze customer interactions and data throughout the customer lifecycle, improving relationships, boosting retention, and driving sales growth. CRM systems are designed to compile information on customers across different channels (points of contact between the customer and the company), which could include the company's website, live support, direct mail, marketing materials and social media. CRM systems can also give customer-facing staff detailed information on customers' personal information, purchase history, buying preferences and concerns.[4] The most popular enterprise CRM applications are Salesforce.com, Microsoft Dynamics CRM, NetSuite, Hubspot, and Oracle Eloqua.
Beacon technology: Beacon technology works on Bluetooth Low Energy (BLE), which is used by a low-frequency chip found in devices like mobile phones. These chips communicate with multiple beacon devices to form a network, and are used by marketers to better personalize messaging and mobile ads based on the customer's proximity to their retail outlet.[5] The physical footprint of beacon devices has also shrunk, ultimately facilitating their use.[5]
One-to-one marketing[6] refers to marketing strategies applied directly to a specific consumer. Knowledge of the consumer's preferences enables suggesting specific products and promotions to each consumer. One-to-one marketing is based on four main steps in order to fulfill its goals: identify, differentiate, interact, and customize.[7]
The goals of personalized marketing include improving the customer experience by delivering customized interactions and offers, ultimately leading to increased customer loyalty. By understanding individualized consumer needs, a brand can create personalized ads and products that effectively target its desired consumers, fostering satisfaction. Personalized marketing aims to create consumer satisfaction, driving brand loyalty and repeat business.[8]
Personalized marketing is also used by businesses to engage in personalized pricing, which is a form of price discrimination. Personalized marketing is being adopted in one form or another by many different companies because of the benefits it brings to both the businesses and their customers.
Described below are the costs and benefits of personalized marketing for businesses and customers:
Prior to the Internet, businesses faced challenges in measuring the success of their marketing campaigns. A campaign would be launched, and even if there was a change in revenue, it was often difficult to determine what impact the campaign had on the change. Personalized marketing allows businesses to learn more about customers based on demographic, contextual, and behavioral data. This behavioral data, together with the ability to track consumers' habits, allows firms to better determine which advertising campaigns and marketing efforts are bringing customers in and which demographics they are influencing.[9] This allows firms to drop efforts that are ineffective and put more money into the techniques that are bringing in customers.[10]
Some personalized marketing can also be automated, increasing the efficiency of a business's marketing strategy. For example, an automated email could be sent to a user shortly after an order is placed, giving suggestions for similar items or accessories that may help the customer better use the product ordered, or a mobile app could send a notification about relevant deals to a customer when he or she is close to a store.[11]
Consumers are presented with a wide range of products and services to choose from. A single retail website may offer a large variety of products, and few customers have the time or inclination to browse through everything retailers have to offer. At the same time, customers expect ease and convenience in their shopping experience. In a recent survey, 74% of consumers said they get frustrated when websites have content, offers, ads, and promotions that have nothing to do with them. Many even said that they would leave a site if its marketing was the opposite of their tastes, such as prompts to donate to a political party they dislike, or ads for a dating service when the visitor to the site is married. In addition, the top two reasons customers unsubscribe from marketing emailing lists are 1) they receive too many emails and 2) the content of the emails is not relevant to them.[12]
Personalized marketing helps bridge the gap between the vastness of what is available and customers' need for a streamlined shopping experience. By providing a customized experience for customers, the frustrations of purchase choices may be avoided, and customers may find what they are looking for more efficiently, reducing the time spent searching through unrelated content and products. Consumers have become accustomed to this type of user experience, which caters to their interests, as offered by companies such as Amazon[13] and Netflix[14] that have created ultra-customized digital experiences.
Personalized marketing is gaining headway and has become a point of popular interest with the emergence of relevant and supportive technologies like data management platforms, geotargeting, and various forms of social media. It is now believed to be an inevitable baseline for the future of marketing strategy and for future business success in competitive markets.
Adapting to technology: Companies must adopt the relevant technologies in order to implement personalized marketing. They may need to familiarize themselves with forms of social media, data-gathering platforms, and other technologies. Companies have access to machine learning, big data and AI that automate personalization processes.[15]
Restructuring current business models: Time and resources are necessary to adopt new marketing systems tailored to the most relevant technologies. Organized planning, communication and restructuring within businesses are essential to successfully implement personalized marketing. Personalized marketing prompts businesses to consider customer data and relevant outside information. Company databases are filled with expansive personal information, such as individuals' geographic locations and potential buyers' past purchases, which raises concerns about how that information is gathered, circulated internally and externally, and used to increase profits.[16]
Legal liabilities: To address concerns about sensitive information being gathered and utilized without obvious consumer consent, liabilities and legalities have to be set and enforced, and companies must manage the legal hurdles before personalized marketing is adopted in order to prevent privacy issues.[17] Specifically, the EU has passed rigid regulation, known as the GDPR, that limits what kind of data marketers can collect on their users and provides ways in which consumers can sue companies for violation of their privacy. In the US, California has followed suit and passed the CCPA in 2018.[18]
Algorithms generate data by analyzing it and associating it with user preferences, such as browsing history and personal profiles. Rather than discovering new facts or perspectives, one will be presented with similar or adjoining concepts (a "filter bubble"). Some consider this exploitation of existing ideas rather than discovery of new ones.[19] Presenting someone with only personalized content may also exclude other, unrelated news or information that might in fact be useful to the user.[19]
Algorithms may also be flawed. In February 2015, Coca-Cola ran into trouble over an automated, algorithm-generated bot created for advertising purposes. Gawker's editorial labs director, Adam Pash, created a Twitter bot, @MeinCoke, and set it up to tweet lines from Mein Kampf and then link to them with Coca-Cola's campaign #MakeItHappy. This resulted in Coca-Cola's Twitter feed broadcasting big chunks of Adolf Hitler's text.[20] In November 2014, the New England Patriots were forced to apologize after an automatic, algorithm-generated bot was tricked into tweeting a racial slur from the official team account.[21]
Personalized marketing has proven most effective in interactive media, particularly on the internet. A website has the ability to track a customer's interests and make suggestions based on the collected data. Many sites help customers make choices by organizing information and prioritizing it based on the individual's liking. In some cases, the product itself can be customized using a configuration system.[22]
The business movement during Web 1.0 leveraged database technology to target products, ads, and services at specific users with particular profile attributes. The concept was supported by technologies such as BroadVision, ATG, and BEA. Amazon is a classic example of a company that performs "one-to-one marketing" by offering users targeted offers and related products.[23]
The term "one-to-one marketing" refers to personalized marketing behavior towards an individual based on received data. Due to its nature, one-to-one marketing is often referred to as relationship marketing; this type of marketing creates a personalized relationship with individual consumers.[24]
McKinsey identified four problems that prevent companies from implementing large-scale personalization:[25]
|
https://en.wikipedia.org/wiki/Personalized_marketing
|
Personalized search is a web search tailored specifically to an individual's interests by incorporating information about the individual beyond the specific query provided. There are two general approaches to personalizing search results, involving modifying the user's query and re-ranking search results.[1]
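The two general approaches just described can be illustrated with a small sketch. The overlap-based scoring rule, field names, and data below are illustrative assumptions, not a description of any production engine:

```python
# A minimal sketch of the two personalization approaches: (1) modifying the
# user's query with terms from their profile, and (2) re-ranking results by
# how well each result matches the profile. All scoring here is illustrative.

def expand_query(query, user_interests):
    # Approach 1: query modification - append profile terms to the query.
    return query + [t for t in user_interests if t not in query]

def rerank(results, user_interests):
    # Approach 2: re-ranking - boost results that overlap the user's profile,
    # falling back to the original rank to break ties.
    def personal_score(result):
        overlap = len(set(result["terms"]) & set(user_interests))
        return (-overlap, result["rank"])  # more overlap first
    return sorted(results, key=personal_score)

results = [
    {"url": "a.example", "rank": 1, "terms": ["jaguar", "car"]},
    {"url": "b.example", "rank": 2, "terms": ["jaguar", "animal", "wildlife"]},
]
reranked = rerank(results, ["wildlife", "animal"])
expanded = expand_query(["jaguar"], ["wildlife"])
```

For a user whose profile contains "wildlife" and "animal", the ambiguous query "jaguar" is re-ranked so the animal page outranks the car page.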
Google introduced personalized search in 2004 and implemented it in Google Search in 2005. Google has personalized search implemented for all users, not only those with a Google account. There is little public information on exactly how Google personalizes searches; however, it is believed to use the user's language, location, and web history.[2]
Early search engines, like Google and AltaVista, found results based only on keywords. Personalized search, as pioneered by Google, has become far more complex, with the goal to "understand exactly what you mean and give you exactly what you want."[3] Using mathematical algorithms, search engines are now able to return results based on the number of links to and from sites; the more links a site has, the higher it is placed on the page.[3] Search engines have two degrees of expertise: the shallow expert and the deep expert. An expert of the shallow degree serves as a witness who knows some specific information on a given event. A deep expert, on the other hand, has comprehensive knowledge that gives it the capacity to deliver unique information relevant to each individual inquirer.[4] If a person knows what he or she wants, the search engine will act as a shallow expert and simply locate that information. But search engines are also capable of deep expertise in that they rank results, indicating that those near the top are more relevant to a user's wants than those below.[4]
While many search engines take advantage of information about people in general, or about specific groups of people, personalized search depends on a user profile that is unique to the individual. Research systems that personalize search results model their users in different ways. Some rely on users explicitly specifying their interests or on demographic/cognitive characteristics.[5][6]However, user-supplied information can be difficult to collect and keep up to date. Others have built implicit user models based on content the user has read or their history of interaction with Web pages.[7][8][9][10][11]
There are several publicly available systems for personalizing web search results (e.g., Google Personalized Search and Bing's search result personalization[12]). However, the technical details and evaluations of these commercial systems are proprietary. One technique Google uses to personalize searches for its users is to track login time and whether the user has enabled web history in the browser. If a user reaches the same site through Google search results many times, Google infers that they like that page. So when the user carries out certain searches, Google's personalized search algorithm gives the page a boost, moving it up through the ranks. Even if a user is signed out, Google may personalize their results, because it keeps a 180-day record of what a particular web browser has searched for, linked to a cookie in that browser.[13]
In search engines on social networking platforms like Facebook or LinkedIn, personalization could be achieved by exploiting homophily between searchers and results.[14] For example, in people search, searchers are often interested in people in the same social circles, industries or companies. In job search, searchers are usually interested in jobs at similar companies, jobs at nearby locations and jobs requiring expertise similar to their own.
To better understand how personalized search results are presented to users, a group of researchers at Northeastern University compared an aggregate set of searches from logged-in users against a control group. The research team found that 11.7% of results showed differences due to personalization; however, this varies widely by search query and result ranking position.[15] Of the various factors tested, the two that had measurable impact were being logged in with a Google account and the IP address of the searching user. Results with high degrees of personalization include queries about companies and politics. One of the factors driving personalization is localization of results, with company queries showing store locations relevant to the user's location. For example, if a user searched for "used car sales", Google may return results for local car dealerships in their area. On the other hand, the queries with the least personalization include factual queries ("what is") and health.[15]
When measuring personalization, it is important to eliminate background noise. One type of background noise in this context is the carry-over effect: when a user performs a search and follows it with a subsequent search, the results of the second search are influenced by the first. A noteworthy point is that the top-ranked URLs are less likely to change based on personalization, with most personalization occurring at the lower ranks. This is a style of personalization based on recent search history, but it is not a consistent element of personalization because the phenomenon times out after 10 minutes, according to the researchers.[15]
Several concerns have been raised regarding personalized search. It decreases the likelihood of finding new information by biasing search results towards what the user has already found. It introduces potential privacy problems: a user may not be aware that their search results are personalized, and may wonder why the things they are interested in have become so relevant. This problem has been termed the "filter bubble" by author Eli Pariser. He argues that people are letting major websites drive their destiny and make decisions based on the vast amount of data those sites have collected on individuals. This can isolate users in their own worlds, or "filter bubbles", where they see only information they want to see, a consequence of "the friendly world syndrome". As a result, people are much less informed of problems in the developing world, which can further widen the gap between the North (developed countries) and the South (developing countries).[16]
The personalization method makes it easy to understand how the filter bubble is created: certain results that show up regularly in searches by like-minded individuals in the same community are "promoted", while other results not favored by them are relegated to obscurity. As this happens on a community-wide level, the community, consciously or not, comes to share a skewed perspective of events.[17] Filter bubbles have become more frequent in search results and are seen as disruptions to information flow online, particularly in social media.[18]
An area of particular concern in some parts of the world is the use of personalized search as a form of control, giving people only particular information (selective exposure). This can be used to exert influence over highly debated topics such as gun control, or to steer people toward siding with a particular political regime in different countries.[16] While total control by a government through personalized search alone is a stretch, the information readily available from searches can easily be controlled by the richest corporations. The biggest example of a corporation controlling information is Google, which not only serves users the information it chooses but at times uses personalized search to steer them toward its own companies or affiliates. This has led to control of various parts of the web and a pushing out of competitors, such as when Google Maps took major control of the online map and directions industry, displacing competitors such as MapQuest.[19]
Many search engines use concept-based user profiling strategies that capture only topics users are highly interested in, but for best results, according to researchers Wai-Tin and Dik Lun, both positive and negative preferences should be considered. Profiles applying both negative and positive preferences yield the highest-quality, most relevant results by separating alike queries from unalike queries. For example, typing in "apple" could refer either to the fruit or to the Macintosh computer, and recording both preferences helps the search engine learn which apple the user is really looking for, based on the links clicked. One concept strategy the researchers proposed to improve personalized search and capture both positive and negative preferences is the click-based method. This method captures a user's interests based on which links they click in a results list, while downgrading unclicked links.[20]
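The click-based method above can be sketched as a toy profile update: clicked links add positive weight to their concepts, while unclicked links shown alongside them are downgraded. The concept labels, data layout, and weights are illustrative assumptions:

```python
# Toy click-based preference learning: links the user clicks contribute
# positive preference for their concepts; unclicked links shown in the same
# results list contribute negative preference. Weights are illustrative.

def update_preferences(prefs, shown_links, clicked_urls,
                       pos_weight=1.0, neg_weight=0.5):
    for link in shown_links:
        for concept in link["concepts"]:
            if link["url"] in clicked_urls:
                prefs[concept] = prefs.get(concept, 0.0) + pos_weight
            else:
                prefs[concept] = prefs.get(concept, 0.0) - neg_weight
    return prefs

shown = [
    {"url": "apple.com", "concepts": ["computer"]},
    {"url": "orchard.example", "concepts": ["fruit"]},
]
# The user searched "apple" and clicked only the computer result.
prefs = update_preferences({}, shown, clicked_urls={"apple.com"})
```

After one round, the "computer" concept carries positive weight and "fruit" negative weight, so later ambiguous "apple" queries can be disambiguated toward the Macintosh sense.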
The feature also has profound effects on the search engine optimization industry, because search results are no longer ranked the same way for every user.[21] An example is found in Eli Pariser's The Filter Bubble, where he had two friends type "BP" into Google's search bar: one found information on the BP oil spill in the Gulf of Mexico, while the other retrieved investment information.[16] The aspect of information overload is also prevalent when using search engine optimization. One means of managing information overload is accessing value-added information, i.e. information that has been collected, processed, filtered, and personalized for each individual user in some way.[22] For instance, Google uses various "signals" to personalize searches, including location, previous search keywords and recent contacts in a user's social network, while Facebook registers the user's interactions with other users, the so-called "social gestures".[22] Social gestures include likes, shares, subscriptions and comments. When the user interacts with the system by consuming a set of information, the system registers the interaction and history, and on a later date filters out some information on the basis of that history; for example, content produced by friends the user did not interact with over a given time might be hidden. It is also worth noting that, among social gestures, photos and videos receive a higher ranking than regular status posts and other related posts.[22]
The filter bubble has had a heavy effect on searches for health information. Because search results are influenced by search history, social network, personal preference and other factors, misinformation has been a large contributor to the drop in vaccination rates. In 2014–15 there was an outbreak of measles in America, with 644 cases reported during the period. The key contributors to this outbreak were anti-vaccine organizations and public figures who were spreading fear about the vaccine at the time.[23]
Some have noted that personalized search results serve to customize not only a user's search results but also advertisements.[citation needed] This has been criticized as an invasion of privacy.[citation needed]
An important example of search personalization is Google. There are a host of Google applications, all of which can be personalized and integrated with the help of a Google account. Personalizing search does not require an account; however, one is almost deprived of a choice, since so many useful Google products are accessible only with one. The Google Dashboard, introduced in 2009, covers more than 20 products and services, including Gmail, Calendar, Docs and YouTube,[24] and keeps track of all the information directly under one's name. The free Google Custom Search is available to individuals and big companies alike, providing the search facility for individual websites and powering corporate sites such as that of the New York Times. The high level of personalization available with Google played a significant part in helping it remain the world's favorite search engine.
One example of Google's ability to personalize searches is its use of Google News. Google shows everyone a few similar articles deemed interesting, but as soon as the user scrolls down, the news articles begin to differ. Google takes into account past searches as well as the user's location to make sure local news reaches them first. This can make searching much easier and reduce the time spent going through all the news to find the information one wants. The concern, however, is that very important information can be held back because it does not match the criteria the program sets for the particular user. This can create the "filter bubble" described earlier.[16]
An interesting point about personalization that often gets overlooked is the privacy vs personalization battle. While the two do not have to be mutually exclusive, it is often the case that as one becomes more prominent, it compromises the other. Google provides a host of services to people, and many of these services do not require information to be collected about a person to be customizable. Since there is no threat of privacy invasion with these services, the balance has been tipped to favor personalization over privacy, even when it comes to search. As people reap the rewards of convenience from customizing their other Google services, they desire better search results, even if it comes at the expense of private information. Where to draw the line between the information versus search results tradeoff is new territory and Google gets to make that decision. Until people get the power to control the information that is being collected about them, Google is not truly protecting privacy.
Google can use multiple methods of personalization, such as traditional, social, geographic, IP address, browser, cookies, time of day, year, behavioral, query history, bookmarks, and more. Although having Google personalize search results based on what users searched previously may have its benefits, there are negatives that come with it.[25][26] With the power from this information, Google has chosen to enter other sectors, such as videos, document sharing, shopping, maps, and many more, steering searchers to its own services as opposed to others such as MapQuest.
Using search personalization, Google has doubled its video market share to about eighty percent. The legal definition of a monopoly is a firm that gains control of seventy to eighty percent of the market. Google has reinforced this monopoly by creating significant barriers to entry, such as manipulating search results to show its own services; this can be clearly seen with Google Maps being the first thing displayed in most searches.
The analytical firm Experian Hitwise stated that since 2007, MapQuest has had its traffic cut in half because of this. Other statistics from around the same time include Photobucket going from twenty percent of market share to only three percent, Myspace going from twelve percent market share to less than one percent, and ESPN from eight percent to four percent market share. In terms of images, Photobucket went from 31% in 2007 to 10% in 2010 and Yahoo Images has gone from 12% to 7%.[27]It becomes apparent that the decline of these companies has come because of Google's increase in market share from 43% in 2007 to about 55% in 2009.[27]
There are two common themes in all of these graphs. The first is that Google's market share has a direct inverse relationship to the market share of the leading competitors. The second is that this inverse relationship began around 2007, which is around the time Google began to use its "Universal Search" method.[28]
Two studies examined the effects of personalized screening and ordering tools, and the results show a positive correlation between personalized search and the quality of consumers' decisions:
The first study was conducted by Kristin Diehl of the University of South Carolina. Her research found that reducing search costs led to lower-quality choices, because "consumers make worse choices because lower search costs cause them to consider inferior options." It also showed that if consumers have a specific goal in mind, they will extend their search, resulting in an even worse decision.[29] The second study, by Gerald Haubl of the University of Alberta and Benedict G.C. Dellaert of Maastricht University, mainly focused on recommendation systems. Both studies concluded that a personalized search and recommendation system significantly improved consumers' decision quality and reduced the number of products inspected.[29]
On the same note, the use of filter bubbles in personalized search has also brought several benefits to users. For instance, filter bubbles have the potential to enhance opinion diversity by allowing like-minded citizens to come together and reinforce their beliefs. They also help protect users from fake and extremist content by enclosing them in bubbles of reliable and verifiable information.[30] Filter bubbles can be an important element of information freedom by providing users with more choice.[30]
Personalized search has also proved to work to the benefit of the user by improving information search results. It tailors results to the user's needs by matching what the user wants with past search history.[31] This helps reduce the amount of irrelevant information and the time users spend searching. For instance, Google keeps the user's search history and matches it against the user's next queries. Google achieves this through three important techniques: (i) query reformulation using extra knowledge, i.e., expansion or refinement of a query; (ii) post-filtering or re-ranking of the retrieved documents (based on the user profile or the context); and (iii) improvement of the IR model.[31]
Personalized search can improve search quality significantly and there are mainly two ways to achieve this goal:
The first model available is based on the users' historical searches and search locations. People are probably familiar with this model since they often find the results reflecting their current location and previous searches.
There is another way to personalize search results. In Bracha Shapira and Boaz Zabar's "Personalized Search: Integrating Collaboration and Social Networks", the authors focused on a model that utilizes a recommendation system.[32] This model shows results from other users who have searched for similar keywords. The authors examined keyword search, the recommendation system, and the recommendation system with a social network working separately, and compared the results in terms of search quality. The results show that a personalized search engine with the recommendation system produces better-quality results than the standard search engine, and that the recommendation system with a social network improves them even further.
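A toy version of such a recommendation model can be sketched as follows: alongside keyword results, recommend pages that other users with similar queries ended up selecting. The query log, similarity measure, and data layout are illustrative assumptions, not the authors' actual system:

```python
# Toy collaborative recommendation in the spirit of a "similar keywords"
# model: score each page by how similar the queries that led other users to
# it are to the current query (Jaccard similarity over query terms).

def recommend(query_terms, query_log, top_n=3):
    # query_log: list of (past_query_terms, selected_url) pairs from other users
    scores = {}
    q = set(query_terms)
    for past_terms, url in query_log:
        similarity = len(q & set(past_terms)) / len(q | set(past_terms))
        if similarity > 0:
            scores[url] = scores.get(url, 0.0) + similarity
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

log = [
    (["python", "tutorial"], "docs.example/python"),
    (["python", "snake"], "zoo.example/snakes"),
    (["python", "tutorial", "beginner"], "docs.example/python"),
]
recs = recommend(["python", "tutorial"], log)
```

Here the tutorial page, selected twice by users with closely matching queries, outranks the page reached via the dissimilar "snake" query.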
The recent paper "Search personalization with embeddings" shows that a new embedding model for search personalization, in which users are embedded in a topical interest space, produces better search results than strong learning-to-rank models.
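The idea of embedding users in a topical interest space can be illustrated minimally: users, queries, and documents share a vector space, and a document's score blends query relevance with the user's interest vector. The vectors and the additive scoring rule below are illustrative assumptions, not the paper's model:

```python
# Minimal sketch of embedding-based personalization: a document's score
# combines standard query-document relevance with user-document affinity.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def personalized_score(user_vec, query_vec, doc_vec, alpha=0.5):
    # Blend query relevance with the user's topical interest.
    return dot(query_vec, doc_vec) + alpha * dot(user_vec, doc_vec)

user = [1.0, 0.0]        # interested in topic 0
query = [0.5, 0.5]       # ambiguous query touching both topics
doc_topic0 = [1.0, 0.0]
doc_topic1 = [0.0, 1.0]
s0 = personalized_score(user, query, doc_topic0)
s1 = personalized_score(user, query, doc_topic1)
```

Although the query is equally relevant to both documents, the user's interest vector tips the score toward the topic-0 document.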
The foundation of the argument against personalized search is that it limits users' exposure to material that would be relevant to their search query but is not displayed because it differs from their interests and history. Search personalization takes the objectivity out of the search engine and thereby undermines it: "Objectivity matters little when you know what you are looking for, but its lack is problematic when you do not".[33] Another criticism of search personalization is that it limits a core function of the web: the collection and sharing of information. It prevents users from easily accessing all the information available for a specific search query and adds a bias to users' queries. If a user with a particular set of interests or internet history uses the web to research a controversial issue, the search results will reflect that history; the user may not be shown both sides of the issue and may miss potentially important information if their interests lean to one side or the other. A study of search personalization's effects on search results in Google News found that different users received news stories in different orders, even though each user entered the same search query. According to Bates, "only 12% of the searchers had the same three stories in the same order. This to me is prima facie evidence that there is filtering going on".[34] If search personalization were not active, all the results should in theory have been the same stories in an identical order.
Another disadvantage of search personalization is that internet companies such as Google are gathering and potentially selling their users' internet interests and histories to other companies. This raises a privacy issue concerning whether people are comfortable with companies gathering and selling their internet information without their consent or knowledge. Many web users are unaware of the use of search personalization and even fewer have knowledge that user data is a valuable commodity for internet companies.
E. Pariser, author of The Filter Bubble, explains how search personalization differs between Facebook and Google. Facebook implements personalization based on how much people share and what pages they "like": an individual's social interactions, whose profiles they visit most, and who they message or chat with are all indicators Facebook uses. Google, rather than using what people share, takes into consideration what users "click" to filter what comes up in searches. In addition, Facebook searches are not necessarily as private as Google's: Facebook draws on the more public self, and users share what other people want to see. Even while tagging photographs, Facebook uses personalization and face recognition to automatically assign a name to a face. Facebook's like button enlists its users to do their own personalization for the website: the posts a user comments on or likes tell Facebook what type of posts they will be interested in in the future, and help it predict what they will "comment on, share, or spam in the future."[35] The predictions are combined into one relevancy score, which helps Facebook decide what to show and what to filter out.[35]
In terms of Google, users are offered similar websites and resources based on what they initially click. Other websites use the same filtering tactic to better adhere to user preferences: Netflix, for example, draws on users' search history to suggest movies they may be interested in, and sites like Amazon and personal shopping sites use other people's history to serve their users' interests better. Twitter uses personalization by "suggesting" people to follow and, based on who one follows, tweets and retweets at, filters the suggestions most relevant to the user. LinkedIn personalizes search results at two levels.[14] LinkedIn federated search exploits user intent to personalize the vertical order: for the same query, such as "software engineer", depending on whether a searcher has hiring or job-seeking intent, he or she is served either people or jobs as the primary vertical. Within each vertical, e.g., people search, result rankings are also personalized by taking into account the similarity and social relationships between searchers and results. Mark Zuckerberg, founder of Facebook, believed that people have only one identity. E. Pariser argues that this is completely false and that search personalization is just another way to prove it. Although personalized search may seem helpful, it is not a very accurate representation of any person: there are instances where people search and share things in order to make themselves look better, for example looking up and sharing political articles and other intellectual articles. Many sites are used for different purposes that do not make up one person's identity at all, but provide false representations instead.[16]
|
https://en.wikipedia.org/wiki/Personalized_search
|
Product finders are information systems that help consumers to identify products within a large palette of similar alternative products. Product finders differ in complexity, the more complex among them being a special case of decision support systems. Conventional decision support systems, however, aim at specialized user groups, e.g. marketing managers, whereas product finders focus on consumers.
Usually, product finders are part of an e-shop or an online presentation of a product line. As part of an e-shop, a product finder ideally leads to an online purchase, while conventional distribution channels (e.g. shops, ordering by phone) are involved when product finders are part of an online presentation.
Product finders are best suited for product groups whose individual products are comparable by specific criteria. This is true, in most cases, of technical products such as notebooks: their features (e.g. clock rate, hard disk size, price, screen size) may influence the consumer's decision.
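A product finder over comparable features like these can be sketched in a few lines: filter the catalogue by the consumer's criteria and sort what remains. The notebook data and field names are illustrative assumptions:

```python
# Minimal product-finder sketch: filter products by feature criteria
# (clock rate, disk size, price) and sort the matches by price.

def find_products(products, min_ghz=0.0, min_disk_gb=0, max_price=float("inf")):
    matches = [p for p in products
               if p["clock_ghz"] >= min_ghz
               and p["disk_gb"] >= min_disk_gb
               and p["price"] <= max_price]
    return sorted(matches, key=lambda p: p["price"])

notebooks = [
    {"name": "A", "clock_ghz": 2.4, "disk_gb": 512, "price": 900},
    {"name": "B", "clock_ghz": 1.8, "disk_gb": 256, "price": 600},
    {"name": "C", "clock_ghz": 3.0, "disk_gb": 1024, "price": 1400},
]
hits = find_products(notebooks, min_ghz=2.0, max_price=1000)
```

With a 2 GHz minimum and a 1000 budget, only notebook A survives: B is too slow and C too expensive.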
Besides technical products such as notebooks, cars, dishwashers, cell phones or GPS devices, non-technical products such as wine, socks, toothbrushes or nails may be supported by product finders as well, as comparison by features takes place.
On the other hand, the application of product finders is limited when it comes to individualized products such as books, jewelry or compact discs as consumers do not select such products along specific, comparable features.
Furthermore, product finders are used not only for products sensu stricto, but for services as well, e.g. account types of a bank, health insurance, or communication providers. In these cases, the term service finder is sometimes used.
Product finders are used by manufacturers, by dealers (comprising several manufacturers), and by web portals (comprising several dealers).
There is a move to integrate product finders with social networking and group buying, allowing users to add and rate products and locations and to purchase recommended products with others.
Technical implementations differ in their benefit for the consumers. The following list displays the main approaches, from simple ones to more complex ones, each with a typical example:
Product finders play an important role in e-commerce: items have to be categorized to better serve consumers searching for a desired product, and recommender systems recommend items based on consumers' purchases.
As people move from offline to online commerce, it becomes more difficult and cumbersome to deal with the large amounts of data about items and people that need to be kept and analyzed in order to serve consumers better. Such amounts of data cannot be handled by manpower alone; machines can deal with them efficiently and effectively.
Online commerce has gained a lot of popularity over the past decade. Large online consumer-to-consumer marketplaces such as eBay, Amazon, and Alibaba feature millions of items, with more entered into the marketplace every day. Item categorization helps in classifying products and giving them tags and labels, which helps consumers find them.
Traditionally, a bag-of-words model approach is used to solve the problem, with either no hierarchy at all or a human-defined hierarchy.
A newer method[4] uses a hierarchical approach that decomposes the classification problem into a coarse-level task and a fine-level task, with the hierarchy derived through latent class model discovery. A simple classifier performs the coarse-level classification (because the data is so large, a more sophisticated approach would take too long), while a more sophisticated model separates classes at the fine level.
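The coarse/fine decomposition can be illustrated with a toy two-stage classifier: a cheap keyword rule picks the top-level category, then a per-category lookup (standing in for the more sophisticated model) picks the fine label. All categories and keywords here are illustrative assumptions:

```python
# Toy two-stage item categorization: a cheap coarse classifier over the whole
# catalogue, then a finer per-category model (here, a simple lookup table).

COARSE_KEYWORDS = {
    "electronics": {"phone", "laptop", "camera"},
    "clothing": {"shirt", "jeans", "jacket"},
}
FINE_LABELS = {
    "electronics": {"phone": "electronics/mobile",
                    "laptop": "electronics/computers",
                    "camera": "electronics/photography"},
    "clothing": {"shirt": "clothing/tops",
                 "jeans": "clothing/bottoms",
                 "jacket": "clothing/outerwear"},
}

def classify(title):
    words = set(title.lower().split())
    # Coarse stage: keyword overlap picks the top-level category.
    coarse = max(COARSE_KEYWORDS,
                 key=lambda c: len(words & COARSE_KEYWORDS[c]))
    # Fine stage: a more specific model within the chosen category.
    for w in words:
        if w in FINE_LABELS[coarse]:
            return FINE_LABELS[coarse][w]
    return coarse

label = classify("Blue denim jeans")
```

The coarse stage only has to separate "electronics" from "clothing"; the harder, finer distinctions are left to the second stage within each branch.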
Highlights/Methods used:
The problems faced by these online e-commerce companies are:
Recommendation systems are used to recommend items or products to consumers based on their purchase or search history.
|
https://en.wikipedia.org/wiki/Product_finder
|
A review site is a website on which reviews can be posted about people, businesses, products, or services. These sites may use Web 2.0 techniques to gather reviews from site users or may employ professional writers to author reviews on the topic of concern for the site.
Early examples of review sites included ConsumerDemocracy.com, Complaints.com, planetfeedback.com,[1] Epinions.com[2] and ThatGuyWithTheGlasses.com (later rebranded to Channel Awesome in 2014).[3]
Review sites are generally supported by advertising. Some business review sites may also allow businesses to pay for enhanced listings, which do not affect the reviews and ratings. Product review sites may be supported by providing affiliate links to the websites that sell the reviewed items, which pay the site on a per-click or per-sale basis.
With the growing popularity of affiliate programs on the Internet, a new sort of review site has emerged: the affiliate product review site. This type of site is usually professionally designed and written to maximize conversions, and is used by e-commerce marketers. It is often based on a blog platform like WordPress or Squarespace, has privacy and contact pages to help with SEO, and has commenting and interactivity turned off. It will also have an e-mail-gathering device in the form of an opt-in or drop-down list to help the aspiring e-commerce business person build an e-mail list to market to.
Because of the specialized marketing thrust of this type of website, the reviews are not always seen to be objective by consumers. Because of this, the FTC has provided several guidelines requiring publishers to disclose when they benefit monetarily from the content in the form of advertising, affiliate marketing, etc.[4]
Studies by independent research groups show that rating and review sites influence consumer shopping behavior.[citation needed] In an academic study published in 2008, empirical results demonstrated that the number of online user reviews is a good indicator of the intensity of the underlying word-of-mouth effect and increases awareness.[5]
Originally, reviews were generally anonymous, and in many countries, review sites often have policies that preclude the release of any identifying information without a court order. According to Kurt Opsahl, a staff attorney for the Electronic Frontier Foundation (EFF), anonymity of reviewers is important.[6]
Reviewers are always required to provide an email address and are often encouraged to use their real name. Yelp also requires a photo of the reviewer.[7]
A rating site (commonly known as a rate-me site) is a website designed for users to vote on or rate people, content, or other things. Rating sites can cover tangible and non-tangible attributes, but most commonly they are based on physical appearance: body parts, voice, personality, etc. They may also be devoted to the subjects' occupational ability, for example teachers, professors, lawyers, or doctors. Rating sites can typically be about anything a user can think of.[8]
Rating sites typically show a series of images (or other content) in random fashion, or chosen by computer algorithm, instead of allowing users to choose. Users are given a choice of rating or assessment, which is generally done quickly and without great deliberation. Some sites ask users to score items on a scale of 1 to 10, or with a simple yes or no. Others, such as BabeVsBabe.com, ask users to choose between two pictures. Typically, the site gives instant feedback in terms of the item's running score, or the percentage of other users who agree with the assessment. Rating sites sometimes offer aggregate statistics or "best" and "worst" lists. Most allow users to submit their own image, sample, or other relevant content for others to rate. Some require the submission as a condition of membership.
Rating sites usually provide some features of social network services and online communities such as discussion forums, messaging, and private messaging. Some function as a form of dating service, in that for a fee they allow users to contact other users. Many social networks and other sites include rating features. For example, MySpace and TradePics have optional "rank" features for users to be rated by other users.
One category of rating sites, such as Hot or Not or HotFlation, is devoted to rating contributors' physical attractiveness. Other looks-based rating sites include RateMyFace.com (an early site, launched in the Summer of 1999) and NameMyVote, which asks users to guess a person's political party based on their looks. Some sites are devoted to rating the appearance of pets (e.g. kittenwar.com, petsinclothes.com, and meormypet.com). Another class allows users to rate short video or music clips. One variant, a "Darwinian poetry" site, allows users to compare two samples of entirely computer-generated poetry using a Condorcet method. Successful poems "mate" to produce poems of ever-increasing appeal. Yet others are devoted to disliked men (DoucheBagAlert), bowel movements (ratemypoo.com), unsigned bands (RateBandsOnline.com), politics (RateMyTory.Com), nightclubs, business professionals, clothes, cars, and many other subjects.
When rating sites are dedicated to rating products (epinions.com), brands (brandmojo.org), services, or businesses rather than to rating people (i-rate.me), and are used for more serious or well thought-out ratings, they tend to be called review sites, although the distinction is not exact.
The popularity of rating people and their abilities on a scale, such as 1–10, traces back to at least the late 20th century, and the algorithms for aggregating quantitative rating scores far earlier than that. The 1979 film 10 is an example of this. The title derives from a rating system Dudley Moore uses to grade women based upon beauty, with a 10 being the epitome of attractiveness. The notion of a "perfect ten" came into common usage as a result of this film.[citation needed] In the film, Moore rates Bo Derek an "11".
In 1990, one of the first computer-based photographic attractiveness rating studies was conducted. During this year psychologists J. H. Langlois and L. A. Roggman examined whether facial attractiveness was linked to geometric averageness. To test their hypothesis, they selected photographs of 192 male and female Caucasian faces, each of which was computer scanned and digitized. They then made computer-processed composites of each image, as 2-, 4-, 8-, 16-, and 32-face composites. The individual and composite faces were then rated for attractiveness by 300 judges on a 5-point Likert scale (1 = very unattractive, 5 = very attractive). The 32-composite face was the most visually attractive of all the faces.[9] Subsequent studies were done on a 10-point scale.
In 1992, Perfect 10 magazine and video programming was launched by Xui, the original executive editor of Spin magazine, to feature only women who would rank 10 for attractiveness. Julie Kruis, a swimsuit model, was the original spokesmodel. In 1996, Rasen created the first "Perfect 10 Model Search" at the Pure Platinum club near Fort Lauderdale, Florida. His contests were broadcast on Network 1, a domestic C-band satellite channel. Other unrelated "Perfect 10" contests became popular throughout the 1990s.
The first ratings sites started in 1999, with RateMyFace.com (created by Michael Hussey) and TeacherRatings.com (created by John Swapceinski, re-launched with Hussey and further developed by Patrick Nagle as RateMyProfessors). The most popular of all time, Hot or Not, was launched in October 2000. Hot or Not generated many spin-offs and imitators. There are now hundreds of such sites, and even meta-sites that categorize them all. The rating site concept has also been expanded to include Twitter and Facebook accounts that provide ratings, such as the humorous Twitter account WeRateDogs.
Most review sites make little or no attempt to restrict postings, or to verify the information in the reviews. Critics point out that positive reviews are sometimes written by the businesses or individuals being reviewed, while negative reviews may be written by competitors, disgruntled employees, or anyone with a grudge against the business being reviewed. Some merchants also offer incentives for customers to review their products favorably, which skews reviews in their favor.[10] So-called reputation management firms may also submit false positive reviews on behalf of businesses. In 2011, RateMDs.com and Yelp detected dozens of positive reviews of doctors, submitted from the same IP addresses by a firm called Medical Justice.[11]
Furthermore, studies of research methodology have shown that in forums where people are able to post opinions publicly, group polarization often occurs, and the result is very positive comments, very negative comments, and little in between, meaning that those who would have been in the middle are either silent or pulled to one extreme or the other.[12]
Rating sites have a social feedback effect; some high school principals and administrators, for example, have begun to regularly monitor the status of their teaching staff via student-controlled "rating sites". Some looks-based sites have come under criticism for promoting vanity and self-consciousness. Some claim they potentially expose users to sexual predators.
Most rating sites suffer from a similar self-selection bias, since only highly motivated individuals devote their time to completing these rankings, which does not yield a fair sampling of the population.
Many operators of review sites acknowledge that reviews may not be objective, and that ratings may not be statistically valid.
In some cases, government authorities have taken legal actions against businesses that post false reviews. In 2009, the State of New York required Lifestyle Lift, a cosmetic surgery company, to pay $300,000 in fines.[13]
|
https://en.wikipedia.org/wiki/Rating_site
|
Reputation management refers to the influencing, controlling, enhancing, or concealing of an individual's or group's reputation. It is a marketing technique used to modify a person's or a company's reputation in a positive way.[1] The growth of the internet and social media led to the growth of reputation management companies, with search results as a core part of a client's reputation.[2] Online reputation management (ORM) involves overseeing and influencing the search engine results related to products and services.[3]
Ethical grey areas include mug shot removal sites, astroturfing customer review sites, censoring complaints, and using search engine optimization tactics to influence results. In other cases, the ethical lines are clear; some reputation management companies are closely connected to websites that publish unverified and libelous statements about people.[4] Such unethical companies charge thousands of dollars to remove these posts – temporarily – from their websites.[4]
The field of public relations has evolved with the rise of the internet and social media. Reputation management is now broadly categorized into two areas: online reputation management and offline reputation management.
Online reputation management focuses on the management of product and service search results within the digital space. A variety of electronic markets and online communities like eBay, Amazon and Alibaba have ORM systems built in, and using effective control nodes can minimize the threat and protect systems from possible misuses and abuses by malicious nodes in decentralized overlay networks.[5] Big Data has the potential to be employed in overseeing and enhancing the reputation of organizations.[6]
Offline reputation management shapes public perception of a given entity outside the digital sphere.[7] Popular controls for offline reputation management include social responsibility, media visibility, and press releases in print media and sponsorship, amongst related tools.[8]
Reputation is a social construct based on the opinion other people hold about a person or thing. Before the internet was developed, consumers wanting to learn about a company had fewer options. They had access to resources such as the Yellow Pages, but mostly relied on word-of-mouth. A company's reputation depended on personal experience.[citation needed] As a company grew and expanded, it was subject to the market's perception of the brand. Public relations was developed to manage the image and reputation of a company or individual.[citation needed] The concept was initially created to broaden public relations outside of media relations.[9] Academic studies have identified it as a driving force behind Fortune 500 corporate public relations since the beginning of the 21st century.[10]
As of 1988, reputation management was acknowledged as a valuable intangible asset and corporate necessity, which can be one of the most important sources of competitive edge in a fiercely competitive market,[11] and with firms under scrutiny from the business community, regulators, and corporate governance watchdogs, good reputation management practices would help firms cope with this scrutiny.[12]
As of 2006, reputation management practices reinforce and aid a corporation's branding objectives. Good reputation management practices help an entity maintain staff confidence as a control on public perceptions; if these perceptions are undermined or ignored the cost can be high, and in the long run may cripple employee confidence, a risk no employer can afford to take, as staff morale is one of the most important drivers of company performance.[13]
Originally, public relations included printed media, events and networking campaigns. At the end of the 1990s search engines became widely used. The popularity of the internet introduced new marketing and branding opportunities. Where once journalists were the main source of media content, blogs, review sites and social media gave a voice to consumers regardless of qualification. Public relations became part of online reputation management (ORM). ORM includes traditional reputation strategies of public relations but also focuses on building a long-term reputation strategy that is consistent across all web-based channels and platforms. ORM includes search engine reputation management, which is designed to counter negative search results and elevate positive content.[14][15] Reputation management (sometimes referred to as rep management or ORM) is the practice of attempting to shape public perception of a person or organization by influencing information about that entity, primarily online.[16] This shaping of perceptions is necessitated by the role of consumers in any organization and the awareness of how much, if ignored, these perceptions may harm a company's performance at any time of the year, a risk no entrepreneur or company executive can afford.[17]
Specifically, reputation management involves the monitoring of the reputation of an individual or a brand on the internet, primarily focusing on the various social media platforms such as Facebook, Instagram, YouTube, etc., addressing content which is potentially damaging to it, and using customer feedback to try to solve problems before they damage the individual's or brand's reputation.[18] A major part of reputation management involves suppressing negative search results, while highlighting positive ones.[19] For businesses, reputation management usually involves an attempt to bridge the gap between how a company perceives itself and how others view it.[20]
In 2012, an article titled "Social Media Research in Advertising, Communication, Marketing and Public Relations", written by Hyoungkoo Khang et al., was published.[21] Its references to Kaplan and Haenlein's theory of social presence highlight the "concept of self-presentation."[22]
Khang highlights that "companies must monitor individual's comments regarding service 24/7."[23] This implies that the reputation of a company essentially relies on the consumer, as consumers are the ones who can make or break it. A 2015 study commissioned by the American Association of Advertising Agencies concluded that 4 percent of consumers believed advertisers and marketers practice integrity.[24]
According to Susan Crawford, a cyberlaw specialist from Cardozo Law School, most websites will remove negative content when contacted to avoid litigation. The Wall Street Journal noted that in some cases, writing a letter to a detractor can have unintended consequences, though the company makes an effort to avoid writing to certain website operators that are likely to respond negatively. The company says it respects the First Amendment and does not try to remove "genuinely newsworthy speech." It generally cannot remove major government-related news stories from established publications or court records.[25][26]
In 2015, Jon Ronson, author of "So You've Been Publicly Shamed", said that reputation management helped some people who became agoraphobic due to public humiliation from online shaming, but that it was an expensive service that many could not afford.[27][28]
In 2011, controversy around the Taco Bell restaurant chain arose when public accusations were made that their "seasoned beef" product was made up of only 35% real beef. A class action lawsuit was filed by the law firm Beasley Allen against Taco Bell. The suit was voluntarily withdrawn, with Beasley Allen citing that "From the inception of this case, we stated that if Taco Bell would make certain changes regarding disclosure and marketing of its 'seasoned beef' product, the case could be dismissed."[29][30] Taco Bell responded to the case being withdrawn by launching a reputation management campaign titled "Would it kill you to say you're sorry?" that ran advertisements in various news outlets in print and online, which attempted to draw attention to the voluntary withdrawal of the case.[31]
In 2015, Volkswagen, a German automobile manufacturer, faced a massive €30 billion controversy. A scandal erupted when it was revealed that 11 million of its vehicles globally had been fitted with devices designed to mask the true levels of harmful emissions. The reaction from the company's investors was swift as Volkswagen's stock value started to fall rapidly.[32] The brand released a two-minute video in which the CEO and other representatives apologized after pleading guilty. However, this wasn't enough to change the public perception. The automotive giant had to bring in four PR firms led by Hering Schuppener, a German crisis communications and reputation management agency.[33] To rebuild its reputation, Volkswagen launched an initiative to transition to electric motors on an unprecedented scale. The company released print media and published pieces in top publications to show its commitment to developing electric and hybrid vehicle models worldwide, which helped improve its CSR image.[33]
Starbucks, the coffeehouse chain, also faced reputation damage in response to the arrests of two African-American men at its Philadelphia branch. In response to a request to use the bathroom, the branch's manager denied the two men access since they hadn't bought anything, calling the police when they refused to leave.[34] The incident sparked massive public outrage and boycotts across the country.[35] SYPartners, a business reputation consultancy, was engaged to provide Starbucks leadership with advice after the incident. Starbucks issued an apology, which was circulated across top media publications.[36] The company also initiated an anti-bias training for its 175,000 employees across 8,000 locations.[37] Starbucks also changed its policy, allowing people to sit without making a purchase. Both men also reached a settlement with Starbucks and the city.[34]
In 2024, a London restaurant was review bombed by a cybercrime group to extort £10,000. The negative reviews brought the eatery's Google rating down to 2.3 stars from 4.9 stars before the attack.[38] Maximatic Media, an online reputation management firm, was hired to identify the origin of the malicious reviews and found that they were being generated by a botnet. The agency worked with Google for the removal of these fake reviews to restore the restaurant's online reputation to a 4.8-star rating.[39]
Organisations attempt to manage their reputations on websites that many people visit, such as eBay,[40] Wikipedia, and Google. Some of the tactics used by reputation management firms include:[41]
The practice of reputation management raises many ethical questions.[44][49] It is widely disagreed upon where the line for disclosure, astroturfing, and censorship should be drawn. Firms have been known to hire staff to pose as bloggers on third-party sites without disclosing they were paid, and some have been criticized for asking websites to remove negative posts.[14][42] The exposure of unethical reputation management, if it becomes known, may itself be risky to the reputation of the firm that attempts it.[50]
In 2007 Google declared there to be nothing inherently wrong with reputation management,[43] and even introduced a toolset in 2011 for users to monitor their online identity and request the removal of unwanted content.[51] Many firms are selective about the clients they accept. For example, they may avoid individuals who committed violent crimes and are looking to push information about their crimes lower in search results.[44]
In 2010, a study showed that Naymz, one of the first Web 2.0 services to provide utilities for Online Reputation Management (ORM), had developed a method to assess the online reputation of its members (RepScore) that was rather easy to deceive. The study found that the highest level of online reputation was easily achieved by engaging a small social group of nine persons who connect with each other and provide reciprocal positive feedback and endorsements.[52] As of December 2017, Naymz was shut down.
In 2015, the online retailer Amazon.com sued 1,114 people who were paid to publish fake five-star reviews for products. These reviews were created using a website for macrotasking, Fiverr.com.[53][54][55] Several other companies offer fake Yelp and Facebook reviews, and one journalist amassed five-star reviews for a business that doesn't exist, from social media accounts that have also given overwhelmingly positive reviews to "a chiropractor in Arizona, a hair salon in London, a limo company in North Carolina, a realtor in Texas, and a locksmith in Florida, among other far-flung businesses".[56] In 2007, a study by the University of California, Berkeley found that some sellers on eBay were undertaking reputation management by selling products at a discount in exchange for positive feedback to game the system.[57]
In 2016, the Washington Post detailed 25 court cases, at least 15 of which had false addresses for the defendant. The court cases had similar language, and the defendant agreed to the injunction by the plaintiff, which allowed the reputation management company to issue takedown notices to Google, Yelp, Leagle, Ripoff Report, various news sites, and other websites.[58]
|
https://en.wikipedia.org/wiki/Reputation_management
|
The forward–backward algorithm is an inference algorithm for hidden Markov models which computes the posterior marginals of all hidden state variables given a sequence of observations/emissions $o_{1:T} := o_1, \dots, o_T$, i.e. it computes, for all hidden state variables $X_t \in \{X_1, \dots, X_T\}$, the distribution $P(X_t \mid o_{1:T})$. This inference task is usually called smoothing. The algorithm makes use of the principle of dynamic programming to efficiently compute the values that are required to obtain the posterior marginal distributions in two passes. The first pass goes forward in time while the second goes backward in time; hence the name forward–backward algorithm.
The termforward–backward algorithmis also used to refer to any algorithm belonging to the general class of algorithms that operate on sequence models in a forward–backward manner. In this sense, the descriptions in the remainder of this article refer only to one specific instance of this class.
In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all $t \in \{1, \dots, T\}$, the probability of ending up in any particular state given the first $t$ observations in the sequence, i.e. $P(X_t \mid o_{1:t})$. In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point $t$, i.e. $P(o_{t+1:T} \mid X_t)$. These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence:

$$P(X_t \mid o_{1:T}) = P(X_t \mid o_{1:t}, o_{t+1:T}) \propto P(o_{t+1:T} \mid X_t)\, P(X_t \mid o_{1:t})$$
The last step follows from an application of Bayes' rule and the conditional independence of $o_{t+1:T}$ and $o_{1:t}$ given $X_t$.
As outlined above, the algorithm involves three steps: computing the forward probabilities, computing the backward probabilities, and computing the smoothed values.
The forward and backward steps may also be called "forward message pass" and "backward message pass" – these terms are due to the message passing used in general belief propagation approaches. At each single observation in the sequence, probabilities to be used for calculations at the next observation are computed. The smoothing step can be calculated simultaneously during the backward pass. This step allows the algorithm to take into account any past observations of output for computing more accurate results.
The forward–backward algorithm can be used to find the most likely state for any point in time. It cannot, however, be used to find the most likely sequence of states (see Viterbi algorithm).
The following description will use matrices of probability values instead of probability distributions. However, it is important to note that the forward-backward algorithm can generally be applied to both continuous and discrete probability models.
We transform the probability distributions related to a given hidden Markov model into matrix notation as follows.
The transition probabilities $\mathbf{P}(X_t \mid X_{t-1})$ of a given random variable $X_t$ representing all possible states in the hidden Markov model will be represented by the matrix $\mathbf{T}$, where the column index $j$ will represent the target state and the row index $i$ represents the start state. A transition from row-vector state $\mathbf{\pi}_t$ to the incremental row-vector state $\mathbf{\pi}_{t+1}$ is written as $\mathbf{\pi}_{t+1} = \mathbf{\pi}_t \mathbf{T}$. The example below represents a system where the probability of staying in the same state after each step is 70% and the probability of transitioning to the other state is 30%. The transition matrix is then:

$$\mathbf{T} = \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}$$
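As a minimal sketch of this propagation step (using NumPy; the 70/30 matrix is the example from the text, and the starting state is an assumption for illustration):

```python
import numpy as np

# Transition matrix T: row index = start state, column index = target state.
# 70% probability of staying in the same state, 30% of switching.
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])

# A row-vector state distribution advances one step as pi_{t+1} = pi_t T.
pi_t = np.array([1.0, 0.0])   # assume we are certainly in state 1
pi_next = pi_t @ T            # -> [0.7, 0.3]
```

Starting from certainty in state 1, one step of the chain spreads the probability mass exactly according to the first row of T.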
In a typical Markov model, we would multiply a state vector by this matrix to obtain the probabilities for the subsequent state. In a hidden Markov model the state is unknown, and we instead observe events associated with the possible states. An event matrix of the form:

$$\mathbf{B} = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}$$
provides the probabilities for observing events given a particular state. In the above example, event 1 will be observed 90% of the time if we are in state 1, while event 2 has a 10% probability of occurring in this state. In contrast, event 1 will only be observed 20% of the time if we are in state 2, and event 2 has an 80% chance of occurring. Given an arbitrary row-vector describing the state of the system ($\mathbf{\pi}$), the probability of observing event j is then:

$$\mathbf{P}(O = j) = \sum_{i} \pi_i\, B_{i,j}$$
The probability of a given state leading to the observed event j can be represented in matrix form by multiplying the state row-vector ($\mathbf{\pi}$) with an observation matrix ($\mathbf{O_j} = \mathrm{diag}(B_{*,o_j})$) containing only diagonal entries. Continuing the above example, the observation matrix for event 1 would be:

$$\mathbf{O_1} = \begin{pmatrix} 0.9 & 0 \\ 0 & 0.2 \end{pmatrix}$$
This allows us to calculate the new unnormalized probability state vector $\mathbf{\pi}'$ through Bayes rule, weighting by the likelihood that each element of $\mathbf{\pi}$ generated event 1 as:

$$\mathbf{\pi}' = \mathbf{\pi}\, \mathbf{O_1}$$
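This Bayes-rule weighting step can be sketched as follows (a minimal illustration using the event matrix from the text; the uniform starting belief is an assumption for the example):

```python
import numpy as np

# Event matrix: B[i, j] = probability of observing event j when in state i.
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Observation matrix for event 1: the event-1 column of B on the diagonal.
O1 = np.diag(B[:, 0])          # diag(0.9, 0.2)

pi = np.array([0.5, 0.5])      # assumed uniform belief over the two states
unnormalized = pi @ O1         # weight each state by its likelihood of event 1
pi_posterior = unnormalized / unnormalized.sum()   # renormalize (Bayes rule)
```

Here `pi_posterior` comes out to roughly (0.818, 0.182): observing event 1 makes state 1 far more plausible.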
We can now make this general procedure specific to our series of observations. Assuming an initial state vector $\mathbf{\pi}_0$ (which can be optimized as a parameter through repetitions of the forward-backward procedure), we begin with $\mathbf{f_{0:0}} = \mathbf{\pi}_0$, then updating the state distribution and weighting by the likelihood of the first observation:

$$\mathbf{f_{0:1}} = \mathbf{\pi}_0\, \mathbf{T}\, \mathbf{O_{o_1}}$$
This process can be carried forward with additional observations using:

$$\mathbf{f_{0:t}} = \mathbf{f_{0:t-1}}\, \mathbf{T}\, \mathbf{O_{o_t}}$$
This value is the forward unnormalized probability vector. The i'th entry of this vector provides:

$$\mathbf{f_{0:t}}(i) = \mathbf{P}(o_1, o_2, \dots, o_t, X_t = x_i \mid \mathbf{\pi}_0)$$
Typically, we will normalize the probability vector at each step so that its entries sum to 1. A scaling factor is thus introduced at each step such that:

$$\mathbf{\hat{f}_{0:t}} = c_t^{-1}\, \mathbf{\hat{f}_{0:t-1}}\, \mathbf{T}\, \mathbf{O_{o_t}}$$
where $\mathbf{\hat{f}_{0:t-1}}$ represents the scaled vector from the previous step and $c_t$ represents the scaling factor that causes the resulting vector's entries to sum to 1. The product of the scaling factors is the total probability for observing the given events irrespective of the final states:

$$\mathbf{P}(o_1, o_2, \dots, o_t \mid \mathbf{\pi}_0) = \prod_{s=1}^{t} c_s$$
This allows us to interpret the scaled probability vector as:

$$\mathbf{\hat{f}_{0:t}}(i) = \frac{\mathbf{f_{0:t}}(i)}{\prod_{s=1}^{t} c_s} = \frac{\mathbf{P}(o_1, \dots, o_t, X_t = x_i \mid \mathbf{\pi}_0)}{\mathbf{P}(o_1, \dots, o_t \mid \mathbf{\pi}_0)} = \mathbf{P}(X_t = x_i \mid o_1, \dots, o_t, \mathbf{\pi}_0)$$
We thus find that the product of the scaling factors provides us with the total probability for observing the given sequence up to time t and that the scaled probability vector provides us with the probability of being in each state at this time.
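The scaled forward pass can be sketched end to end; this uses the umbrella-world matrices from the worked example further below, and recovers both the filtered distributions and the total observation probability as the product of the scaling factors:

```python
import numpy as np

T = np.array([[0.7, 0.3], [0.3, 0.7]])   # transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # event (emission) matrix
obs = [0, 0, 1, 0, 0]                    # umbrella, umbrella, no umbrella, ...

f = np.array([0.5, 0.5])                 # uniform prior pi_0
scales, forward = [], []
for o in obs:
    f = (f @ T) * B[:, o]   # propagate one step, then weight by the likelihood
    c = f.sum()             # scaling factor c_t
    scales.append(c)
    f = f / c               # normalized: P(X_t | o_{1:t})
    forward.append(f)

# Product of the scaling factors = P(o_1, ..., o_T): the probability of the
# whole observation sequence irrespective of the final state.
total_prob = float(np.prod(scales))
```

On this sequence, the filtered rain probability jumps above 0.8 on umbrella days and drops below 0.2 on the "no umbrella" day.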
A similar procedure can be constructed to find backward probabilities. These intend to provide the probabilities:

$$\mathbf{b_{t:T}}(i) = \mathbf{P}(o_{t+1}, o_{t+2}, \dots, o_T \mid X_t = x_i)$$
That is, we now want to assume that we start in a particular state ($X_t = x_i$), and we are now interested in the probability of observing all future events from this state. Since the initial state is assumed as given (i.e. the prior probability of this state = 100%), we begin with:

$$\mathbf{b_{T:T}} = \begin{pmatrix} 1 & 1 & \dots & 1 \end{pmatrix}^T$$
Notice that we are now using a column vector while the forward probabilities used row vectors. We can then work backwards using:

$$\mathbf{b_{t-1:T}} = \mathbf{T}\, \mathbf{O_{o_t}}\, \mathbf{b_{t:T}}$$
While we could normalize this vector as well so that its entries sum to one, this is not usually done. Noting that each entry contains the probability of the future event sequence given a particular initial state, normalizing this vector would be equivalent to applying Bayes' theorem to find the likelihood of each initial state given the future events (assuming uniform priors for the final state vector). However, it is more common to scale this vector using the same $c_t$ constants used in the forward probability calculations. $\mathbf{b_{T:T}}$ is not scaled, but subsequent operations use:

$$\mathbf{\hat{b}_{t-1:T}} = c_t^{-1}\, \mathbf{T}\, \mathbf{O_{o_t}}\, \mathbf{\hat{b}_{t:T}}$$
where $\mathbf{\hat{b}_{t:T}}$ represents the previous, scaled vector. The result is that the scaled probability vector is related to the backward probabilities by:

$$\mathbf{\hat{b}_{t:T}}(i) = \frac{\mathbf{b_{t:T}}(i)}{\prod_{s=t+1}^{T} c_s}$$
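The backward recursion can be sketched the same way, again with the umbrella-world matrices. One simplification here: each backward vector is normalized to sum to 1 rather than scaled by the forward pass's c_t constants; since the smoothed values are renormalized anyway, only the ratios matter.

```python
import numpy as np

T = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 0, 1, 0, 0]

# b_{T:T} is a vector of ones: the "empty future" has probability 1.
b = np.ones(2)
backward = [b / b.sum()]
for o in reversed(obs):
    b = T @ (B[:, o] * b)   # b_{t-1:T} = T O_{o_t} b_{t:T}
    b = b / b.sum()         # normalized to sum to 1 (ratios are preserved)
    backward.insert(0, b)
# backward[t] is now proportional to P(o_{t+1:T} | X_t).
```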
This is useful because it allows us to find the total probability of being in each state at a given time, t, by multiplying these values:

$$\mathbf{\gamma_t}(i) = \mathbf{P}(X_t = x_i \mid o_1, \dots, o_T, \mathbf{\pi}_0) = \frac{\mathbf{f_{0:t}}(i) \cdot \mathbf{b_{t:T}}(i)}{\mathbf{P}(o_1, \dots, o_T \mid \mathbf{\pi}_0)} = \mathbf{\hat{f}_{0:t}}(i) \cdot \mathbf{\hat{b}_{t:T}}(i)$$
To understand this, we note that $\mathbf{f_{0:t}}(i) \cdot \mathbf{b_{t:T}}(i)$ provides the probability for observing the given events in a way that passes through state $x_i$ at time t. This probability includes the forward probabilities covering all events up to time t as well as the backward probabilities which include all future events. This is the numerator we are looking for in our equation, and we divide by the total probability of the observation sequence to normalize this value and extract only the probability that $X_t = x_i$. These values are sometimes called the "smoothed values" as they combine the forward and backward probabilities to compute a final probability.
The values $\mathbf{\gamma_t}(i)$ thus provide the probability of being in each state at time t. As such, they are useful for determining the most probable state at any time. The term "most probable state" is somewhat ambiguous. While the most probable state is the most likely to be correct at a given point, the sequence of individually probable states is not likely to be the most probable sequence. This is because the probabilities for each point are calculated independently of each other. They do not take into account the transition probabilities between states, and it is thus possible to get states at two moments (t and t+1) that are both most probable at those time points but which have very little probability of occurring together, i.e. $\mathbf{P}(X_t = x_i, X_{t+1} = x_j) \neq \mathbf{P}(X_t = x_i)\, \mathbf{P}(X_{t+1} = x_j)$. The most probable sequence of states that produced an observation sequence can be found using the Viterbi algorithm.
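Putting the two passes together, a compact forward–backward routine might look like the sketch below (not a canonical implementation; it reuses the umbrella-world parameters from the example that follows and returns the smoothed marginals):

```python
import numpy as np

def forward_backward(T, B, obs, prior):
    """Smoothed marginals gamma_t(i) = P(X_t = x_i | o_{1:T})."""
    n, S = len(obs), len(prior)
    fwd = np.zeros((n, S))
    f = np.asarray(prior, dtype=float)
    for t, o in enumerate(obs):              # forward pass
        f = (f @ T) * B[:, o]
        f = f / f.sum()
        fwd[t] = f
    bwd = np.zeros((n, S))
    b = np.ones(S)
    for t in range(n - 1, -1, -1):           # backward pass
        bwd[t] = b / b.sum()                 # proportional to P(o_{t+1:T} | X_t)
        b = T @ (B[:, obs[t]] * b)
    gamma = fwd * bwd                        # elementwise product, then
    return gamma / gamma.sum(axis=1, keepdims=True)   # renormalize each row

T = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
gamma = forward_backward(T, B, [0, 0, 1, 0, 0], [0.5, 0.5])
```

On the umbrella sequence, the day-3 marginal puts only about 31% probability on rain, matching the intuition that the "no umbrella" observation dominates that day.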
This example takes as its basis the umbrella world in Russell & Norvig 2010, Chapter 15, p. 567, in which we would like to infer the weather given observation of another person either carrying or not carrying an umbrella. We assume two possible states for the weather: state 1 = rain, state 2 = no rain. We assume that the weather has a 70% chance of staying the same each day and a 30% chance of changing. The transition probabilities are then:

$$\mathbf{T} = \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}$$
We also assume each state generates one of two possible events: event 1 = umbrella, event 2 = no umbrella. The conditional probabilities for these occurring in each state are given by the probability matrix:

$$\mathbf{B} = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}$$
We then observe the following sequence of events: {umbrella, umbrella, no umbrella, umbrella, umbrella} which we will represent in our calculations as:

$$\mathbf{O_1} = \mathbf{O_2} = \mathbf{O_4} = \mathbf{O_5} = \begin{pmatrix} 0.9 & 0 \\ 0 & 0.2 \end{pmatrix} \qquad \mathbf{O_3} = \begin{pmatrix} 0.1 & 0 \\ 0 & 0.8 \end{pmatrix}$$
Note that $\mathbf{O_3}$ differs from the others because of the "no umbrella" observation.
In computing the forward probabilities we begin with:

$$\mathbf{f_{0:0}} = \begin{pmatrix} 0.5 \\ 0.5 \end{pmatrix}$$
which is our prior state vector indicating that we don't know which state the weather is in before our observations. While a state vector should be given as a row vector, we will use the transpose of the matrix so that the calculations below are easier to read. Our calculations are then written in the form:

$$\mathbf{\hat{f}_{0:t}} = c_t^{-1}\, \mathbf{O_{o_t}}\, \mathbf{T}^T\, \mathbf{\hat{f}_{0:t-1}}$$
instead of:

$$\mathbf{\hat{f}_{0:t}} = c_t^{-1}\, \mathbf{\hat{f}_{0:t-1}}\, \mathbf{T}\, \mathbf{O_{o_t}}$$
Notice that the transformation matrix is also transposed, but in our example the transpose is equal to the original matrix. Performing these calculations and normalizing the results provides:

$$\mathbf{\hat{f}_{0:1}} = \begin{pmatrix} 0.8182 \\ 0.1818 \end{pmatrix},\quad \mathbf{\hat{f}_{0:2}} = \begin{pmatrix} 0.8834 \\ 0.1166 \end{pmatrix},\quad \mathbf{\hat{f}_{0:3}} = \begin{pmatrix} 0.1907 \\ 0.8093 \end{pmatrix},\quad \mathbf{\hat{f}_{0:4}} = \begin{pmatrix} 0.7308 \\ 0.2692 \end{pmatrix},\quad \mathbf{\hat{f}_{0:5}} = \begin{pmatrix} 0.8673 \\ 0.1327 \end{pmatrix}$$
For the backward probabilities, we start with:

$$\mathbf{b_{5:5}} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
We are then able to compute (using the observations in reverse order and normalizing with different constants):

$$\mathbf{\hat{b}_{4:5}} = \begin{pmatrix} 0.6273 \\ 0.3727 \end{pmatrix},\quad \mathbf{\hat{b}_{3:5}} = \begin{pmatrix} 0.6533 \\ 0.3467 \end{pmatrix},\quad \mathbf{\hat{b}_{2:5}} = \begin{pmatrix} 0.3763 \\ 0.6237 \end{pmatrix},\quad \mathbf{\hat{b}_{1:5}} = \begin{pmatrix} 0.5923 \\ 0.4077 \end{pmatrix},\quad \mathbf{\hat{b}_{0:5}} = \begin{pmatrix} 0.6469 \\ 0.3531 \end{pmatrix}$$
Finally, we will compute the smoothed probability values. These results must also be scaled so that their entries sum to 1, because we did not scale the backward probabilities with the $c_t$'s found earlier. The backward probability vectors above thus actually represent the likelihood of each state at time t given the future observations. Because these vectors are proportional to the actual backward probabilities, the result has to be scaled an additional time:

$$\mathbf{\gamma_0} = \begin{pmatrix} 0.6469 \\ 0.3531 \end{pmatrix},\quad \mathbf{\gamma_1} = \begin{pmatrix} 0.8673 \\ 0.1327 \end{pmatrix},\quad \mathbf{\gamma_2} = \begin{pmatrix} 0.8204 \\ 0.1796 \end{pmatrix},\quad \mathbf{\gamma_3} = \begin{pmatrix} 0.3075 \\ 0.6925 \end{pmatrix},\quad \mathbf{\gamma_4} = \begin{pmatrix} 0.8204 \\ 0.1796 \end{pmatrix},\quad \mathbf{\gamma_5} = \begin{pmatrix} 0.8673 \\ 0.1327 \end{pmatrix}$$
Notice that the value of $\mathbf{\gamma_0}$ is equal to $\mathbf{\hat{b}_{0:5}}$ and that $\mathbf{\gamma_5}$ is equal to $\mathbf{\hat{f}_{0:5}}$. This follows naturally because both $\mathbf{\hat{f}_{0:5}}$ and $\mathbf{\hat{b}_{0:5}}$ begin with uniform priors over the initial and final state vectors (respectively) and take into account all of the observations. However, $\mathbf{\gamma_0}$ will only be equal to $\mathbf{\hat{b}_{0:5}}$ when our initial state vector represents a uniform prior (i.e. all entries are equal). When this is not the case, $\mathbf{\hat{b}_{0:5}}$ needs to be combined with the initial state vector to find the most likely initial state. We thus find that the forward probabilities by themselves are sufficient to calculate the most likely final state. Similarly, the backward probabilities can be combined with the initial state vector to provide the most probable initial state given the observations. The forward and backward probabilities need only be combined to infer the most probable states between the initial and final points.
The calculations above reveal that the most probable weather state on every day except the third was "rain". They tell us more than this, however, as they now provide a way to quantify the probabilities of each state at different times. Perhaps most importantly, our value of $\mathbf{\gamma_5}$ quantifies our knowledge of the state vector at the end of the observation sequence. We can then use it to predict the probability of the various weather states tomorrow, as well as the probability of observing an umbrella.
The forward–backward algorithm runs in time $O(S^2 T)$ and space $O(ST)$, where $T$ is the length of the time sequence and $S$ is the number of symbols in the state alphabet.[1] The algorithm can also run in constant space with time complexity $O(S^2 T^2)$ by recomputing values at each step.[2] For comparison, a brute-force procedure would generate all $S^T$ possible state sequences and calculate the joint probability of each state sequence with the observed series of events, which would have time complexity $O(T \cdot S^T)$. Brute force is intractable for realistic problems, as the number of possible hidden node sequences is typically extremely high.

An enhancement to the general forward–backward algorithm, called the Island algorithm, trades smaller memory usage for longer running time, taking $O(S^2 T \log T)$ time and $O(S \log T)$ memory. Furthermore, it is possible to invert the process model to obtain an $O(S)$ space, $O(S^2 T)$ time algorithm, although the inverted process may not exist or may be ill-conditioned.[3]

In addition, algorithms have been developed to compute $\mathbf{f_{0:t+1}}$ efficiently through online smoothing, such as the fixed-lag smoothing (FLS) algorithm.[4]
Given an HMM (just as in the Viterbi algorithm) represented in the Python programming language:
We can write the implementation of the forward-backward algorithm like this:
The function fwd_bkw takes the following arguments: x is the sequence of observations, e.g. ['normal', 'cold', 'dizzy']; states is the set of hidden states; a_0 is the start probability; a are the transition probabilities; and e are the emission probabilities.

For simplicity of code, we assume that the observation sequence x is non-empty and that a[i][j] and e[i][j] are defined for all states i, j.
In the running example, the forward-backward algorithm is used as follows:
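The original code listings did not survive extraction. The following is a hedged reconstruction of a fwd_bkw function matching the argument names described above (x, states, a_0, a, e); the Healthy/Fever parameter values in the usage example are illustrative rather than taken from the article.

```python
def fwd_bkw(x, states, a_0, a, e):
    """Forward-backward sketch: returns forward messages, backward
    messages, and the smoothed posterior for each time step."""
    # Forward part of the algorithm
    fwd = []
    f_prev = {}
    for i, obs in enumerate(x):
        f_curr = {}
        for st in states:
            if i == 0:
                prev_sum = a_0[st]            # base case: start probability
            else:
                prev_sum = sum(f_prev[k] * a[k][st] for k in states)
            f_curr[st] = e[st][obs] * prev_sum
        fwd.append(f_curr)
        f_prev = f_curr

    # Backward part of the algorithm (iterate over observations in reverse)
    bkw = []
    b_prev = {}
    for i, obs_next in enumerate(reversed(list(x[1:]) + [None])):
        b_curr = {}
        for st in states:
            if i == 0:
                b_curr[st] = 1.0              # base case: no future evidence
            else:
                b_curr[st] = sum(a[st][k] * e[k][obs_next] * b_prev[k]
                                 for k in states)
        bkw.insert(0, b_curr)
        b_prev = b_curr

    # Merge the two parts: posterior is proportional to forward * backward
    p_fwd = sum(fwd[-1][st] for st in states)
    posterior = [{st: fwd[i][st] * bkw[i][st] / p_fwd for st in states}
                 for i in range(len(x))]
    return fwd, bkw, posterior

# Illustrative (hypothetical) model parameters:
states = ('Healthy', 'Fever')
a_0 = {'Healthy': 0.6, 'Fever': 0.4}
a = {'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
     'Fever':   {'Healthy': 0.4, 'Fever': 0.6}}
e = {'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
     'Fever':   {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6}}
fwd, bkw, posterior = fwd_bkw(['normal', 'cold', 'dizzy'], states, a_0, a, e)
```

By construction the posterior at every time step sums to 1, and with these illustrative numbers the final "dizzy" observation makes "Fever" the more probable state on the last day.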
|
https://en.wikipedia.org/wiki/Forward-backward_algorithm
|
Theforward algorithm, in the context of ahidden Markov model(HMM), is used to calculate a 'belief state': the probability of a state at a certain time, given the history of evidence. The process is also known asfiltering. The forward algorithm is closely related to, but distinct from, theViterbi algorithm.
The forward and backward algorithms should be placed within the context of probability as they appear to simply be names given to a set of standard mathematical procedures within a few fields. For example, neither "forward algorithm" nor "Viterbi" appear in the Cambridge encyclopedia of mathematics. The main observation to take away from these algorithms is how to organize Bayesian updates and inference to be computationally efficient in the context of directed graphs of variables (seesum-product networks).
For an HMM such as this one:
this probability is written as $p(x_t | y_{1:t})$. Here $x(t)$ is the hidden state, which is abbreviated as $x_t$, and $y_{1:t}$ are the observations $1$ to $t$.

The backward algorithm complements the forward algorithm by taking into account the future history if one wanted to improve the estimate for past times. This is referred to as smoothing, and the forward/backward algorithm computes $p(x_t | y_{1:T})$ for $1 < t < T$. Thus, the full forward/backward algorithm takes into account all evidence. Note that a belief state can be calculated at each time step, but doing this does not, in a strict sense, produce the most likely state sequence, but rather the most likely state at each time step, given the previous history. In order to achieve the most likely sequence, the Viterbi algorithm is required. It computes the most likely state sequence given the history of observations, that is, the state sequence that maximizes $p(x_{0:t} | y_{0:t})$.
The goal of the forward algorithm is to compute the joint probability $p(x_t, y_{1:t})$, where for notational convenience we have abbreviated $x(t)$ as $x_t$ and $(y(1), y(2), \ldots, y(t))$ as $y_{1:t}$. Once the joint probability $p(x_t, y_{1:t})$ is computed, the other probabilities $p(x_t | y_{1:t})$ and $p(y_{1:t})$ are easily obtained.

Both the state $x_t$ and the observation $y_t$ are assumed to be discrete, finite random variables. The hidden Markov model's state transition probabilities $p(x_t | x_{t-1})$, observation/emission probabilities $p(y_t | x_t)$, and initial prior probability $p(x_0)$ are assumed to be known. Furthermore, the sequence of observations $y_{1:t}$ is assumed to be given.

Computing $p(x_t, y_{1:t})$ naively would require marginalizing over all possible state sequences $\{x_{1:t-1}\}$, the number of which grows exponentially with $t$. Instead, the forward algorithm takes advantage of the conditional independence rules of the hidden Markov model (HMM) to perform the calculation recursively.
To demonstrate the recursion, let
Using the chain rule to expand $p(x_t, x_{t-1}, y_{1:t})$, we can then write

Because $y_t$ is conditionally independent of everything but $x_t$, and $x_t$ is conditionally independent of everything but $x_{t-1}$, this simplifies to

Thus, since $p(y_t | x_t)$ and $p(x_t | x_{t-1})$ are given by the model's emission distributions and transition probabilities, which are assumed to be known, one can quickly calculate $\alpha(x_t)$ from $\alpha(x_{t-1})$ and avoid incurring exponential computation time.

The recursion formula given above can be written in a more compact form. Let $a_{ij} = p(x_t = i | x_{t-1} = j)$ be the transition probabilities and $b_{ij} = p(y_t = i | x_t = j)$ be the emission probabilities; then

where $\mathbf{A} = [a_{ij}]$ is the transition probability matrix, $\mathbf{b}_t$ is the $i$-th row of the emission probability matrix $\mathbf{B} = [b_{ij}]$ which corresponds to the actual observation $y_t = i$ at time $t$, and $\mathbf{\alpha}_t = [\alpha(x_t = 1), \ldots, \alpha(x_t = n)]^T$ is the alpha vector. The $\odot$ is the Hadamard product between the transpose of $\mathbf{b}_t$ and $\mathbf{A}\mathbf{\alpha}_{t-1}$.
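The compact recursion — the observation row weighting the transition-propagated alpha vector elementwise — can be sketched in plain Python; the matrix values below are illustrative, not from the text.

```python
# Forward recursion alpha_t = b_t ⊙ (A alpha_{t-1}), written out in plain
# Python.  A[i][j] = p(x_t = i | x_{t-1} = j), so columns of A sum to 1;
# B[y][j] = p(y_t = y | x_t = j).  All numbers here are illustrative.
A = [[0.7, 0.4],
     [0.3, 0.6]]
B = [[0.9, 0.2],     # row for observation y = 0
     [0.1, 0.8]]     # row for observation y = 1

def forward(obs_seq, prior):
    alpha = list(prior)                       # alpha_0 = p(x_0)
    n = len(alpha)
    for y in obs_seq:
        # propagate through the transition model: A @ alpha
        propagated = [sum(A[i][j] * alpha[j] for j in range(n))
                      for i in range(n)]
        # Hadamard product with the observation row b_t = B[y]
        alpha = [B[y][i] * propagated[i] for i in range(n)]
    return alpha                              # unnormalized p(x_t, y_{1:t})

alpha = forward([0, 0, 1], [0.5, 0.5])
p_evidence = sum(alpha)                       # p(y_{1:t})
posterior = [a / p_evidence for a in alpha]   # p(x_t | y_{1:t})
```

Summing the final alpha vector gives the evidence probability $p(y_{1:t})$, and dividing by it yields the filtered posterior, exactly as described below.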
The initial condition is set in accordance with the prior probability over $x_0$ as

Once the joint probability $\alpha(x_t) = p(x_t, y_{1:t})$ has been computed using the forward algorithm, we can easily obtain the related joint probability $p(y_{1:t})$ as

and the required conditional probability $p(x_t | y_{1:t})$ as

Once the conditional probability has been calculated, we can also find the point estimate of $x_t$. For instance, the MAP estimate of $x_t$ is given by

while the MMSE estimate of $x_t$ is given by
The forward algorithm is easily modified to account for observations from variants of the hidden Markov model as well, such as theMarkov jump linear system.
This example concerns inferring the possible states of weather from the observed condition of seaweed. We have observations of seaweed for three consecutive days: dry, damp, and soggy, in that order. The possible states of weather can be sunny, cloudy, or rainy. In total, there can be $3^3 = 27$ such weather sequences. Exploring all such possible state sequences is computationally very expensive. To reduce this complexity, the forward algorithm comes in handy: the trick lies in using the conditional independence of the sequence steps to calculate partial probabilities, $\alpha(x_t) = p(x_t, y_{1:t}) = p(y_t | x_t) \sum_{x_{t-1}} p(x_t | x_{t-1}) \alpha(x_{t-1})$, as shown in the derivation above. Hence, we can calculate the probabilities as the product of the appropriate observation/emission probability $p(y_t | x_t)$ (the probability of observation $y_t$ being seen in a given state at time t) with the sum of the probabilities of reaching that state at time t, calculated using transition probabilities. This reduces the complexity of the problem from searching the whole search space to just using previously computed $\alpha$'s and transition probabilities.
The complexity of the forward algorithm is $\Theta(nm^2)$, where $m$ is the number of hidden or latent variables (like weather in the example above), and $n$ is the length of the sequence of the observed variable. This is a clear reduction from the ad hoc method of exploring all possible states, which has a complexity of $\Theta(nm^n)$.
The forward algorithm is one of the algorithms used to solve the decoding problem. Since the development of speech recognition[4] and pattern recognition and related fields like computational biology which use HMMs, the forward algorithm has gained popularity.
The forward algorithm is mostly used in applications that require determining the probability of being in a specific state given a known sequence of observations. The algorithm can be applied wherever we can train a model as we receive data, using Baum–Welch[5] or any general EM algorithm. The forward algorithm will then tell us about the probability of the data with respect to what is expected from our model. One application can be in the domain of finance, where it can help decide when to buy or sell tangible assets.

It can have applications in all fields where we apply hidden Markov models. Popular ones include natural language processing tasks like part-of-speech tagging and speech recognition.[4] Recently it has also been used in the domain of bioinformatics.

The forward algorithm can also be applied to weather prediction. We can have an HMM describing the weather and its relation to the state of observations for a few consecutive days (some examples could be dry, damp, soggy, sunny, cloudy, rainy, etc.). We can consider calculating the probability of observing any sequence of observations recursively given the HMM. We can then calculate the probability of reaching an intermediate state as the sum of all possible paths to that state. Thus the partial probabilities for the final observation will hold the probability of reaching those states going through all possible paths.
|
https://en.wikipedia.org/wiki/Forward_algorithm
|
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding[1][2][3] is a technique used for controlling errors in data transmission over unreliable or noisy communication channels.

The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code, or error correcting code (ECC).[4][5] The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors. Therefore, a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth.

The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code.[5]
FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in multicast.

Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used in modems and in cellular networks.
FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate a bit-error rate (BER) signal which can be used as feedback to fine-tune the analog receiving electronics.

FEC information is added to mass storage (magnetic, optical and solid state/flash based) devices to enable recovery of corrupted data, and is used as ECC computer memory on systems that require special provisions for reliability.
The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective signal-to-noise ratio. The noisy-channel coding theorem of Claude Shannon can be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight into how to build a capacity-achieving code. After years of research, some advanced FEC systems like polar codes[3] come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite-length frame.
ECC is accomplished by adding redundancy to the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are non-systematic.

A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1) repetition code. Through a noisy channel, a receiver might see eight versions of the output; see the table below.
This allows an error in any one of the three samples to be corrected by "majority vote", or "democratic voting". The correcting ability of this ECC is:
Though simple to implement and widely used, this triple modular redundancy is a relatively inefficient ECC. Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits).
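The (3,1) repetition code with majority-vote decoding is small enough to sketch directly; this is an illustrative implementation, not standard library code.

```python
# (3,1) repetition code: each data bit is sent three times, and the
# receiver takes a majority vote over each triplet.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        triplet = received[i:i + 3]
        out.append(1 if sum(triplet) >= 2 else 0)  # majority vote
    return out

codeword = encode([1, 0])           # [1, 1, 1, 0, 0, 0]
corrupted = [1, 0, 1, 0, 1, 0]      # one flipped bit in each triplet
assert decode(corrupted) == [1, 0]  # single errors per triplet are corrected
```

Flipping any one bit per triplet is always corrected; two flips in the same triplet defeat the vote, which is exactly the correcting ability stated above.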
ECC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data.
Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse.

However, some systems adapt to the given channel error conditions: some instances of hybrid automatic repeat-request use a fixed ECC method as long as the ECC can handle the error rate, then switch to ARQ when the error rate gets too high; adaptive modulation and coding uses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed.

The two main categories of ECC codes are block codes and convolutional codes.

There are many types of block codes; Reed–Solomon coding is noteworthy for its widespread use in compact discs, DVDs, and hard disk drives. Other examples of classical block codes include Golay, BCH, multidimensional parity, and Hamming codes.
Hamming ECC is commonly used to correct NAND flash memory errors.[6] This provides single-bit error correction and 2-bit error detection. Hamming codes are only suitable for the more reliable single-level cell (SLC) NAND. Denser multi-level cell (MLC) NAND may use multi-bit correcting ECC such as BCH or Reed–Solomon.[7][8] NOR flash typically does not use any error correction.[7]

Classical block codes are usually decoded using hard-decision algorithms,[9] which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using soft-decision algorithms like the Viterbi, MAP or BCJR algorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding.

Nearly all classical block codes apply the algebraic properties of finite fields. Hence classical block codes are often referred to as algebraic codes.

In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such as LDPC codes lack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates.

Most forward error correction codes correct only bit-flips, but not bit-insertions or bit-deletions. In this setting, the Hamming distance is the appropriate way to measure the bit error rate. A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as marker codes and watermark codes. The Levenshtein distance is a more appropriate way to measure the bit error rate when using such codes.[10]
The fundamental principle of ECC is to add redundant bits in order to help the decoder to find out the true message that was encoded by the transmitter. The code-rate of a given ECC system is defined as the ratio between the number of information bits and the total number of bits (i.e., information plus redundancy bits) in a given communication package. The code-rate is hence a real number. A low code-rate close to zero implies a strong code that uses many redundant bits to achieve a good performance, while a large code-rate close to 1 implies a weak code.
The redundant bits that protect the information have to be transferred using the same communication resources that they are trying to protect. This causes a fundamental tradeoff between reliability and data rate.[11]In one extreme, a strong code (with low code-rate) can induce an important increase in the receiver SNR (signal-to-noise-ratio) decreasing the bit error rate, at the cost of reducing the effective data rate. On the other extreme, not using any ECC (i.e., a code-rate equal to 1) uses the full channel for information transfer purposes, at the cost of leaving the bits without any additional protection.
One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero.[12] His proof relies on Gaussian random coding, which is not suitable for real-world applications. The upper bound given by Shannon's work inspired a long journey in designing ECCs that can come close to the ultimate performance boundary. Various codes today can attain almost the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement.

The most popular ECCs trade off performance against computational complexity. Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Usually, this optimization is done in order to achieve a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate and the number of retransmissions against the energy cost of the communication.[13]
Classical (algebraic) block codes and convolutional codes are frequently combined in concatenated coding schemes in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed–Solomon) with larger symbol size and block length "mops up" any errors made by the convolutional decoder. Single-pass decoding with this family of error correction codes can yield very low error rates, but for long-range transmission conditions (like deep space) iterative decoding is recommended.

Concatenated codes have been standard practice in satellite and deep space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. The Galileo craft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna.
Low-density parity-check (LDPC) codes are a class of highly efficient linear block codes made from many single parity check (SPC) codes. They can provide performance very close to the channel capacity (the theoretical maximum) using an iterated soft-decision decoding approach, at linear time complexity in terms of their block length. Practical implementations rely heavily on decoding the constituent SPC codes in parallel.

LDPC codes were first introduced by Robert G. Gallager in his PhD thesis in 1960, but due to the computational effort of implementing encoder and decoder and the introduction of Reed–Solomon codes, they were mostly ignored until the 1990s.
LDPC codes are now used in many recent high-speed communication standards, such as DVB-S2 (Digital Video Broadcasting – Satellite – Second Generation), WiMAX (IEEE 802.16e standard for microwave communications), High-Speed Wireless LAN (IEEE 802.11n),[14] 10GBase-T Ethernet (802.3an) and G.hn/G.9960 (ITU-T standard for networking over power lines, phone lines and coaxial cable). Other LDPC codes are standardized for wireless communication standards within 3GPP MBMS (see fountain codes).

Turbo coding is an iterated soft-decoding scheme that combines two or more relatively simple convolutional codes and an interleaver to produce a block code that can perform to within a fraction of a decibel of the Shannon limit. Predating LDPC codes in terms of practical application, they now provide similar performance.

One of the earliest commercial applications of turbo coding was the CDMA2000 1x (TIA IS-2000) digital cellular technology developed by Qualcomm and sold by Verizon Wireless, Sprint, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access, 1xEV-DO (TIA IS-856). Like 1x, EV-DO was developed by Qualcomm, and is sold by Verizon Wireless, Sprint, and other carriers (Verizon's marketing name for 1xEV-DO is Broadband Access; Sprint's consumer and business marketing names for 1xEV-DO are Power Vision and Mobile Broadband, respectively).

Sometimes it is only necessary to decode single bits of the message, or to check whether a given signal is a codeword, and do so without looking at the entire signal. This can make sense in a streaming setting, where codewords are too large to be classically decoded fast enough and where only a few bits of the message are of interest for now. Also, such codes have become an important tool in computational complexity theory, e.g., for the design of probabilistically checkable proofs.

Locally decodable codes are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. Locally testable codes are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal.
Not all locally decodable codes (LDCs) are locally testable codes (LTCs),[15] nor are they all locally correctable codes (LCCs):[16] q-query LCCs are bounded exponentially,[17][18] while LDCs can have subexponential lengths.[19][20]
Interleaving is frequently used in digital communication and storage systems to improve the performance of forward error correcting codes. Many communication channels are not memoryless: errors typically occur in bursts rather than independently. If the number of errors within a code word exceeds the error-correcting code's capability, it fails to recover the original code word. Interleaving alleviates this problem by shuffling source symbols across several code words, thereby creating a more uniform distribution of errors.[21] Therefore, interleaving is widely used for burst error-correction.

The analysis of modern iterated codes, like turbo codes and LDPC codes, typically assumes an independent distribution of errors.[22] Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word.[23]

For turbo codes, an interleaver is an integral component and its proper design is crucial for good performance.[21][24] The iterative decoding algorithm works best when there are no short cycles in the factor graph that represents the decoder; the interleaver is chosen to avoid short cycles.
Interleaver designs include:
In multi-carriercommunication systems, interleaving across carriers may be employed to provide frequencydiversity, e.g., to mitigatefrequency-selective fadingor narrowband interference.[28]
Transmission without interleaving:
Here, each group of the same letter represents a 4-bit one-bit error-correcting codeword. The codeword cccc is altered in one bit and can be corrected, but the codeword dddd is altered in three bits, so either it cannot be decoded at all or it might be decoded incorrectly.
With interleaving:
In each of the codewords "aaaa", "eeee", "ffff", and "gggg", only one bit is altered, so one-bit error-correcting code will decode everything correctly.
Transmission without interleaving:
The term "AnExample" ends up mostly unintelligible and difficult to correct.
With interleaving:
No word is completely lost and the missing letters can be recovered with minimal guesswork.
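A simple block interleaver of the kind used in these examples can be sketched as follows (an illustrative implementation: symbols are written into a matrix row by row and read out column by column, so a burst of channel errors is spread across several codewords).

```python
# Block interleaver: write row-by-row into a rows x cols matrix,
# read out column-by-column.
def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    matrix = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # Reading column-major is a transpose; transposing twice restores order.
    return interleave(symbols, cols, rows)

data = list("aaaabbbbccccdddd")   # four 4-symbol codewords
sent = interleave(data, 4, 4)     # "abcdabcdabcdabcd": a 4-symbol burst
                                  # now hits each codeword only once
assert deinterleave(sent, 4, 4) == data
```

With this arrangement, any burst of up to four consecutive channel errors corrupts at most one symbol per codeword, which a one-symbol-correcting code can repair.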
Use of interleaving techniques increases the total delay, because the entire interleaved block must be received before the packets can be decoded.[29] Interleavers also hide the structure of errors; without an interleaver, more advanced decoding algorithms can take advantage of the error structure and achieve more reliable communication than a simpler decoder combined with an interleaver[citation needed]. An example of such an algorithm is based on neural network[30] structures.

Simulating the behaviour of error-correcting codes (ECCs) in software is a common practice to design, validate and improve ECCs. The upcoming wireless 5G standard raises a new range of applications for software ECCs: Cloud Radio Access Networks (C-RAN) in a software-defined radio (SDR) context. The idea is to use software ECCs directly in the communications: for instance, in 5G the software ECCs could be located in the cloud and the antennas connected to these computing resources, thereby improving the flexibility of the communication network and eventually increasing the energy efficiency of the system.

In this context, there is various open-source software available, listed below (non-exhaustive).
|
https://en.wikipedia.org/wiki/Error-correcting_code
|
A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using a convolutional code or trellis code.

There are other algorithms for decoding a convolutionally encoded stream (for example, the Fano algorithm). The Viterbi algorithm is the most resource-consuming, but it performs maximum likelihood decoding. It is most often used for decoding convolutional codes with constraint lengths k≤3, but values up to k=15 are used in practice.
Viterbi decoding was developed by Andrew J. Viterbi and published in the paper Viterbi, A. (April 1967). "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm". IEEE Transactions on Information Theory. 13 (2): 260–269. doi:10.1109/tit.1967.1054010.
There are both hardware (in modems) and software implementations of a Viterbi decoder.
Viterbi decoding is used in the iterative Viterbi decoding algorithm.
A hardware Viterbi decoder for basic (not punctured) code usually consists of the following major blocks:
A branch metric unit's function is to calculate branch metrics, which are normed distances between every possible symbol in the code alphabet and the received symbol.

There are hard-decision and soft-decision Viterbi decoders. A hard-decision Viterbi decoder receives a simple bitstream on its input, and a Hamming distance is used as a metric. A soft-decision Viterbi decoder receives a bitstream containing information about the reliability of each received symbol. For instance, in a 3-bit encoding, this reliability information can be encoded as follows:

Of course, this is not the only way to encode reliability data.
The squared Euclidean distance is used as a metric for soft-decision decoders.
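The two branch metrics can be sketched side by side (an illustrative sketch; the antipodal ±1 symbol convention follows the soft-decision example later in this section).

```python
# Branch metrics for a Viterbi decoder: Hamming distance for hard
# decisions, squared Euclidean distance for soft decisions.
def hamming_branch_metric(received_bits, expected_bits):
    return sum(r != e for r, e in zip(received_bits, expected_bits))

def euclidean_branch_metric(received_soft, expected_symbols):
    # expected_symbols are antipodal code symbols drawn from {+1, -1}
    return sum((r - e) ** 2 for r, e in zip(received_soft, expected_symbols))

# Hard decision: received (1, 0) differs from expected (1, 1) in one bit.
assert hamming_branch_metric([1, 0], [1, 1]) == 1

# Soft decision: (0.8, -0.9) is reliably close to the symbol (+1, -1),
# so its metric beats the competing symbol (+1, +1).
assert euclidean_branch_metric([0.8, -0.9], [1, -1]) < \
       euclidean_branch_metric([0.8, -0.9], [1, 1])
```

The soft metric preserves the reliability magnitudes that a hard slicer would discard, which is why soft-decision decoding outperforms hard-decision decoding.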
A path metric unit summarizes branch metrics to get metrics for $2^{K-1}$ paths, where $K$ is the constraint length of the code, one of which can eventually be chosen as optimal. Every clock cycle it makes $2^{K-1}$ decisions, discarding the knowingly non-optimal paths. The results of these decisions are written to the memory of a traceback unit.

The core elements of a PMU are ACS (Add-Compare-Select) units. The way in which they are connected between themselves is defined by a specific code's trellis diagram.

Since branch metrics are always $\geq 0$, there must be an additional circuit (not shown in the image) preventing the metric counters from overflowing. An alternate method that eliminates the need to monitor the path metric growth is to allow the path metrics to "roll over"; to use this method it is necessary to make sure the path metric accumulators contain enough bits to prevent the "best" and "worst" values from coming within $2^{n-1}$ of each other. The compare circuit is essentially unchanged.
It is possible to monitor the noise level on the incoming bit stream by monitoring the rate of growth of the "best" path metric. A simpler way to do this is to monitor a single location or "state" and watch it pass "upward" through say four discrete levels within the range of the accumulator. As it passes upward through each of these thresholds, a counter is incremented that reflects the "noise" present on the incoming signal.
The traceback unit restores an (almost) maximum-likelihood path from the decisions made by the PMU. Since it does so in the reverse direction, a Viterbi decoder includes a FILO (first-in, last-out) buffer to reconstruct the correct order.
Note that the implementation shown in the image requires a doubled clock frequency. There are some tricks that eliminate this requirement.
To fully exploit the benefits of soft decision decoding, the input signal must be quantized properly. The optimal quantization zone width is defined by the following formula:
where N0 is the noise power spectral density, and k is the number of bits used for the soft decision.
The squared norm (ℓ2) distance between the received symbol and a symbol in the code alphabet may be further simplified into a linear sum/difference form, which makes it less computationally intensive.
Consider a rate 1/2 convolutional code, which generates 2 bits (00, 01, 10 or 11) for every input bit (1 or 0). These return-to-zero signals are translated into a non-return-to-zero form shown alongside.
Each received symbol may be represented in vector form as vr = {r0, r1}, where r0 and r1 are soft decision values whose magnitudes signify the joint reliability of the received vector, vr.
Every symbol in the code alphabet may, likewise, be represented by the vector vi = {±1, ±1}.
The actual computation of the Euclidean distance metric is:
Each square term is a normed distance, depicting the energy of the symbol. For example, the energy of the symbol vi = {±1, ±1} may be computed as
Thus, the energy term of all symbols in the code alphabet is constant (at (normalized) value 2).
The Add-Compare-Select (ACS) operation compares the metric distance between the received symbol ||vr|| and any 2 symbols in the code alphabet whose paths merge at a node in the corresponding trellis, ||vi(0)|| and ||vi(1)||. This is equivalent to comparing
and
But, from above we know that the energy of vi is constant (equal to the (normalized) value of 2), and the energy of vr is the same in both cases. This reduces the comparison to a minima function between the 2 (middle) dot product terms,
since a min operation on negative numbers may be interpreted as an equivalent max operation on positive quantities.
Each dot product term may be expanded as
where the signs of each term depend on the symbols, vi(0) and vi(1), being compared. Thus, the squared Euclidean metric distance calculation to compute the branch metric may be performed with a simple add/subtract operation.
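The equivalence can be checked numerically. A minimal sketch with illustrative received values (not from the article's example):

```python
# With code symbols vi drawn from {±1, ±1}, every ||vi||^2 equals 2, so
# minimizing the squared Euclidean distance ||vr - vi||^2 over the alphabet
# is equivalent to maximizing the dot product <vr, vi>.

def sq_dist(vr, vi):
    return sum((r - s) ** 2 for r, s in zip(vr, vi))

def dot(vr, vi):
    return sum(r * s for r, s in zip(vr, vi))

vr = (0.9, -0.4)                                 # received soft values
alphabet = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

closest = min(alphabet, key=lambda vi: sq_dist(vr, vi))    # Euclidean criterion
likeliest = max(alphabet, key=lambda vi: dot(vr, vi))      # dot-product criterion
print(closest, likeliest)   # both select (1, -1)
```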
The general approach to traceback is to accumulate path metrics for up to five times the constraint length (5(K − 1)), find the node with the largest accumulated cost, and begin traceback from this node.
The commonly used rule of thumb of a truncation depth of five times the memory (constraint length K − 1) of a convolutional code is accurate only for rate 1/2 codes. For an arbitrary rate, an accurate rule of thumb is 2.5(K − 1)/(1 − r), where r is the code rate.[1]
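For a rate-1/2 code, the rate-aware rule of thumb reduces to the familiar "five times the memory" figure, as this short sketch shows:

```python
def truncation_depth(K, r):
    """Rule-of-thumb traceback depth: 2.5 * (K - 1) / (1 - r)."""
    return 2.5 * (K - 1) / (1 - r)

print(truncation_depth(7, 1 / 2))   # 30.0, i.e. 5 * (K - 1) for a rate-1/2 code
print(truncation_depth(7, 3 / 4))   # 60.0: higher-rate codes need deeper traceback
```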
However, computing the node which has accumulated the largest cost (either the largest or smallest integral path metric) involves finding the maxima or minima of several (usually 2^(K−1)) numbers, which may be time consuming when implemented on embedded hardware systems.
Most communication systems employ Viterbi decoding involving data packets of fixed sizes, with a fixed bit/byte pattern at the beginning and/or the end of the data packet. By using the known bit/byte pattern as reference, the start node may be set to a fixed value, thereby obtaining a perfect maximum-likelihood path during traceback.
A physical implementation of a Viterbi decoder will not yield an exact maximum-likelihood stream due to quantization of the input signal, branch and path metrics, and finite traceback length. Practical implementations do approach within 1 dB of the ideal.
The output of a Viterbi decoder, when decoding a message damaged by an additive Gaussian channel, has errors grouped in error bursts.[2][3] Single-error-correcting codes alone cannot correct such bursts, so either the convolutional code and the Viterbi decoder must be designed powerful enough to drive errors down to an acceptable rate, or burst error-correcting codes must be used.
A hardware Viterbi decoder for punctured codes is commonly implemented as follows:
One of the most time-consuming operations is the ACS butterfly, which is usually implemented using assembly language and appropriate instruction set extensions (such as SSE2) to speed up the decoding time.
The Viterbi decoding algorithm is widely used in the following areas:
|
https://en.wikipedia.org/wiki/Viterbi_decoder
|
A* (pronounced "A-star") is a graph traversal and pathfinding algorithm that is used in many fields of computer science due to its completeness, optimality, and optimal efficiency.[1] Given a weighted graph, a source node and a goal node, the algorithm finds the shortest path (with respect to the given weights) from source to goal.
One major practical drawback is its O(b^d) space complexity, where d is the depth of the solution (the length of the shortest path) and b is the branching factor (the maximum number of successors for a state), as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms that can pre-process the graph to attain better performance,[2] as well as by memory-bounded approaches; however, A* is still the best solution in many cases.[3]
Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first published the algorithm in 1968.[4] It can be seen as an extension of Dijkstra's algorithm. A* achieves better performance by using heuristics to guide its search.
Compared to Dijkstra's algorithm, the A* algorithm only finds the shortest path from a specified source to a specified goal, and not the shortest-path tree from a specified source to all possible goals. This is a necessary trade-off for using a specific-goal-directed heuristic. For Dijkstra's algorithm, since the entire shortest-path tree is generated, every node is a goal, and there can be no specific-goal-directed heuristic.
A* was created as part of the Shakey project, which had the aim of building a mobile robot that could plan its own actions. Nils Nilsson originally proposed using the Graph Traverser algorithm[5] for Shakey's path planning.[6] Graph Traverser is guided by a heuristic function h(n), the estimated distance from node n to the goal node: it entirely ignores g(n), the distance from the start node to n. Bertram Raphael suggested using the sum, g(n) + h(n).[7] Peter Hart invented the concepts we now call admissibility and consistency of heuristic functions. A* was originally designed for finding least-cost paths when the cost of a path is the sum of its edge costs, but it has been shown that A* can be used to find optimal paths for any problem satisfying the conditions of a cost algebra.[8]
The original 1968 A* paper[4] contained a theorem stating that no A*-like algorithm[a] could expand fewer nodes than A* if the heuristic function is consistent and A*'s tie-breaking rule is suitably chosen. A "correction" was published a few years later[9] claiming that consistency was not required, but this was shown to be false in 1985 in Dechter and Pearl's definitive study of A*'s optimality (now called optimal efficiency), which gave an example of A* with a heuristic that was admissible but not consistent expanding arbitrarily more nodes than an alternative A*-like algorithm.[10]
A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). It does this by maintaining a tree of paths originating at the start node and extending those paths one edge at a time until the goal node is reached.
At each iteration of its main loop, A* needs to determine which of its paths to extend. It does so based on the cost of the path and an estimate of the cost required to extend the path all the way to the goal. Specifically, A* selects the path that minimizes
where n is the next node on the path, g(n) is the cost of the path from the start node to n, and h(n) is a heuristic function that estimates the cost of the cheapest path from n to the goal. The heuristic function is problem-specific. If the heuristic function is admissible – meaning that it never overestimates the actual cost to get to the goal – A* is guaranteed to return a least-cost path from start to goal.
Typical implementations of A* use a priority queue to perform the repeated selection of minimum (estimated) cost nodes to expand. This priority queue is known as the open set, fringe or frontier. At each step of the algorithm, the node with the lowest f(x) value is removed from the queue, the f and g values of its neighbors are updated accordingly, and these neighbors are added to the queue. The algorithm continues until a removed node (thus the node with the lowest f value out of all fringe nodes) is a goal node.[b] The f value of that goal is then also the cost of the shortest path, since h at the goal is zero in an admissible heuristic.
The algorithm described so far only gives the length of the shortest path. To find the actual sequence of steps, the algorithm can be easily revised so that each node on the path keeps track of its predecessor. After this algorithm is run, the ending node will point to its predecessor, and so on, until some node's predecessor is the start node.
As an example, when searching for the shortest route on a map, h(x) might represent the straight-line distance to the goal, since that is physically the smallest possible distance between any two points. For a grid map from a video game, using the taxicab distance or the Chebyshev distance is better, depending on the set of movements available (4-way or 8-way).
If the heuristic h satisfies the additional condition h(x) ≤ d(x, y) + h(y) for every edge (x, y) of the graph (where d denotes the length of that edge), then h is called monotone, or consistent. With a consistent heuristic, A* is guaranteed to find an optimal path without processing any node more than once, and A* is equivalent to running Dijkstra's algorithm with the reduced cost d'(x, y) = d(x, y) + h(y) − h(x).[11]
The following pseudocode describes the algorithm:
Remark: In this pseudocode, if a node is reached by one path, removed from openSet, and subsequently reached by a cheaper path, it will be added to openSet again. This is essential to guarantee that the path returned is optimal if the heuristic function is admissible but not consistent. If the heuristic is consistent, when a node is removed from openSet the path to it is guaranteed to be optimal, so the test 'tentative_gScore < gScore[neighbor]' will always fail if the node is reached again. The pseudocode implemented here is sometimes called the graph-search version of A*.[12] This is in contrast with the version without the 'tentative_gScore < gScore[neighbor]' test to add nodes back to openSet, which is sometimes called the tree-search version of A* and requires a consistent heuristic to guarantee optimality.
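The graph-search version described in the remark can be sketched in Python. This is a minimal illustration, not a tuned implementation; `neighbors` and `h` are caller-supplied, and the toy graph below is invented for the example:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Graph-search A*: neighbors(n) yields (neighbor, edge_cost) pairs,
    h(n) is the heuristic estimate. Returns (path, cost) or (None, inf)."""
    open_set = [(h(start), start)]        # priority queue ordered by f = g + h
    came_from = {}
    g_score = {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:               # goal removed from the queue: done
            path = [current]
            while current in came_from:   # walk predecessor links backwards
                current = came_from[current]
                path.append(current)
            return path[::-1], g_score[goal]
        for nbr, cost in neighbors(current):
            tentative_g = g_score[current] + cost
            if tentative_g < g_score.get(nbr, float("inf")):
                came_from[nbr] = current  # record predecessor for path recovery
                g_score[nbr] = tentative_g
                heapq.heappush(open_set, (tentative_g + h(nbr), nbr))
    return None, float("inf")

# Toy weighted graph; h = 0 makes this behave like Dijkstra's algorithm.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
path, cost = a_star("A", "D", lambda n: graph[n], lambda n: 0)
print(path, cost)   # ['A', 'B', 'C', 'D'] 4
```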
An example of an A* algorithm in action where nodes are cities connected with roads and h(x) is the straight-line distance to the target point:
Key:green: start; blue: goal; orange: visited
The A* algorithm has real-world applications. In this example, edges are railroads and h(x) is the great-circle distance (the shortest possible distance on a sphere) to the target. The algorithm is searching for a path between Washington, D.C., and Los Angeles.
There are a number of simple optimizations or implementation details that can significantly affect the performance of an A* implementation. The first detail to note is that the way the priority queue handles ties can have a significant effect on performance in some situations. If ties are broken so the queue behaves in a LIFO manner, A* will behave like depth-first search among equal-cost paths (avoiding exploring more than one equally optimal solution).
When a path is required at the end of the search, it is common to keep with each node a reference to that node's parent. At the end of the search, these references can be used to recover the optimal path. If these references are being kept then it can be important that the same node doesn't appear in the priority queue more than once (each entry corresponding to a different path to the node, and each with a different cost). A standard approach here is to check if a node about to be added already appears in the priority queue. If it does, then the priority and parent pointers are changed to correspond to the lower-cost path. A standard binary heap based priority queue does not directly support the operation of searching for one of its elements, but it can be augmented with a hash table that maps elements to their position in the heap, allowing this decrease-priority operation to be performed in logarithmic time. Alternatively, a Fibonacci heap can perform the same decrease-priority operations in constant amortized time.
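In languages whose standard heap lacks a decrease-priority operation (Python's heapq, for example), a common alternative to the hash-table-augmented heap is lazy deletion: push a new entry when a cheaper path is found and silently skip stale entries on pop. A sketch:

```python
import heapq

heap, best = [], {}   # best maps node -> its current (lowest) priority

def push(node, priority):
    best[node] = priority
    heapq.heappush(heap, (priority, node))   # may leave a stale duplicate behind

def pop():
    while heap:
        priority, node = heapq.heappop(heap)
        if best.get(node) == priority:       # skip entries superseded by a cheaper push
            return node, priority
    return None

push("x", 5)
push("x", 3)     # emulates decrease-priority; the (5, "x") entry is now stale
print(pop())     # ('x', 3)
print(pop())     # None: the stale (5, "x") entry is discarded, not returned
```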
Dijkstra's algorithm, as another example of a uniform-cost search algorithm, can be viewed as a special case of A* where h(x) = 0 for all x.[13][14] General depth-first search can be implemented using A* by considering that there is a global counter C initialized with a very large value. Every time we process a node we assign C to all of its newly discovered neighbors. After every single assignment, we decrease the counter C by one. Thus the earlier a node is discovered, the higher its h(x) value. Both Dijkstra's algorithm and depth-first search can be implemented more efficiently without including an h(x) value at each node.
On finite graphs with non-negative edge weights A* is guaranteed to terminate and is complete, i.e. it will always find a solution (a path from start to goal) if one exists. On infinite graphs with a finite branching factor and edge costs that are bounded away from zero (d(x, y) > ε > 0 for some fixed ε), A* is guaranteed to terminate only if there exists a solution.[1]
A search algorithm is said to be admissible if it is guaranteed to return an optimal solution. If the heuristic function used by A* is admissible, then A* is admissible. An intuitive "proof" of this is as follows:
Call a node closed if it has been visited and is not in the open set. We close a node when we remove it from the open set. A basic property of the A* algorithm, which we'll sketch a proof of below, is that when n is closed, f(n) is an optimistic estimate (lower bound) of the true distance from the start to the goal. So when the goal node, g, is closed, f(g) is no more than the true distance. On the other hand, it is no less than the true distance, since it is the length of a path to the goal plus a heuristic term.
Now we'll see that whenever a node n is closed, f(n) is an optimistic estimate. It is enough to see that whenever the open set is not empty, it has at least one node n on an optimal path to the goal for which g(n) is the true distance from the start, since in that case g(n) + h(n) underestimates the distance to the goal, and therefore so does the smaller value chosen for the closed vertex. Let P be an optimal path from the start to the goal. Let p be the last closed node on P for which g(p) is the true distance from the start (the start is one such vertex). The next node in P has the correct g value, since it was updated when p was closed, and it is open since it is not closed.
Algorithm A is optimally efficient with respect to a set of alternative algorithms Alts on a set of problems P if for every problem P in P and every algorithm A′ in Alts, the set of nodes expanded by A in solving P is a subset (possibly equal) of the set of nodes expanded by A′ in solving P. The definitive study of the optimal efficiency of A* is due to Rina Dechter and Judea Pearl.[10] They considered a variety of definitions of Alts and P in combination with A*'s heuristic being merely admissible or being both consistent and admissible. The most interesting positive result they proved is that A*, with a consistent heuristic, is optimally efficient with respect to all admissible A*-like search algorithms on all "non-pathological" search problems. Roughly speaking, their notion of the non-pathological problem is what we now mean by "up to tie-breaking". This result does not hold if A*'s heuristic is admissible but not consistent. In that case, Dechter and Pearl showed there exist admissible A*-like algorithms that can expand arbitrarily fewer nodes than A* on some non-pathological problems.
Optimal efficiency is about the set of nodes expanded, not the number of node expansions (the number of iterations of A*'s main loop). When the heuristic being used is admissible but not consistent, it is possible for a node to be expanded by A* many times, an exponential number of times in the worst case.[15] In such circumstances, Dijkstra's algorithm could outperform A* by a large margin. However, more recent research found that this pathological case only occurs in certain contrived situations where the edge weight of the search graph is exponential in the size of the graph and that certain inconsistent (but admissible) heuristics can lead to a reduced number of node expansions in A* searches.[16][17]
While the admissibility criterion guarantees an optimal solution path, it also means that A* must examine all equally meritorious paths to find the optimal path. To compute approximate shortest paths, it is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion. Oftentimes we want to bound this relaxation, so that we can guarantee that the solution path is no worse than (1 + ε) times the optimal solution path. This new guarantee is referred to as ε-admissible.
There are a number of ε-admissible algorithms:
As a heuristic search algorithm, the performance of A* is heavily influenced by the quality of the heuristic function h(n). If the heuristic closely approximates the true cost to the goal, A* can significantly reduce the number of node expansions. On the other hand, a poor heuristic can lead to many unnecessary expansions.
In the worst case, A* expands all nodes n for which f(n) = g(n) + h(n) ≤ C*, where C* is the cost of the optimal goal node.
Suppose there is a node N′ in the open list with f(N′) > C*, and it is the next node to be expanded. Since the goal node has f(goal) = g(goal) + h(goal) = g(goal) = C*, and f(N′) > C*, the goal node will have a lower f-value and will be expanded before N′. Therefore, A* never expands nodes with f(n) > C*.
Assume there exists an optimal algorithm that expands fewer nodes than A* in the worst case using the same heuristic. That means there must be some node N′ such that f(N′) < C*, yet the algorithm chooses not to expand it.
Now consider a modified graph where a new edge of cost ε (with ε > 0) is added from N′ to the goal. If f(N′) + ε < C*, then the new optimal path goes through N′. However, since the algorithm still avoids expanding N′, it will miss the new optimal path, violating its optimality.
Therefore, in the worst case, no optimal algorithm, including A*, can avoid expanding the nodes with f(n) < C*.
The worst-case complexity of A* is often described as O(b^d), where b is the branching factor and d is the depth of the shallowest goal. While this gives a rough intuition, it does not precisely capture the actual behavior of A*.
A more accurate bound considers the number of nodes with f(n) ≤ C*. If ε is the smallest possible difference in f-cost between distinct nodes, then A* may expand up to:
This represents both the time and space complexity in the worst case.
The space complexity of A* is roughly the same as that of all other graph search algorithms, as it keeps all generated nodes in memory.[1] In practice, this turns out to be the biggest drawback of the A* search, leading to the development of memory-bounded heuristic searches, such as Iterative Deepening A*, memory-bounded A*, and SMA*.
A* is often used for the common pathfinding problem in applications such as video games, but was originally designed as a general graph traversal algorithm.[4] It finds applications in diverse problems, including the problem of parsing using stochastic grammars in NLP.[27] Other cases include an informational search with online learning.[28]
What sets A* apart from a greedy best-first search algorithm is that it takes the cost/distance already traveled, g(n), into account.
Some common variants of Dijkstra's algorithm can be viewed as a special case of A* where the heuristic h(n) = 0 for all nodes;[13][14] in turn, both Dijkstra and A* are special cases of dynamic programming.[29] A* itself is a special case of a generalization of branch and bound.[30]
A* is similar to beam search except that beam search maintains a limit on the number of paths that it has to explore.[31]
A* can also be adapted to a bidirectional search algorithm, but special care needs to be taken for the stopping criterion.[35]
|
https://en.wikipedia.org/wiki/A*_search_algorithm
|
In the mathematical discipline of numerical linear algebra, a matrix splitting is an expression which represents a given matrix as a sum or difference of matrices. Many iterative methods (for example, for systems of differential equations) depend upon the direct solution of matrix equations involving matrices more general than tridiagonal matrices. These matrix equations can often be solved directly and efficiently when written as a matrix splitting. The technique was devised by Richard S. Varga in 1960.[1]
We seek to solve the matrix equation
where A is a given n × n non-singular matrix, and k is a given column vector with n components. We split the matrix A into
where B and C are n × n matrices. If, for an arbitrary n × n matrix M, M has nonnegative entries, we write M ≥ 0. If M has only positive entries, we write M > 0. Similarly, if the matrix M1 − M2 has nonnegative entries, we write M1 ≥ M2.
Definition: A = B − C is a regular splitting of A if B−1 ≥ 0 and C ≥ 0.
We assume that matrix equations of the form
where g is a given column vector, can be solved directly for the vector x. If (2) represents a regular splitting of A, then the iterative method
where x(0) is an arbitrary vector, can be carried out. Equivalently, we write (4) in the form
The matrix D = B−1C has nonnegative entries if (2) represents a regular splitting of A.[2]
It can be shown that if A−1 > 0, then ρ(D) < 1, where ρ(D) represents the spectral radius of D, and thus D is a convergent matrix. As a consequence, the iterative method (5) is necessarily convergent.[3][4]
If, in addition, the splitting (2) is chosen so that the matrix B is a diagonal matrix (with the diagonal entries all non-zero, since B must be invertible), then B can be inverted in linear time (see Time complexity).
Many iterative methods can be described as a matrix splitting. If the diagonal entries of the matrix A are all nonzero, and we express the matrix A as the matrix sum
where D is the diagonal part of A, and U and L are respectively strictly upper and lower triangular n × n matrices, then we have the following.
The Jacobi method can be represented in matrix form as a splitting
The Gauss–Seidel method can be represented in matrix form as a splitting
The method of successive over-relaxation can be represented in matrix form as a splitting
In equation (1), let
Let us apply the splitting (7) which is used in the Jacobi method: we split A in such a way that B consists of all of the diagonal elements of A, and C consists of all of the off-diagonal elements of A, negated. (Of course this is not the only useful way to split a matrix into two matrices.) We have
Since B−1 ≥ 0 and C ≥ 0, the splitting (11) is a regular splitting. Since A−1 > 0, the spectral radius ρ(D) < 1. (The approximate eigenvalues of D are λi ≈ −0.4599820, −0.3397859, 0.7997679.) Hence, the matrix D is convergent and the method (5) necessarily converges for the problem (10). Note that the diagonal elements of A are all greater than zero, the off-diagonal elements of A are all less than zero, and A is strictly diagonally dominant.[11]
The method (5) applied to the problem (10) then takes the form
The exact solution to equation (12) is
The first few iterates for equation (12) are listed in the table below, beginning with x(0) = (0.0, 0.0, 0.0)T. From the table one can see that the method is evidently converging to the solution (13), albeit rather slowly.
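The Jacobi splitting can also be demonstrated numerically. The sketch below uses a hypothetical strictly diagonally dominant 3 × 3 system (not the article's problem (10)), with B the diagonal of A and C the negated off-diagonal part:

```python
# Jacobi iteration x(m+1) = B^{-1} (C x(m) + k), written element-wise,
# for the regular splitting A = B - C with B = diag(A).

def jacobi(A, k, iterations=50):
    n = len(A)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(k[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, -1.0, 0.0],     # strictly diagonally dominant, so Jacobi converges
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
k = [3.0, 2.0, 3.0]        # chosen so the exact solution is (1, 1, 1)

x = jacobi(A, k)
print([round(v, 6) for v in x])   # [1.0, 1.0, 1.0]
```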
As stated above, the Jacobi method (7) is the same as the specific regular splitting (11) demonstrated above.
Since the diagonal entries of the matrix A in problem (10) are all nonzero, we can express the matrix A as the splitting (6), where
We then have
The Gauss–Seidel method (8) applied to the problem (10) takes the form
The first few iterates for equation (15) are listed in the table below, beginning with x(0) = (0.0, 0.0, 0.0)T. From the table one can see that the method is evidently converging to the solution (13), somewhat faster than the Jacobi method described above.
Let ω = 1.1. Using the splitting (14) of the matrix A in problem (10) for the successive over-relaxation method, we have
The successive over-relaxation method (9) applied to the problem (10) takes the form
The first few iterates for equation (16) are listed in the table below, beginning with x(0) = (0.0, 0.0, 0.0)T. From the table one can see that the method is evidently converging to the solution (13), slightly faster than the Gauss–Seidel method described above.
|
https://en.wikipedia.org/wiki/Matrix_splitting
|
SPARQL (pronounced "sparkle", a recursive acronym[2] for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format.[3][4] It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation,[5][6] and SPARQL 1.1 in March 2013.[7]
SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns.[10]
Implementations for multiple programming languages exist.[11] There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer.[12] In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL[13] and to XQuery.[14]
SPARQL allows users to write queries that follow the RDF specification of the W3C. Thus, the entire dataset is a set of "subject-predicate-object" triples. Subjects and predicates are always URI identifiers, but objects can be URIs or literal values. This single physical schema of 3 "columns" is hypernormalized: what would be 1 relational record with (for example) 4 columns becomes 4 triples, with the subject repeated over and over, the predicate essentially being the column name, and the object being the column value. Although this seems unwieldy, the SPARQL syntax offers these features:
1. Subjects and Objects can be used to find the other including transitively.
Below is a set of triples. It should be clear that ex:sw001 and ex:sw002 link to ex:sw003, which itself has links:
In SPARQL, the first time a variable is encountered in the expression pipeline, it is populated with a result. The second and subsequent times it is seen, it is used as an input. If we assign ("bind") the URI ex:sw003 to the ?targets variable, then it drives a result into ?src; this tells us all the things that link to ex:sw003 (upstream dependency):
But with a simple switch of the binding variable, the behavior is reversed. This will produce all the things upon which ex:sw003 depends (downstream dependency):
Even more attractive is that we can easily instruct SPARQL to transitively follow the path:
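Such a transitive query might look like the following sketch, where the predicate name ex:linksTo is an illustrative stand-in for whatever linking predicate the triples above use; the + property-path operator matches one or more hops:

```sparql
SELECT ?dependency
WHERE {
  ex:sw003 ex:linksTo+ ?dependency .   # "+" follows the predicate transitively
}
```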
Bound variables can therefore also be lists and will be operated upon without complicated syntax. The effect of this is similar to the following pseudocode:
2. SPARQL expressions are a pipeline
Unlike SQL, which has subqueries and CTEs, SPARQL is much more like MongoDB or Spark. Expressions are evaluated exactly in the order they are declared, including filtering and joining of data. The programming model becomes what a SQL statement would be like with multiple WHERE clauses. The combination of list-aware subjects and objects plus a pipeline approach can yield extremely expressive queries spanning many different domains of data. JOIN as used in an RDBMS, and understanding the dynamics of the JOIN (e.g. what column in what table is suitable to join to another, inner vs. outer, etc.), is not relevant in SPARQL (and in some ways simpler) because an object, if a URI and not a literal, can implicitly be used only to find a subject. Here is a more comprehensive example that illustrates the pipeline using some syntax shortcuts.
Unlike relational databases, the object column is heterogeneous: the object data type, if not a URI, is usually implied (or specified in the ontology) by the predicate value. Literal nodes carry type information consistent with the underlying XSD namespace, including signed and unsigned short and long integers, single and double precision floats, datetime, penny-precise decimal, Boolean, and string. Triple store implementations on traditional relational databases will typically store the value as a string, and a fourth column will identify the real type. Polymorphic databases such as MongoDB and SQLite can store the native value directly into the object field.
Thus, SPARQL provides a full set of analytic query operations such as JOIN, SORT, and AGGREGATE for data whose schema is intrinsically part of the data rather than requiring a separate schema definition. However, schema information (the ontology) is often provided externally, to allow joining of different datasets unambiguously. In addition, SPARQL provides specific graph traversal syntax for data that can be thought of as a graph.
The example below demonstrates a simple query that leverages the ontology definition foaf ("friend of a friend").
Specifically, the following query returns names and emails of every person in the dataset:
This query joins all of the triples with a matching subject, where the type predicate, "a", is a person (foaf:Person), and the person has one or more names (foaf:name) and mailboxes (foaf:mbox).
For the sake of readability, the author of this query chose to reference the subject using the variable name "?person". Since the first element of the triple is always the subject, the author could have just as easily used any variable name, such as "?subj" or "?x". Whatever name is chosen, it must be the same on each line of the query to signify that the query engine is to join triples with the same subject.
The result of the join is a set of rows: ?person, ?name, ?email. This query returns the ?name and ?email because ?person is often a complex URI rather than a human-friendly string. Note that any ?person may have multiple mailboxes, so in the returned set, a ?name row may appear multiple times, once for each mailbox, duplicating the ?name.
An important consideration in SPARQL is that when lookup conditions are not met in the pipeline for terminal entities like ?email, the whole row is excluded, unlike SQL, where typically a null column is returned. The query above will return only those ?person where both at least one ?name and at least one ?email can be found. If a ?person had no email, they would be excluded. To align the output with that expected from an equivalent SQL query, the OPTIONAL keyword is required:
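A version of the query using OPTIONAL, so that people without a mailbox are still returned (with ?email left unbound, analogous to a SQL NULL), would look like:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?name ?email
WHERE {
  ?person a foaf:Person .
  ?person foaf:name ?name .
  OPTIONAL { ?person foaf:mbox ?email }   # match if present, otherwise leave ?email unbound
}
```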
This query can be distributed to multiple SPARQL endpoints (services that accept SPARQL queries and return results), computed, and results gathered, a procedure known asfederated query.
Whether federated or local, additional triple patterns in the query could join in different subject types, such as automobiles, allowing simple queries to return, for example, a list of names and emails for people who drive automobiles with high fuel efficiency.
In the case of queries that read data from the database, the SPARQL language specifies four different query forms for different purposes: SELECT (returns variable bindings as a table), CONSTRUCT (returns an RDF graph built from a template), ASK (returns a boolean result), and DESCRIBE (returns an RDF graph describing the matched resources).
Each of these query forms takes a WHERE block to restrict the query, although, in the case of the DESCRIBE query, the WHERE is optional.
SPARQL 1.1 specifies a language for updating the database with several new query forms.[15]
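For example, SPARQL 1.1 Update defines INSERT DATA and DELETE DATA operations (the book IRI and title below are illustrative placeholders):

```sparql
PREFIX dc: <http://purl.org/dc/elements/1.1/>

# add one triple to the default graph
INSERT DATA {
  <http://example/book1> dc:title "A new book" .
} ;

# remove it again
DELETE DATA {
  <http://example/book1> dc:title "A new book" .
}
```

Multiple update operations in a single request are separated by semicolons.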
Another SPARQL query example that models the question "What are all the country capitals in Africa?":
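Using a hypothetical example ontology (the ex: names below are invented for illustration), the question can be expressed with four triple patterns:

```sparql
PREFIX ex: <http://example.com/exampleOntology#>

SELECT ?capital ?country
WHERE {
  ?x ex:cityname ?capital ;     # ';' reuses ?x as the subject of the next pattern
     ex:isCapitalOf ?y .
  ?y ex:countryname ?country ;
     ex:isInContinent ex:Africa .
}
```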
Variables are indicated by a ? or $ prefix. Bindings for ?capital and ?country will be returned. When a triple pattern ends with a semicolon, the subject from that pattern implicitly completes the following pair into an entire triple. So, for example, ex:isCapitalOf ?y is short for ?x ex:isCapitalOf ?y.
The SPARQL query processor will search for sets of triples that match these four triple patterns, binding the variables in the query to the corresponding parts of each triple. Important to note here is the "property orientation": class matches can be conducted solely through class attributes or properties (see Duck typing).
To make queries concise, SPARQL allows the definition of prefixes and base URIs in a fashion similar to Turtle. In this query, the prefix "ex" stands for "http://example.com/exampleOntology#".
SPARQL has native dateTime operations as well. Here is a query that will return all pieces of software where the EOL date is greater than or equal to 1000 days from the release date and the release year is 2020 or greater:
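A sketch of such a query follows. The ex: predicates are hypothetical, and the date arithmetic with xsd:dayTimeDuration is an XPath-style extension offered by several engines rather than a guaranteed part of core SPARQL 1.1:

```sparql
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX ex:  <http://example.com/exampleOntology#>

SELECT ?software ?releaseDate ?eolDate
WHERE {
  ?software ex:releaseDate ?releaseDate ;
            ex:eolDate     ?eolDate .
  # release year is 2020 or later
  FILTER (YEAR(?releaseDate) >= 2020)
  # EOL is at least 1000 days after the release date
  FILTER (?eolDate >= ?releaseDate + "P1000D"^^xsd:dayTimeDuration)
}
```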
GeoSPARQL defines filter functions for geographic information system (GIS) queries using well-understood OGC standards (GML, WKT, etc.).
SPARUL is another extension to SPARQL. It enables the RDF store to be updated with this declarative query language, by adding INSERT and DELETE methods.
XSPARQL is an integrated query language combining XQuery with SPARQL to query both XML and RDF data sources at once.[16]
Open source, reference SPARQL implementations
See List of SPARQL implementations for more comprehensive coverage, including triplestores, APIs, and other storage systems that have implemented the SPARQL standard.
|
https://en.wikipedia.org/wiki/SPARQL
|
The Platform for Internet Content Selection (PICS) was a specification created by the W3C that used metadata to label webpages to help parents and teachers control what children and students could access on the Internet. The W3C Protocol for Web Description Resources project integrates PICS concepts with RDF. PICS was superseded by POWDER, which itself is no longer actively developed.[1] PICS often used content labeling from the Internet Content Rating Association, which has also been discontinued by the Family Online Safety Institute's board of directors.[2] An alternative self-rating system, named Voluntary Content Rating,[3] was devised by Solid Oak Software in 2010, in response to the perceived complexity of PICS.[4]
Internet Explorer 3, released in 1996, was one of the early web browsers to offer support for PICS. Internet Explorer 5 added a feature called "approved sites" that allowed extra sites to be added to the list in addition to the PICS list when it was being used.[5]
|
https://en.wikipedia.org/wiki/Platform_for_Internet_Content_Selection
|
Shapes Constraint Language[1] (SHACL) is a World Wide Web Consortium (W3C) standard language for describing Resource Description Framework (RDF) graphs. SHACL has been designed to enhance the semantic and technical interoperability layers of ontologies expressed as RDF graphs.[3]
SHACL models are defined in terms of constraints on the content, structure and meaning of a graph. SHACL is a highly expressive language. Among other features, it can express conditions that constrain the number of values a property may have, the type of such values, numeric ranges, string matching patterns, and logical combinations of such constraints. SHACL also includes an extension mechanism to express more complex conditions in languages such as SPARQL and JavaScript. SHACL Rules add inferencing capabilities to SHACL, allowing users to define what new statements can be inferred from existing (asserted) statements.
SHACL lets its users describe shapes of data, targeting where a specific shape applies.
A property shape describes characteristics of graph nodes that can be reached via a specific path. A path can be a single predicate (property) or a chain of predicates. A property shape must always specify a path, using the sh:path predicate.
One can think of property shapes that use simple paths as describing values of certain properties, e.g., values of an age property or values of a works for property.
Complex paths can specify a combination of different predicates in a chain, including the inverse direction, alternative predicates and transitive chains.
Property shapes can be defined as part of a node shape. In this case, a node shape points to property shapes using the sh:property predicate. Property shapes can also be "stand-alone", i.e., completely independent from any node shape.
A node shape describes characteristics of specific graph nodes irrespective of how you get to them. It can, for example, be said that certain graph nodes must be literals or URIs. It is common to include property shapes in a node shape, effectively defining values of many different properties of a node.
For example, a node shape for an employee may incorporate property shapes for the age and works for properties.
A constraint is a way to describe different characteristics of values. A shape will contain one or more constraint declarations. SHACL provides many pre-built constraint types. For example, sh:datatype is used to describe the type of literal values, e.g., whether they are strings, integers, or dates. sh:minCount is used to describe the minimum required number of values. sh:minLength and sh:maxLength are used to constrain the number of characters in a value.
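Putting these pieces together, a node shape with two property shapes for a hypothetical employee vocabulary (the ex: names are invented for illustration) could be written in Turtle as:

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.com/ns#> .

ex:EmployeeShape
    a sh:NodeShape ;
    sh:targetClass ex:Employee ;   # target: applies to all instances of ex:Employee
    sh:property [                  # property shape for the age property
        sh:path ex:age ;
        sh:datatype xsd:integer ;
        sh:minCount 1 ;
    ] ;
    sh:property [                  # property shape for the works-for property
        sh:path ex:worksFor ;
        sh:class ex:Company ;
    ] .
```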
A target connects a shape with the data it describes. The simplest way to specify a target is to say that a node shape is also a class. This means that its definition is applicable to all members (instances) of the class.
Other ways to define a target of a shape are by:
Target declarations can be included in a node shape or in a property shape. However, when a property shape is a part of a node shape, its own targets are ignored.
SHACL uses rdfs:subClassOf statements to identify targets. A shape targeting members of a class also targets members of all its subclasses. In other words, all SHACL definitions for a class are inherited by its subclasses.
SHACL enables validation of graphs. A SHACL validation engine takes as input a graph to be validated (called the data graph) and a graph containing SHACL shape declarations (called the shapes graph) and produces a validation report, also expressed as a graph. All these graphs can be represented in any Resource Description Framework (RDF) serialization format, including JSON-LD or Turtle.
SHACL is fairly unique in its approach in that it builds in not only the ability to specify a severity level for validation results, but also the ability to return suggestions on how data may be fixed when a validation result is raised. Built-in levels are Violation, Warning and Info, defaulting to Violation if no sh:severity has been specified for a shape. Users of SHACL can add other, custom levels of severity. Validation results may also have values for other properties, as described in the specification. For example, the property sh:resultMessage is designed to communicate additional textual details to users, including recommendations on how data may be fixed to address the validation result. In cases where a constraint does not have any values for sh:message in the shapes graph, the SHACL processor may automatically generate other values for sh:resultMessage. Some SHACL processors (e.g., the one implemented by TopQuadrant) have made these suggestions actionable in software, automating their application on the user's request.
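A validation report produced when a hypothetical node ex:Bob violates an integer-datatype constraint on an ex:age property might look like the following (the ex: names are illustrative; the report structure follows the SHACL specification):

```turtle
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.com/ns#> .

[] a sh:ValidationReport ;
   sh:conforms false ;                # the data graph does not conform
   sh:result [
       a sh:ValidationResult ;
       sh:resultSeverity sh:Violation ;
       sh:focusNode ex:Bob ;          # the node that failed validation
       sh:resultPath ex:age ;         # the property that failed
       sh:sourceConstraintComponent sh:DatatypeConstraintComponent ;
       sh:resultMessage "Value does not have datatype xsd:integer" ;
   ] .
```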
The World Wide Web Consortium published the following SHACL specifications:
The SHACL Test Suite and Implementation Report[7] linked from the SHACL W3C specification lists some open source tools that could be used for SHACL validation as of June 2019. By the end of 2019, many commercial RDF database and framework vendors had announced support for at least SHACL Core.
Some of the open source tools listed in the report are:
SHACL Playground is a free SHACL validation service implemented in JavaScript.[13]
Eclipse RDF4J is an open source Java framework by the Eclipse Foundation for processing RDF data, which supports SHACL validation.[14]
SHACL is supported by most RDF graph technology vendors, including Cambridge Semantics (Anzo),[15] Franz (AllegroGraph), Metaphacts,[16] Ontotext (GraphDB),[17] Stardog[18] and TopQuadrant. There is even support in commercial products that use the property graph data model, such as Neo4j.[19]
Levels of implementation may vary. At minimum, vendors support SHACL Core. Some also support SHACL SPARQL for higher expressivity, while others may support SHACL Advanced Features which include rules and functions.
|
https://en.wikipedia.org/wiki/SHACL
|
A canary trap is a method for exposing an information leak by giving different versions of a sensitive document to each of several suspects and seeing which version gets leaked. The variation may be as small as a single false statement, inserted to see whether the sensitive information gets out to other people as well. Special attention is paid to the quality of the prose of the unique language, in the hope that the suspect will repeat it verbatim in the leak, thereby identifying the version of the document.
The term was coined by Tom Clancy in his novel Patriot Games,[1][non-primary source needed] although Clancy did not invent the technique. The actual method (usually referred to as a barium meal test in espionage circles) has been used by intelligence agencies for many years. The fictional character Jack Ryan describes the technique he devised for identifying the sources of leaked classified documents:
Each summary paragraph has six different versions, and the mixture of those paragraphs is unique to each numbered copy of the paper. There are over a thousand possible permutations, but only ninety-six numbered copies of the actual document. The reason the summary paragraphs are so lurid is to entice a reporter to quote them verbatim in the public media. If he quotes something from two or three of those paragraphs, we know which copy he saw and, therefore, who leaked it.
A refinement of this technique uses a thesaurus program to shuffle through synonyms, thus making every copy of the document unique.[2]
According to the book Spycatcher[3] by Peter Wright (published in 1987), the technique is standard practice that has been used by MI5 (and other intelligence agencies) for many years under the name "barium meal test", named for the medical procedure. A barium meal test is more sophisticated than a canary trap because it is flexible and may take many different forms. However, the basic premise is to reveal a supposed secret to a suspected enemy (but nobody else), then monitor whether there is evidence of the fake information being utilized by the other side. For example, a suspected double agent could be offered some tempting "bait", e.g., be told that important information was stored at a dead drop site. The fake dead drop site could then be periodically checked for signs of disturbance. If the site showed signs of being disturbed (for instance, in order to copy microfilm stored there), then this would confirm that the suspected enemy really was an enemy, i.e., a double agent.
The technique of embedding significant information in a hidden form in a medium has been used in many ways, which are usually classified according to intent:
Following the troubled production of Star Trek: The Motion Picture in the late 1970s, Paramount Pictures effectively replaced Gene Roddenberry as producer of further movies in the franchise with Harve Bennett. Roddenberry was retained as an "executive consultant" due to the high regard in which the series' fans held him; while he had little real authority, he was still kept involved in the creative process. The fans often complained about particular plot developments proposed for the films, such as the death of Spock in Star Trek II, that Roddenberry had opposed. So, before any drafts of the screenplay for Star Trek III: The Search for Spock were circulated, Bennett arranged for each individual copy to have subtle clues distinguishing it from the others. Shortly after Roddenberry opposed the destruction of the Enterprise at the climax of that film, fans began to complain to Paramount and Bennett. He found that a leaked copy of the script was the one given to Roddenberry, but was unable to do anything about it.[5]
After a series of leaks at Tesla Motors in 2008, CEO Elon Musk reportedly sent slightly different versions of an e-mail to each employee in an attempt to reveal potential leakers. The e-mail was disguised as a request to employees to sign a new non-disclosure agreement. The plan was undermined when the company's general counsel forwarded his own unique version of the e-mail with the attached agreement. As a result, Musk's scheme was realized by employees, who now had a safe copy to leak.[6]
In October 2019, British celebrity Coleen Rooney used a barium meal test to identify who was leaking information from her private Instagram stories to the tabloid newspaper The Sun, by posting fake stories which were blocked to all but one account. When these details appeared in the press, she publicly identified the leaks as coming from the account of Rebekah Vardy, wife of soccer player Jamie Vardy. The subsequent libel trial became known as the Wagatha Christie case.[7][8]
In December 2020, Andrew Lewer, a Member of Parliament and Parliamentary Private Secretary in the UK government, was fired after a canary trap in the form of a letter reminding staff not to leak was published on the website Guido Fawkes.[9]
|
https://en.wikipedia.org/wiki/Canary_trap
|
Honeypots are security devices whose value lies in being probed and compromised. Traditional honeypots are servers (or devices that expose server services) that wait passively to be attacked. Client honeypots are active security devices in search of malicious servers that attack clients. The client honeypot poses as a client and interacts with the server to examine whether an attack has occurred. Often the focus of client honeypots is on web browsers, but any client that interacts with servers can be part of a client honeypot (for example FTP, email, SSH, etc.).
There are several terms used to describe client honeypots. Besides client honeypot, which is the generic classification, honeyclient is the other term that is generally used and accepted. However, there is a subtlety here, as "honeyclient" is actually a homograph that could also refer to the first known open source client honeypot implementation (see below), although this should be clear from the context.
A client honeypot is composed of three components. The first component, a queuer, is responsible for creating a list of servers for the client to visit. This list can be created, for example, through crawling. The second component is the client itself, which makes requests to the servers identified by the queuer. After the interaction with the server has taken place, the third component, an analysis engine, is responsible for determining whether an attack has taken place on the client honeypot.
In addition to these components, client honeypots are usually equipped with some sort of containment strategy to prevent successful attacks from spreading beyond the client honeypot. This is usually achieved through the use of firewalls and virtual machine sandboxes.
Analogous to traditional server honeypots, client honeypots are mainly classified by their interaction level, high or low, which denotes the level of functional interaction the server can utilize on the client honeypot. In addition, there are newer hybrid approaches, which combine both high and low interaction detection techniques.
High interaction client honeypots are fully functional systems comparable to real systems with real clients. As such, no functional limitations (besides the containment strategy) exist on high interaction client honeypots. Attacks on high interaction client honeypots are detected via inspection of the state of the system after a server has been interacted with. The detection of changes to the client honeypot may indicate the occurrence of an attack that has exploited a vulnerability of the client. An example of such a change is the presence of a new or altered file.
High interaction client honeypots are very effective at detecting unknown attacks on clients. However, the tradeoff for this accuracy is a performance hit from the amount of system state that has to be monitored to make an attack assessment. Also, this detection mechanism is prone to various forms of evasion by the exploit. For example, an attack could delay the exploit from immediately triggering (time bombs) or could trigger upon a particular set of conditions or actions (logic bombs). Since no immediate, detectable state change occurred, the client honeypot is likely to incorrectly classify the server as safe even though it did successfully perform its attack on the client. Finally, if the client honeypots are running in virtual machines, then an exploit may try to detect the presence of the virtual environment and cease from triggering or behave differently.
Capture[1] is a high interaction client honeypot developed by researchers at Victoria University of Wellington, New Zealand. Capture differs from existing client honeypots in various ways. First, it is designed to be fast: state changes are detected using an event-based model, allowing it to react to state changes as they occur. Second, Capture is designed to be scalable: a central Capture server is able to control numerous clients across a network. Third, Capture is intended to be a framework that allows different clients to be utilized. The initial version of Capture supported Internet Explorer, but the current version supports all major browsers (Internet Explorer, Firefox, Opera, Safari) as well as other HTTP-aware client applications, such as office applications and media players.
HoneyClient[2] is a web browser based (IE/Firefox) high interaction client honeypot designed by Kathy Wang in 2004 and subsequently developed at MITRE. It was the first open source client honeypot and is a mix of Perl, C++, and Ruby. HoneyClient is state-based and detects attacks on Windows clients by monitoring files, process events, and registry entries. It has integrated the Capture-HPC real-time integrity checker to perform this detection. HoneyClient also contains a crawler, so it can be seeded with a list of initial URLs from which to start and can then continue to traverse web sites in search of client-side malware.
HoneyMonkey[3] is a web browser based (IE) high interaction client honeypot implemented by Microsoft in 2005. It is not available for download. HoneyMonkey is state-based and detects attacks on clients by monitoring files, the registry, and processes. A unique characteristic of HoneyMonkey is its layered approach to interacting with servers in order to identify zero-day exploits. HoneyMonkey initially crawls the web with a vulnerable configuration. Once an attack has been identified, the server is reexamined with a fully patched configuration. If the attack is still detected, one can conclude that the attack utilizes an exploit for which no patch has been publicly released yet and is therefore quite dangerous.
Shelia[4] is a high interaction client honeypot developed by Joan Robert Rocaspana at Vrije Universiteit Amsterdam. It integrates with an email reader and processes each email it receives (URLs and attachments). Depending on the type of URL or attachment received, it opens a different client application (e.g. browser, office application, etc.). It monitors whether executable instructions are executed in the data area of memory (which would indicate that a buffer overflow exploit has been triggered). With such an approach, Shelia is not only able to detect exploits, but can actually ward off exploits from triggering.
The Spycrawler[5], developed at the University of Washington, is yet another browser based (Mozilla) high interaction client honeypot, developed by Moshchuk et al. in 2005. This client honeypot is not available for download. The Spycrawler is state-based and detects attacks on clients by monitoring files, processes, the registry, and browser crashes. Spycrawler's detection mechanism is event-based. Further, it increases the passage of time of the virtual machine the Spycrawler is operating in to overcome (or rather reduce the impact of) time bombs.
WEF[6] is an implementation of automatic drive-by-download detection in a virtualized environment, developed by Thomas Müller, Benjamin Mack and Mehmet Arziman, three students from the Hochschule der Medien (HdM), Stuttgart, during the summer term of 2006. WEF can be used as an active HoneyNet with a complete virtualization architecture underneath for rollbacks of compromised virtualized machines.
Low interaction client honeypots differ from high interaction client honeypots in that they do not utilize an entire real system, but rather use lightweight or simulated clients to interact with the server (in the browser world, they are similar to web crawlers). Responses from servers are examined directly to assess whether an attack has taken place. This could be done, for example, by examining the response for the presence of malicious strings.
Low interaction client honeypots are easier to deploy and operate than high interaction client honeypots and also perform better. However, they are likely to have a lower detection rate since attacks have to be known to the client honeypot in order for it to detect them; new attacks are likely to go unnoticed. They also suffer from the problem of evasion by exploits, which may be exacerbated due to their simplicity, thus making it easier for an exploit to detect the presence of the client honeypot.
HoneyC[7] is a low interaction client honeypot developed at Victoria University of Wellington by Christian Seifert in 2006. HoneyC is a platform-independent open source framework written in Ruby. It currently concentrates on driving a web browser simulator to interact with servers. Malicious servers are detected by statically examining the web server's response for malicious strings through the use of Snort signatures.
Monkey-Spider[8] is a low interaction client honeypot initially developed at the University of Mannheim by Ali Ikinci. Monkey-Spider is a crawler-based client honeypot initially utilizing anti-virus solutions to detect malware. It is claimed to be fast and expandable with other detection mechanisms. The work started as a diploma thesis and is continued and released as free software under the GPL.
PhoneyC[9] is a low interaction client developed by Jose Nazario. PhoneyC mimics legitimate web browsers and can understand dynamic content by de-obfuscating malicious content for detection. Furthermore, PhoneyC emulates specific vulnerabilities to pinpoint the attack vector. PhoneyC is a modular framework that enables the study of malicious HTTP pages and understands modern vulnerabilities and attacker techniques.
SpyBye[10] is a low interaction client honeypot developed by Niels Provos. SpyBye allows a web master to determine whether a web site is malicious through a set of heuristics and by scanning content against the ClamAV engine.
Thug[11] is a low interaction client honeypot developed by Angelo Dell'Aera. Thug emulates the behaviour of a web browser and is focused on the detection of malicious web pages. The tool uses the Google V8 JavaScript engine and implements its own Document Object Model (DOM). The most important and unique features of Thug are the ActiveX controls handling module (vulnerability module) and static plus dynamic analysis capabilities (using an Abstract Syntax Tree and the Libemu shellcode analyser). Thug is written in Python under the GNU General Public License.
YALIH (Yet Another Low Interaction Honeyclient)[12] is a low interaction client honeypot developed by Masood Mansoori from the Honeynet chapter of the Victoria University of Wellington, New Zealand, designed to detect malicious websites through signature and pattern matching techniques. YALIH can collect suspicious URLs from malicious website databases, the Bing API, and inbox and spam folders via the POP3 and IMAP protocols. It can perform JavaScript extraction, de-obfuscation and de-minification of scripts embedded within a website, and can emulate referrers and browser agents as well as handle redirection, cookies and sessions. Its visitor agent is capable of fetching a website from multiple locations to bypass geo-location and IP cloaking attacks. YALIH can also generate automated signatures to detect variations of an attack. YALIH is available as an open source project.
miniC[13] is a low interaction client honeypot based on the wget retriever and the Yara engine. It is designed to be light, fast and suitable for retrieval of a large number of websites. miniC allows the referrer, user-agent, accept_language and a few other variables to be set and simulated. miniC was designed at the New Zealand Honeynet chapter of the Victoria University of Wellington.
Hybrid client honeypots combine both low and high interaction client honeypots to gain from the advantages of both approaches.
The HoneySpider[14] network is a hybrid client honeypot developed as a joint venture between NASK/CERT Polska, GOVCERT.NL[1] and SURFnet.[2] The project's goal is to develop a complete client honeypot system, based on existing client honeypot solutions and a crawler designed especially for the bulk processing of URLs.
|
https://en.wikipedia.org/wiki/Client_honeypot
|
Cowrie is a medium interaction SSH and Telnet honeypot designed to log brute force attacks and shell interaction performed by an attacker. Cowrie can also function as an SSH and Telnet proxy to observe attacker behavior against another system. Cowrie was developed from Kippo.
Cowrie has been referenced in published papers.[1][2] The book "Hands-On Ethical Hacking and Network Defense" includes Cowrie in a list of five commercial honeypots.[3]
|
https://en.wikipedia.org/wiki/Cowrie_(honeypot)
|
HoneyMonkey, short for Strider HoneyMonkey Exploit Detection System, is a Microsoft Research honeypot. The implementation uses a network of computers to crawl the World Wide Web searching for websites that use browser exploits to install malware on the HoneyMonkey computer. A snapshot of the memory, executables and registry of the honeypot computer is recorded before crawling a site. After visiting the site, the state of memory, executables, and registry is recorded and compared to the previous snapshot. The changes are analyzed to determine whether the visited site installed any malware onto the client honeypot computer.[1][2]
HoneyMonkey is based on the honeypot concept, with the difference that it actively seeks websites that try to exploit it. The term was coined by Microsoft Research in 2005. With honeymonkeys it is possible to find open security holes that are not yet publicly known but are being exploited by attackers.
A single HoneyMonkey is an automated program that tries to mimic the action of a user surfing the net. A series of HoneyMonkeys are run on virtual machines running Windows XP, at various levels of patching: some are fully patched, some fully vulnerable, and others in between these two extremes. The HoneyMonkey program records every read or write of the file system and registry, thus keeping a log of what data was collected by the web site and what software was installed by it. Once the program leaves a site, this log is analyzed to determine if any malware has been loaded. In such cases, the log of actions is sent for further manual analysis to an external controller program, which logs the exploit data and restarts the virtual machine to allow it to crawl other sites starting in a known uninfected state.
Out of the 10 billion plus web pages, there are many legitimate sites that do not exploit browser vulnerabilities, and to start crawling from most of these sites would be a waste of resources. An initial list was therefore manually created that listed sites known to use browser vulnerabilities to compromise visiting systems with malware. The HoneyMonkey system then follows links from exploit sites, as they have a higher probability of leading to other exploit sites. The HoneyMonkey system also records how many links point to an exploit site, thereby giving a statistical indication of how easily an exploit site is reached.
HoneyMonkey uses a black box system to detect exploits, i.e., it does not use signatures of browser exploits to detect them. A Monkey Program, a single instance of the HoneyMonkey project, launches Internet Explorer to visit a site. It also records all registry and file read or write operations. The monkey does not allow pop-ups, nor does it allow installation of software. Any read or write that happens outside of Internet Explorer's temporary folder therefore must have used a browser exploit. These are then analyzed by malware detection programs and then manually analyzed. The monkey program then restarts the virtual machine to crawl another site in a fresh state.
|
https://en.wikipedia.org/wiki/HoneyMonkey
|
Honeytokens are fictitious words or records that are added to legitimate databases. They allow administrators to track data in situations they wouldn't normally be able to track, such as cloud-based networks.[1] If data is stolen, honeytokens allow administrators to identify whom it was stolen from or how it was leaked. If there are three locations for medical records, different honeytokens in the form of fake medical records could be added to each location, with different honeytokens in each set of records.[2]
Honeytokens are unique in that they can serve as an intrusion-detection system (IDS), proactively finding suspicious activity within a computer network and alerting the system administrator to things that would otherwise go unnoticed. Alongside this practice in organizations, honeytokens provide drastic improvements to network security, as firewalls alone can only look outward to prevent incoming threats, while honeytokens look inward to see threats that may have slipped past a firewall.[3] This is one case where they go beyond merely ensuring integrity and, with some reactive security mechanisms, may prevent the malicious activity, e.g. by dropping all packets containing the honeytoken at the router. However, such mechanisms have pitfalls, because they might cause serious problems if the honeytoken was poorly chosen and appeared in otherwise legitimate network traffic, which was then dropped.
In the field of computer security, honeytokens are honeypots that are not computer systems. Their value lies not in their use, but in their abuse. As such, they are a generalization of such ideas as the honeypot and the canary values often used in stack protection schemes. Honeytokens do not necessarily prevent any tampering with the data, but instead give the administrator a further measure of confidence in the data integrity.
The term was first coined by Augusto Paes de Barros in 2003.[4][5]
Honeytokens can exist in many forms, from a dead, fake account to a database entry that would only be selected by malicious queries, making the concept ideally suited to ensuring data integrity. A particular example of a honeytoken is a fake email address used to detect whether a mailing list has been stolen.[6][7]
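The fake-record idea above can be sketched in a few lines. This is an illustrative toy, not any particular product: the store names, record fields, and token format are all assumptions.

```python
# A minimal honeytoken sketch: plant a unique fake record in each copy of a
# dataset, then identify which copy leaked when a token shows up elsewhere.

import secrets

def plant_honeytokens(stores):
    """Add a unique fake record to each data store; return token -> store map."""
    token_map = {}
    for name, records in stores.items():
        token = f"patient-{secrets.token_hex(4)}@example.invalid"
        records.append({"name": "John Doe", "email": token})  # the fake record
        token_map[token] = name
    return token_map

def identify_leak(leaked_records, token_map):
    """Return the store a leaked dataset came from, if any token matches."""
    for rec in leaked_records:
        if rec.get("email") in token_map:
            return token_map[rec["email"]]
    return None
```

Because each store carries a different token, finding a token in a leaked dump pinpoints the compromised copy without affecting legitimate use of the data.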
Thiscomputer securityarticle is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Honeytoken
|
A network telescope (also known as a packet telescope,[1] darknet, Internet motion sensor or black hole)[2][3][4] is an Internet system that allows one to observe different large-scale events taking place on the Internet. The basic idea is to observe traffic targeting the dark (unused) address space of the network. Since all traffic to these addresses is suspicious, one can gain information about possible network attacks (random scanning worms, and DDoS backscatter) as well as other misconfigurations by observing it.
The resolution of the Internet telescope depends on the number of IP addresses it monitors. For example, a large Internet telescope that monitors traffic to 16,777,216 addresses (a /8 Internet telescope in IPv4) has a higher probability of observing a relatively small event than a smaller telescope that monitors 65,536 addresses (a /16 Internet telescope).
The naming comes from an analogy to optical telescopes, where a larger physical size allows more photons to be observed.[5]
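The resolution argument can be made concrete with a back-of-the-envelope calculation. Under the simplifying assumption that a scanner picks IPv4 targets uniformly at random, the chance that any single probe lands inside the telescope is just the monitored fraction of the 2^32-address space:

```python
# Back-of-the-envelope resolution of an IPv4 network telescope: assuming
# targets are chosen uniformly at random over the 2**32 address space, the
# probability that a single probe is observed equals the monitored fraction.

def observation_probability(prefix_length):
    """Probability one random IPv4 probe hits a telescope of the given prefix."""
    monitored = 2 ** (32 - prefix_length)
    return monitored / 2 ** 32

def probes_seen(prefix_length, total_probes):
    """Expected number of probes the telescope observes out of total_probes."""
    return total_probes * observation_probability(prefix_length)

print(observation_probability(8))   # /8 telescope: 1/256
print(observation_probability(16))  # /16 telescope: 1/65536
```

So a /8 telescope sees, in expectation, one out of every 256 random probes, while a /16 telescope sees only one in 65,536, which is why the larger telescope resolves smaller events.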
A variant of a network telescope is a sparse darknet, or greynet, consisting of a region of IP address space that is sparsely populated with "darknet" addresses interspersed with active (or "lit") IP addresses.[2] These include a greynet assembled from 210,000 unused IP addresses mainly located in Japan.[6]
Thiscomputer networkingarticle is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Network_telescope
|
Operation Trust (Russian: операция "Трест", romanized: operatsiya "Trest")[1] was a counterintelligence operation of the State Political Directorate (GPU) of the Soviet Union. The operation, set up by the GPU's predecessor, the Cheka, ran from 1921 to 1927.[2] It established a fake anti-Bolshevik resistance organization, the "Monarchist Union of Central Russia" (MUCR; Монархическое объединение Центральной России, МОЦР), in order to help the OGPU identify real monarchists and anti-Bolsheviks.[3] The front company created for the operation was called the Moscow Municipal Credit Association.[4]
The head of the MUCR was Alexander Yakushev, a former bureaucrat of the Ministry of Communications of Imperial Russia who, after the Russian Revolution, joined the People's Commissariat of Foreign Trade once the Soviets began to allow former specialists (called "spetsy", Russian: спецы) to resume the positions of their expertise. This position allowed him to travel abroad and contact Russian emigrants. Yakushev was arrested for his contacts with the exiled White movement and, in the same year, was recruited into the Soviet secret police by Artur Artuzov.
MUCR kept the monarchist general Alexander Kutepov from active actions, as he was convinced to wait for the development of internal anti-Bolshevik forces. Kutepov had previously believed in militant action as a solution to the Soviet occupation, and had formed the "combat organization", a militant splinter from the Russian All-Military Union (Russian: Русский Обще-Воинский Союз, Russkiy Obshche-Voinskiy Soyuz) led by General Baron Pyotr Nikolayevich Wrangel.[5] Kutepov also created the Inner Line as a counter-intelligence organization to prevent Bolshevik penetration. It caused the Cheka some problems but was not overly successful.
Among the successes of Trust was the luring of Boris Savinkov and Sidney Reilly into the Soviet Union, where they were captured.
The Soviets did not organize Trust from scratch. The White Army had left sleeper agents behind, and there were also Royalist Russians who had not left after the Civil War. These people cooperated to the point of having a loose organizational structure. When the OGPU discovered them, it did not liquidate them all, but instead manoeuvred them into creating a shell organization for its own use.
Still another episode of the operation was an "illegal" trip (in fact monitored by the OGPU) of a notable émigré, Vasily Shulgin, into the Soviet Union. After his return he published a book, Three Capitals, with his impressions. In it he wrote, in part, that contrary to his expectations Russia was reviving, and that the Bolsheviks would probably be removed from power.
In 1993, a Western historian who was granted limited access to the Trust files, John Costello, reported that they comprised thirty-seven volumes and were such a bewildering welter of double agents, changed code names, and interlocking deception operations, with "the complexity of a symphonic score", that Russian historians from the Intelligence Service had difficulty separating fact from fantasy. The book in which this was written was co-authored by ex-KGB spokesman Oleg Tsarev.[6]
Defector Vasili Mitrokhin reported that the Trust files were not housed at the SVR offices in Yasenevo, but were kept in the special archival collections (spetsfondi) of the FSB at the Lubyanka.
In 1967, a Soviet adventure TV series, Operation Trust (Операция "Трест"), was created.[7]
In the 1920s and 1930s, the Soviet Union also pursued multiple "Trest-like" deception operations in East Asia, including "Organizator", "Shogun", "Dreamers" and "Maki Mirage", all against Japan. Like "Trest", they involved the control of fake anti-Soviet operations to lure rivals.[8]
|
https://en.wikipedia.org/wiki/Operation_Trust
|
A tarpit is a service on a computer system (usually a server) that purposely delays incoming connections. The technique was developed as a defense against spam and computer worms. The idea is that network abuses such as spamming or broad scanning are less effective, and therefore less attractive, if they take too long. The concept is analogous to a tar pit, in which animals can get bogged down and slowly sink under the surface, as in a swamp.
By 2000, SMTP e-mail servers such as Postfix had rate-limiting options that were referred to as tarpitting by the community.[1][2] In 2001, following the Code Red worm, Tom Liston developed a network tarpitting program, LaBrea.[3][4] It could use a single computer to operate a tarpit on all unused addresses in a network.
One avenue once considered for battling bulk spam was to mandate a small fee for every submitted mail. By introducing such an artificial cost, with negligible impact on legitimate use as long as the fee is small enough, automated mass-scale spam would instantly become unattractive. Tarpitting can be seen as a similar (but technically much less complex) approach, where the cost to the spammer is measured in time and efficiency rather than money.[5]
Authentication procedures increase response times as users attempt invalid passwords, and SMTP authentication is no exception. However, server-to-server SMTP transfers, which are where spam is injected, require no authentication. Various methods have been discussed and implemented for SMTP tarpits: systems that plug into the Mail Transfer Agent (MTA, i.e. the mail server software) or sit in front of it as a proxy.[citation needed]
One method increases transfer time for all mails by a few seconds by delaying the initial greeting message ("greet delay"). The idea is that it does not matter if a legitimate mail takes a little longer to deliver, but due to their high volume, it makes a difference for spammers. The downside is that mailing lists and other legitimate mass mailings have to be explicitly whitelisted or they will suffer too.[citation needed]
Some email systems, such as sendmail 8.13+, implement a stronger form of greet delay. This form pauses when the connection is first established and listens for traffic. If it detects any traffic prior to its own greeting (in violation of RFC 2821), it closes the connection. Since many spammers do not write their SMTP implementations to the specification, this can reduce the number of incoming spam messages.[citation needed]
Another method is to delay only known spammers, e.g. by using a blacklist (see Spamming, DNSBL). OpenBSD has integrated this method into its core system since OpenBSD 3.3,[6] with a special-purpose daemon (spamd) and functionality in the firewall (pf) to redirect known spammers to this tarpit.
MS Exchange can tarpit senders who send to an invalid address. Exchange can do this because the SMTP connector is connected to the authentication system.[citation needed]
A more subtle idea isgreylisting, which, in simple terms, rejects the first connection attempt from any previously unseen IP address. The assumption is that most spammers make only one connection attempt (or a few attempts over a short period of time) to send each message, whereas legitimate mail delivery systems will keep retrying over a longer period. After they retry, they will eventually be allowed in without any further impediments.[citation needed]
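The greylisting behaviour described above can be sketched in a few lines. This is a toy model, not production MTA code; keying on the (IP, sender, recipient) triplet and the 300-second window are common but illustrative choices:

```python
# A minimal greylisting sketch: the first delivery attempt from an unseen
# (ip, sender, recipient) triplet is temporarily rejected; a retry after
# the delay window is accepted.

import time

class Greylist:
    def __init__(self, delay_seconds=300):
        self.delay = delay_seconds
        self.first_seen = {}  # triplet -> timestamp of first attempt

    def check(self, ip, sender, recipient, now=None):
        """Return 'tempfail' on a first or too-early attempt, 'accept' after the delay."""
        now = time.time() if now is None else now
        key = (ip, sender, recipient)
        if key not in self.first_seen:
            self.first_seen[key] = now
            return "tempfail"   # SMTP 4xx: temporary failure, try again later
        if now - self.first_seen[key] >= self.delay:
            return "accept"
        return "tempfail"       # retried too soon
```

A legitimate MTA retries after a few minutes and is accepted; a spam cannon that never retries is silently filtered without any content inspection.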
Finally, a more elaborate method tries to glue tarpits and filtering software together, by filtering e-mail in realtime, while it is being transmitted, and adding delays to the communication in response to the filter's "spam likeliness" indicator. For example, the spam filter would make a "guess" after each line or after every x bytes received as to how likely this message is going to be spam. The more likely this is, the more the MTA will delay the transmission.[citation needed]
SMTP consists of requests, which are mostly four-letter words such as MAIL, and replies, which are (minimally) three-digit numbers. In the last line of a reply, the number is followed by a space; in the preceding lines it is followed by a hyphen. Thus, on determining that a message being sent is spam, a mail server can reply:
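A multi-line rejection following the hyphen/space convention just described might, for example, look like this (the wording is illustrative; only the 5xx code and the hyphen-vs-space continuation rule matter):

```
554-Your message has been classified
554-as unsolicited bulk email
554 and will not be delivered.
```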
The tarpit waits fifteen or more seconds between lines (long delays are allowed in SMTP, as humans sometimes send mail manually to test mail servers). This ties up the SMTP sending process on the spammer's computer, limiting the amount of spam it can send.[citation needed]
A machine running the LaBrea tarpit listens for Address Resolution Protocol requests that go unanswered (indicating unused addresses), then replies to those requests, receives the initial SYN packet of the scanner and sends a SYN/ACK in response.[4] It does not open a socket or prepare a connection; in fact, it can forget all about the connection after sending the SYN/ACK. However, the remote site sends its ACK (which gets ignored) and believes the three-way handshake to be complete. It then starts to send data, which never reaches a destination. The connection will time out after a while, but since the remote system believes it is dealing with a live (established) connection, it is conservative about timing out and will instead retransmit, back off, retransmit, and so on for quite a while.
The Linux kernel can be patched to allow tarpitting of incoming connections instead of the more usual dropping of packets. This is implemented in iptables by the addition of a TARPIT target.[7] The same packet-inspection and matching features can be applied to tarpit targets as are applied to other targets.
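As a sketch of what such a rule looks like, assuming the out-of-tree TARPIT target (distributed with the xtables-addons package, not the stock kernel) has been built for the running kernel:

```
# Tarpit new inbound SMTP connections instead of dropping them.
iptables -A INPUT -p tcp --dport 25 -m conntrack --ctstate NEW -j TARPIT
```

All the usual iptables match criteria (source address sets, rate limits, interface filters) can be combined with the TARPIT target just as with ACCEPT or DROP.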
A server can determine that a given mail message is spam, e.g. because it was addressed to a spam trap, or following reports by trusted users. The server may decide that the IP address responsible for submitting the message deserves tarpitting. Cross-checking against available DNSBLs can help to avoid including innocent forwarders in the tarpit database. A daemon exploiting Linux libipq can then check the remote address of incoming SMTP connections against that database. SpamCannibal is GPL software designed around this idea;[8] Stockade is a similar project implemented using the FreeBSD ipfirewall.
One advantage of tarpitting at the IP level is that regular TCP connections handled by an MTA are stateful. That is, although the MTA does not use much CPU while it sleeps, it still uses the amount of memory required to hold the state of each connection. LaBrea-style tarpitting, by contrast, is stateless, thus gaining the advantage of a reduced cost relative to the spammer's box. However, by making use of botnets, spammers can externalize most of their computer-resource costs.[citation needed]
Tarpits can also be used to trap artificial-intelligence scrapers. In this model, an endpoint is set up that serves incoherent text generated with a Markov chain to AI scrapers in order to poison their datasets. The URL is blacklisted in robots.txt so that legitimate crawlers are not caught. One such tool is Nepenthes.[9]
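A toy version of such a Markov-chain babbler needs nothing beyond the standard library. The corpus and the order-1 chain here are illustrative assumptions; a real tarpit would use a much larger corpus and serve the output over HTTP:

```python
# A toy Markov-chain text generator of the kind an AI-scraper tarpit might
# serve: statistically plausible, semantically incoherent word sequences.

import random

def build_chain(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def babble(chain, start, length, seed=0):
    """Generate up to `length` words of plausible-looking nonsense."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break               # dead end: word never followed by anything
        out.append(rng.choice(followers))
    return " ".join(out)
```

Each page of babble costs the operator almost nothing to generate but, if ingested at scale, degrades the scraper's training data.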
A tarpitted connection may generate a significant amount of traffic towards the receiver, because the sender considers the connection established and tries to send (and then retransmit) actual data. In practice, given the current average botnet size, a more reasonable solution may be to drop suspicious traffic completely, without tarpitting. This way, only TCP SYN segments are retransmitted, not whole HTTP or HTTPS requests.[10]
As well as MS Exchange, there have been two other successful commercial implementations of the tarpit idea. The first was developed by TurnTide, a Philadelphia-based startup company, which was acquired by Symantec in 2004 for $28 million in cash.[11] The TurnTide Anti Spam Router contains a modified Linux kernel that allows it to play various tricks with TCP traffic, such as varying the TCP window size. By grouping various email senders into different traffic classes and limiting the bandwidth for each class, the amount of abusive traffic is reduced, particularly when the abusive traffic comes from single sources that are easily identified by their high traffic volume.
After the Symantec acquisition, a Canadian startup company called MailChannels released their "Traffic Control" software, which uses a slightly different approach to achieve similar results. Traffic Control is a semi-realtime SMTP proxy. Unlike the TurnTide appliance, which applies traffic shaping at the network layer, Traffic Control applies traffic shaping to individual senders at the application layer. This approach handles spam traffic originating from botnets somewhat more effectively, because it allows the software to slow traffic from individual spam zombies rather than requiring zombie traffic to be aggregated into a class.[citation needed]
|
https://en.wikipedia.org/wiki/Tarpit_(networking)
|
In general, bootstrapping usually refers to a self-starting process that is supposed to continue or grow without external input. Many analytical techniques are often called bootstrap methods in reference to their self-starting or self-supporting implementation, such as bootstrapping (statistics), bootstrapping (finance), or bootstrapping (linguistics).
Tall boots may have a tab, loop or handle at the top known as a bootstrap, allowing one to use fingers or a boot hook tool to help pull the boots on. The saying "to pull oneself up by one's bootstraps"[1] was already in use during the 19th century as an example of an impossible task. The idiom dates at least to 1834, when it appeared in the Workingman's Advocate: "It is conjectured that Mr. Murphee will now be enabled to hand himself over the Cumberland river or a barn yard fence by the straps of his boots."[2] In 1860 it appeared in a comment about philosophy of mind: "The attempt of the mind to analyze itself [is] an effort analogous to one who would lift himself by his own bootstraps."[3] Bootstrap as a metaphor, meaning to better oneself by one's own unaided efforts, was in use in 1922.[4] This metaphor spawned additional metaphors for a series of self-sustaining processes that proceed without external help.[5]
The term is sometimes attributed to a story in Rudolf Erich Raspe's The Surprising Adventures of Baron Munchausen, but in that story Baron Munchausen pulls himself (and his horse) out of a swamp by his hair (specifically, his pigtail), not by his bootstraps – and no explicit reference to bootstraps has been found elsewhere in the various versions of the Munchausen tales.[2]
Originally meaning to attempt something ludicrously far-fetched or even impossible, the phrase "Pull yourself up by your bootstraps!" has since been utilized as a narrative for economic mobility or a cure for depression. That idea is believed to have been popularized by American writer Horatio Alger in the 19th century.[6] To request that someone "bootstrap" is to suggest that they might overcome great difficulty by sheer force of will.[7]
Critics have observed that the phrase is used to portray unfair situations as far more meritocratic than they really are.[8][9][7]A 2009 study found that 77% of Americans believe that wealth is often the result of hard work.[10]Various studies have found that the main predictor of future wealth is not IQ or hard work, but initial wealth.[7][11]
In computer technology, the term bootstrapping refers to language compilers that are able to be coded in the same language they compile. (For example, a C compiler is now written in the C language; once the basic compiler is written, improvements can be made iteratively, thus "pulling the language up by its bootstraps".) Also, booting usually refers to the process of loading the basic software into the memory of a computer after power-on or general reset: the kernel loads the operating system, which then takes care of loading other device drivers and software as needed.
Booting is the process of starting a computer, specifically with regard to starting its software. The process involves a chain of stages, in which at each stage a relatively small and simple program loads and then executes the larger, more complicated program of the next stage. It is in this sense that the computer "pulls itself up by its bootstraps"; i.e., it improves itself by its own efforts. Booting is a chain of events that starts with execution of hardware-based procedures and may then hand off to firmware and software which is loaded into main memory. Booting often involves processes such as performing self-tests, loading configuration settings, and loading a BIOS, resident monitors, a hypervisor, an operating system, or utility software.
The computer term bootstrap began as a metaphor in the 1950s. In computers, pressing a bootstrap button caused a hardwired program to read a bootstrap program from an input unit. The computer would then execute the bootstrap program, which caused it to read more program instructions. It became a self-sustaining process that proceeded without external help from manually entered instructions. As a computing term, bootstrap has been used since at least 1953.[12]
Bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor (e.g., ed) and an assembler program. Using these tools, one can write a more complex text editor, and a simple compiler for a higher-level language, and so on, until one can have a graphical IDE and an extremely high-level programming language.
Historically, bootstrapping also refers to an early technique for computer program development on new hardware. The technique described in this paragraph has been replaced by the use of a cross compiler executed by a pre-existing computer. Bootstrapping in program development began during the 1950s, when each program was constructed on paper in decimal code or in binary code, bit by bit (1s and 0s), because there was no high-level computer language, no compiler, no assembler, and no linker. A tiny assembler program was hand-coded for a new computer (for example the IBM 650) which converted a few instructions into binary or decimal code: A1. This simple assembler program was then rewritten in its just-defined assembly language, but with extensions that would enable the use of some additional mnemonics for more complex operation codes. The enhanced assembler's source program was then assembled by its predecessor's executable (A1) into binary or decimal code to give A2, and the cycle repeated (now with those enhancements available) until the entire instruction set was coded, branch addresses were automatically calculated, and other conveniences (such as conditional assembly, macros, optimisations, etc.) were established. This was how the early Symbolic Optimal Assembly Program (SOAP) was developed. Compilers, linkers, loaders, and utilities were then coded in assembly language, further continuing the bootstrapping process of developing complex software systems by using simpler software.
The term was also championed by Doug Engelbart to refer to his belief that organizations could better evolve by improving the process they use for improvement (thus obtaining a compounding effect over time). His SRI team that developed the NLS hypertext system applied this strategy by using the tool they had developed to improve the tool.
The development of compilers for new programming languages first developed in an existing language but then rewritten in the new language and compiled by itself, is another example of the bootstrapping notion.
During the installation of computer programs, it is sometimes necessary to update the installer or package manager itself. The common pattern for this is to use a small executable bootstrapper file (e.g.,setup.exe) which updates the installer and starts the real installation after the update. Sometimes the bootstrapper also installs other prerequisites for the software during the bootstrapping process.
A bootstrapping node, also known as a rendezvous host,[13] is a node in an overlay network that provides initial configuration information to newly joining nodes so that they may successfully join the overlay network.[14][15]
A type of computer simulation called discrete-event simulation represents the operation of a system as a chronological sequence of events. A technique called bootstrapping the simulation model is used, which bootstraps initial data points using a pseudorandom number generator to schedule an initial set of pending events, which schedule additional events; with time, the distribution of event times approaches its steady state, and the bootstrapping behavior is overwhelmed by steady-state behavior.
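A minimal sketch of this bootstrapping, with exponential inter-event times as an illustrative assumption (the initial-event count and rate are arbitrary):

```python
# A tiny discrete-event simulation: seed ("bootstrap") the pending event
# list with pseudorandom arrivals, then let each event schedule a successor.

import heapq
import random

def simulate(n_initial=5, n_events=100, seed=42):
    rng = random.Random(seed)
    pending = []
    # Bootstrap: schedule an initial set of pending events.
    for _ in range(n_initial):
        heapq.heappush(pending, rng.expovariate(1.0))
    processed = []
    while pending and len(processed) < n_events:
        t = heapq.heappop(pending)   # next event in time order
        processed.append(t)
        # Each processed event schedules one follow-up after a random delay.
        heapq.heappush(pending, t + rng.expovariate(1.0))
    return processed
```

After the initial seeding, the event list is entirely self-sustaining: every event is scheduled by an earlier event, and the influence of the bootstrap phase fades as the run lengthens.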
Bootstrapping is a technique used to iteratively improve a classifier's performance. Typically, multiple classifiers are trained on different subsets of the input data, and on prediction tasks their outputs are combined.
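A small sketch of the combine-multiple-classifiers idea, using trivial threshold rules on synthetic 1-D data (the data, subset sizes, and ensemble size are all illustrative assumptions):

```python
# Train several weak "classifiers" (threshold rules) on different random
# subsets of the data, then combine their predictions by majority vote.

import random

def train_threshold(points):
    """A trivial 1-D classifier: threshold at the mean of the training x's."""
    t = sum(x for x, _ in points) / len(points)
    return lambda x: 1 if x >= t else 0

def bagged_predict(classifiers, x):
    """Majority vote over all classifiers."""
    votes = sum(clf(x) for clf in classifiers)
    return 1 if votes > len(classifiers) / 2 else 0

rng = random.Random(0)
data = [(x / 10, 1 if x >= 50 else 0) for x in range(100)]  # label 1 iff x >= 5.0
classifiers = [train_threshold(rng.sample(data, 30)) for _ in range(9)]
```

Each individual threshold is noisy, but the vote smooths out the noise; this is the core of ensemble methods such as bagging.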
Seed AI is a hypothesized type of artificial intelligence capable of recursive self-improvement. Having improved itself, it would become better at improving itself, potentially leading to an exponential increase in intelligence. No such AI is known to exist, but it remains an active field of research. Seed AI is a significant part of some theories about the technological singularity: proponents believe that the development of seed AI will rapidly yield ever-smarter intelligence (via bootstrapping) and thus a new era.[16][17]
Bootstrapping is a resampling technique used to obtain estimates of summary statistics.
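For example, a percentile bootstrap confidence interval for a sample mean can be computed with nothing but the standard library (the data values and resample count are illustrative):

```python
# The statistical bootstrap: resample the data with replacement many times
# to estimate the sampling variability of a statistic (here, the mean).

import random
import statistics

def bootstrap_means(sample, n_resamples=1000, seed=0):
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(sample) for _ in sample]  # draw with replacement
        means.append(statistics.mean(resample))
    return means

sample = [2.1, 2.4, 2.2, 2.9, 2.5, 2.7, 2.3, 2.6]
means = sorted(bootstrap_means(sample))
low, high = means[25], means[974]  # simple 95% percentile confidence interval
```

The name fits: the data are used to assess their own variability, with no external distributional assumptions beyond i.i.d. sampling.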
Bootstrapping in business means starting a business without external help or working capital. Entrepreneurs in the startup development phase of their company survive through internal cash flow and are very cautious with their expenses.[18] Generally at the start of a venture, a small amount of money will be set aside for the bootstrap process.[19] Bootstrapping can also be a supplement for econometric models.[20] Bootstrapping was also expanded upon in the book Bootstrap Business by Richard Christiansen, the Harvard Business Review article The Art of Bootstrapping and the follow-up book The Origin and Evolution of New Businesses by Amar Bhide. There is also an entire bible written on how to properly bootstrap by Seth Godin.
Experts have noted that several common stages exist for bootstrapping a business venture:
There are many types of companies that are eligible for bootstrapping. Early-stage companies that do not necessarily require large influxes of capital (particularly from outside sources) qualify; this allows flexibility for the business and time to grow. Serial-entrepreneur companies could also reap the benefits of bootstrapping: these are organizations whose founder has money from the sale of previous companies that they can use to invest.
There are different methods of bootstrapping. Future business owners aspiring to use bootstrapping as way of launching their product or service often use the following methods:
Bootstrapping is often considered successful. Statistics provided by Fundera indicate that approximately 77% of small businesses rely on some sort of personal investment and/or savings to fund their startup ventures. The average small business venture requires approximately $10,000 in startup capital, with a third of small businesses launching with less than $5,000 bootstrapped.
Based on startup data presented by Entrepreneur.com, bootstrapping is more commonly used than other methods of funding: "0.91% of startups are funded by angel investors, while 0.05% are funded by VCs. In contrast, 57 percent of startups are funded by personal loans and credit, while 38 percent receive funding from family and friends."[21]
Some examples of successful entrepreneurs who have used bootstrapping to finance their businesses include serial entrepreneur Mark Cuban. He has publicly endorsed bootstrapping, claiming that "If you can start on your own … do it by [yourself] without having to go out and raise money." When asked why he believed this approach was most necessary, he replied, "I think the biggest mistake people make is once they have an idea and the goal of starting a business, they think they have to raise money. And once you raise money, that's not an accomplishment, that's an obligation" because "now, you're reporting to whoever you raised money from."[22]
Bootstrapped companies such as Apple Inc. (AAPL), eBay Inc. (EBAY) and Coca-Cola Co. have also claimed that they attribute some of their success to the fact that this method of funding enabled them to remain highly focused on a specific array of profitable products.
Startups can grow by reinvesting profits in their own growth if bootstrapping costs are low and return on investment is high. This financing approach allows owners to maintain control of their business and forces them to spend with discipline.[23] In addition, bootstrapping allows startups to focus on customers rather than investors, thereby increasing the likelihood of creating a profitable business. This leaves startups with a better exit strategy and greater returns.
Leveraged buyouts, or highly leveraged or "bootstrap" transactions, occur when an investor acquires a controlling interest in a company's equity and where a significant percentage of the purchase price is financed through leverage, i.e. borrowing by the acquired company.
Operation Bootstrap (Operación Manos a la Obra) refers to the ambitious projects that industrialized Puerto Rico in the mid-20th century.
Richard Dawkins in his book River Out of Eden[24] used the computer bootstrapping concept to explain how biological cells differentiate: "Different cells receive different combinations of chemicals, which switch on different combinations of genes, and some genes work to switch other genes on or off. And so the bootstrapping continues, until we have the full repertoire of different kinds of cells."
Bootstrapping analysis gives a way to judge the strength of support for clades on phylogenetic trees. A number is written by a node, which reflects the percentage of bootstrap trees which also resolve the clade at the endpoints of that branch.[25]
Bootstrapping is a rule preventing the admission of hearsay evidence in conspiracy cases.
Bootstrapping is a theory of language acquisition.
Whitworth's three plates method does not rely on other flat reference surfaces or precision instruments, and thus solves the problem of how to create an accurately flat surface from scratch.
Bootstrapping is the use of very general consistency criteria to determine the form of a quantum theory from some assumptions on the spectrum of particles or operators.
In tokamak fusion devices, bootstrapping refers to the process in which a bootstrap current is self-generated by the plasma, reducing or eliminating the need for an external current driver. Maximising the bootstrap current is a major goal of advanced tokamak designs.
Bootstrapping in inertial confinement fusion refers to the alpha particles produced in the fusion reaction providing further heating to the plasma. This heating leads to ignition and an overall energy gain.
Bootstrapping is a form of positive feedback in analog circuit design.
An electric power grid is almost never brought down intentionally. Generators and power stations are started and shut down as necessary. A typical power station requires power for start-up before it can generate power. This power is obtained from the grid, so if the entire grid is down, these stations cannot be started.
Therefore, to get a grid started, there must be at least a small number of power stations that can start entirely on their own. A black start is the process of restoring a power station to operation without relying on external power. In the absence of grid power, one or more black starts are used to bootstrap the grid.
A nuclear power plant always needs to have a way to remove decay heat, which is usually done with electrical cooling pumps. But in the rare case of a complete loss of electrical power, this can still be achieved by booting a turbine generator. As steam builds up in the steam generator, it can be used to power the turbine generator (initially with no oil pumps, circ water pumps, or condensation pumps). Once the turbine generator is producing electricity, the auxiliary pumps can be powered on, and the reactor cooling pumps can be run momentarily. Eventually the steam pressure will become insufficient to power the turbine generator, and the process can be shut down in reverse order. The process can be repeated until no longer needed. This can cause great damage to the turbine generator, but more importantly, it saves the nuclear reactor.
A Bootstrapping Server Function (BSF) is an intermediary element in cellular networks which provides application-independent functions for mutual authentication of user equipment and servers unknown to each other, and for 'bootstrapping' the exchange of secret session keys afterwards. The term 'bootstrapping' here relates to building a security relation with a previously unknown device first, and allowing the installation of security elements (keys) in the device and the BSF afterwards.
|
https://en.wikipedia.org/wiki/Bootstrapping
|
In probability theory and statistics, empirical likelihood (EL) is a nonparametric method for estimating the parameters of statistical models. It requires fewer assumptions about the error distribution while retaining some of the merits of likelihood-based inference. The estimation method requires that the data be independent and identically distributed (iid). It performs well even when the distribution is asymmetric or censored.[1] EL methods can also handle constraints and prior information on parameters. Art Owen pioneered work in this area with his 1988 paper.[2]
Given a set of $n$ i.i.d. realizations $y_i$ of random variables $Y_i$, the empirical distribution function is $\hat{F}(y) := \sum_{i=1}^{n} \pi_i I(Y_i < y)$, with the indicator function $I$ and the (normalized) weights $\pi_i$.
Then, the empirical likelihood is:[3]
whereδy{\displaystyle \delta y}is a small number (potentially the difference to the next smaller sample).
Empirical likelihood estimation can be augmented with side information by using further constraints (similar to thegeneralized estimating equationsapproach) for the empirical distribution function.
For example, a constraint like $E[h(Y;\theta)]=\int_{-\infty}^{\infty}h(y;\theta)\,dF=0$ can be incorporated using a Lagrange multiplier, which implies $\hat{E}[h(y;\theta)]=\sum_{i=1}^{n}h(y_{i};\theta)\pi_{i}=0$.
With similar constraints, we could also model correlation.
The empirical-likelihood method can also be employed for discrete distributions.[4] Let $p_{i}:=\hat{F}(y_{i})-\hat{F}(y_{i}-\delta y),\ i=1,\dots,n$, such that $p_{i}\geq 0$ and $\sum_{i=1}^{n}p_{i}=1$.
Then the empirical likelihood is again $L(p_{1},\dots,p_{n})=\prod_{i=1}^{n}p_{i}$.
Using the Lagrange multiplier method to maximize the logarithm of the empirical likelihood subject to the trivial normalization constraint, we find $p_{i}=1/n$ as the maximum. Therefore, $\hat{F}$ is the empirical distribution function.
EL estimates are calculated by maximizing the empirical likelihood function (see above) subject to constraints based on the estimating function and the trivial assumption that the probability weights of the likelihood function sum to 1.[5] This procedure is represented as:
$\max_{\pi_{1},\dots,\pi_{n}}\ \prod_{i=1}^{n}\pi_{i}$
subject to the constraints
$\sum_{i=1}^{n}\pi_{i}=1\quad\text{and}\quad\sum_{i=1}^{n}\pi_{i}\,h(y_{i};\theta)=0.$
The value of the parameter $\theta$ can be found by solving the Lagrangian function.
There is a clear analogy between this maximization problem and the one solved formaximum entropy.
The parameters $\pi_{i}$ are nuisance parameters.
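The constrained maximization above can be sketched numerically for the simplest estimating function, the mean constraint $h(y;\theta)=y-\theta$. The function names below are hypothetical; the solver finds the dual Lagrange multiplier by bisection, which gives the optimal weights $\pi_i = 1/\{n(1+\lambda(y_i-\theta))\}$:

```python
import numpy as np

def el_weights(x, mu, tol=1e-12):
    """Maximize prod(p_i) subject to sum(p_i)=1 and sum(p_i*(x_i-mu))=0.

    The dual solution is p_i = 1 / (n * (1 + lam*(x_i - mu))), where the
    Lagrange multiplier lam solves g(lam) = sum((x_i-mu)/(1+lam*(x_i-mu))) = 0.
    g is strictly decreasing, so bisection on the feasible interval works.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - mu
    if not (d.min() < 0 < d.max()):
        raise ValueError("mu must lie inside the convex hull of the data")
    # lam must keep every 1 + lam*d_i strictly positive
    lo = (-1 + 1e-10) / d.max()
    hi = (-1 + 1e-10) / d.min()
    g = lambda lam: np.sum(d / (1 + lam * d))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 1 / (n * (1 + lam * d))

def neg2_log_elr(x, mu):
    """-2 log R(mu) = -2 * sum(log(n * p_i)); zero at the sample mean."""
    p = el_weights(x, mu)
    return -2 * np.sum(np.log(len(x) * p))
```

At the sample mean the multiplier is zero and all weights collapse to $1/n$, recovering the empirical distribution function, as derived above.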
An empirical likelihood ratio function is defined and used to obtain confidence intervals for a parameter of interest $\theta$, similar to parametric likelihood ratio confidence intervals.[7][8] Let $L(F)$ be the empirical likelihood of the function $F$; then the ELR is:
$R(F)=L(F)/L(F_{n}).$
Consider sets of the form
$C=\{T(F)\mid R(F)\geq r\}.$
Under such conditions, a test of $T(F)=t$ rejects when $t$ does not belong to $C$, that is, when no distribution $F$ with $T(F)=t$ has likelihood $L(F)\geq rL(F_{n})$.
The central result is for the mean of $X$. Clearly, some restrictions on $F$ are needed, or else $C=\mathbb{R}^{p}$ whenever $r<1$. To see this, let
$F=\epsilon\delta_{x}+(1-\epsilon)F_{n}.$
If $\epsilon$ is small enough and $\epsilon>0$, then $R(F)\geq r$. But then, as $x$ ranges through $\mathbb{R}^{p}$, so does the mean of $F$, tracing out $C=\mathbb{R}^{p}$. The problem can be solved by restricting to distributions $F$ that are supported on a bounded set. It turns out to be possible to restrict attention to distributions with support in the sample, in other words, to distributions $F\ll F_{n}$. Such a method is convenient, since the statistician might not be willing to specify a bounded support for $F$, and since the restriction converts the construction of $C$ into a finite-dimensional problem.
The use of empirical likelihood is not limited to confidence intervals. In efficient quantile regression, an EL-based categorization[9] procedure helps determine the shape of the true discrete distribution at level p, and also provides a way of formulating a consistent estimator. In addition, EL can be used in place of parametric likelihood to form model selection criteria.[10] Empirical likelihood can naturally be applied in survival analysis[11] or regression problems.[12]
|
https://en.wikipedia.org/wiki/Empirical_likelihood
|
In statistics, imputation is the process of replacing missing data with substituted values. When substituting for a data point, it is known as "unit imputation"; when substituting for a component of a data point, it is known as "item imputation". There are three main problems that missing data causes: missing data can introduce a substantial amount of bias, make the handling and analysis of the data more arduous, and create reductions in efficiency.[1] Because missing data can create problems for analyzing data, imputation is seen as a way to avoid the pitfalls involved with listwise deletion of cases that have missing values. That is to say, when one or more values are missing for a case, most statistical packages default to discarding any case that has a missing value, which may introduce bias or affect the representativeness of the results. Imputation preserves all cases by replacing missing data with an estimated value based on other available information. Once all missing values have been imputed, the data set can then be analysed using standard techniques for complete data.[2] There have been many theories embraced by scientists to account for missing data, but the majority of them introduce bias. A few of the well-known attempts to deal with missing data include: hot deck and cold deck imputation; listwise and pairwise deletion; mean imputation; non-negative matrix factorization; regression imputation; last observation carried forward; stochastic imputation; and multiple imputation.
By far the most common means of dealing with missing data is listwise deletion (also known as complete case analysis), in which all cases with a missing value are deleted. If the data are missing completely at random, then listwise deletion does not add any bias, but it does decrease the power of the analysis by decreasing the effective sample size. For example, if 1000 cases are collected but 80 have missing values, the effective sample size after listwise deletion is 920. If the cases are not missing completely at random, then listwise deletion will introduce bias, because the sub-sample of cases represented by the missing data is not representative of the original sample (and if the original sample was itself a representative sample of a population, the complete cases are not representative of that population either).[3] While listwise deletion is unbiased when the missing data are missing completely at random, this is rarely the case in actuality.[4]
Pairwise deletion (or "available case analysis") involves deleting a case when it is missing a variable required for a particular analysis, but including that case in analyses for which all required variables are present. When pairwise deletion is used, the total N for analysis will not be consistent across parameter estimations. Because of the incomplete N values at some points in time, while still maintaining complete case comparison for other parameters, pairwise deletion can introduce impossible mathematical situations such as correlations that are over 100%.[5]
The one advantage complete case deletion has over other methods is that it is straightforward and easy to implement. This is a large reason why complete case is the most popular method of handling missing data in spite of the many disadvantages it has.
A once-common method of imputation was hot-deck imputation, where a missing value was imputed from a randomly selected similar record. The term "hot deck" dates back to the storage of data on punched cards, and indicates that the information donors come from the same dataset as the recipients. The stack of cards was "hot" because it was currently being processed.
One form of hot-deck imputation is called "last observation carried forward" (or LOCF for short), which involves sorting a dataset according to any of a number of variables, thus creating an ordered dataset. The technique then finds the first missing value and uses the cell value immediately prior to the data that are missing to impute the missing value. The process is repeated for the next cell with a missing value until all missing values have been imputed. In the common scenario in which the cases are repeated measurements of a variable for a person or other entity, this represents the belief that if a measurement is missing, the best guess is that it has not changed from the last time it was measured. This method is known to increase the risk of bias and potentially false conclusions. For this reason LOCF is not recommended for use.[6]
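The carry-forward rule can be sketched in a few lines. This is a minimal illustration (not any particular package's implementation) using `None` as the missing-value marker:

```python
def locf(values):
    """Last observation carried forward for a sequence in which None
    marks missing values. Each missing entry is replaced with the most
    recent observed value; leading None entries stay None, because
    nothing has yet been observed to carry forward."""
    out, last = [], None
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

filled = locf([None, 1, None, None, 4, None])
# [None, 1, 1, 1, 4, 4]
```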
Cold-deck imputation, by contrast, selects donors from another dataset. Due to advances in computer power, more sophisticated methods of imputation have generally superseded the original random and sorted hot deck imputation techniques. It is a method of replacing with response values of similar items in past surveys. It is available in surveys that measure time intervals.
Another imputation technique involves replacing any missing value with the mean of that variable for all other cases, which has the benefit of not changing the sample mean for that variable. However, mean imputation attenuates any correlations involving the variable(s) that are imputed. This is because, in cases with imputation, there is guaranteed to be no relationship between the imputed variable and any other measured variables. Thus, mean imputation has some attractive properties for univariate analysis but becomes problematic for multivariate analysis.
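A minimal sketch of mean imputation, using `None` as the missing marker, shows both properties mentioned above: the sample mean is preserved, while the spread of the imputed variable is understated because every imputed point sits exactly at the mean:

```python
import statistics

def mean_impute(values):
    """Replace None entries with the mean of the observed entries."""
    observed = [v for v in values if v is not None]
    m = statistics.fmean(observed)
    return [m if v is None else v for v in values]

data = [2.0, 4.0, None, 6.0, None]
filled = mean_impute(data)   # missing entries become 4.0
# The mean of `filled` equals the mean of the observed values, but
# its variance is smaller than that of the observed values alone,
# since the imputed points contribute no spread of their own.
```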
Mean imputation can be carried out within classes (e.g. categories such as gender), and can be expressed as $\hat{y}_{i}=\bar{y}_{h}$, where $\hat{y}_{i}$ is the imputed value for record $i$ and $\bar{y}_{h}$ is the sample mean of respondent data within some class $h$. This is a special case of generalized regression imputation:
$\hat{y}_{mi}=b_{r0}+\sum_{j}b_{rj}z_{mij}+\hat{e}_{mi}$
Here the values $b_{r0},b_{rj}$ are estimated from regressing $y$ on $x$ in non-imputed data, $z$ is a dummy variable for class membership, and data are split into respondent ($r$) and missing ($m$) parts.[7][8]
Non-negative matrix factorization (NMF) can take missing data while minimizing its cost function, rather than treating these missing data as zeros that could introduce biases.[9] This makes it a mathematically proven method for data imputation. NMF can ignore missing data in the cost function, and the impact from missing data can be as small as a second-order effect.
Regression imputation has the opposite problem of mean imputation. Aregression modelis estimated to predict observed values of a variable based on other variables, and that model is then used to impute values in cases where the value of that variable is missing. In other words, available information for complete and incomplete cases is used to predict the value of a specific variable. Fitted values from the regression model are then used to impute the missing values. The problem is that the imputed data do not have anerror termincluded in their estimation, thus the estimates fit perfectly along the regression line without any residualvariance. This causes relationships to be over-identified and suggest greater precision in the imputed values than is warranted. The regression model predicts the most likely value of missing data but does not supply uncertainty about that value.
Stochastic regression was a fairly successful attempt to correct the lack of an error term in regression imputation by adding the average regression variance to the regression imputations to introduce error. Stochastic regression shows much less bias than the above-mentioned techniques, but it still misses one thing: if data are imputed, then intuitively one would think that more noise should be introduced to the problem than simple residual variance.[5]
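The two regression-based variants above can be contrasted in one sketch. This is a simplified illustration (the function name and setup are hypothetical): deterministic mode places imputations exactly on the fitted line, which removes residual variance; stochastic mode adds noise drawn with the residual standard deviation to put some of it back:

```python
import numpy as np

rng = np.random.default_rng(0)

def regression_impute(x, y, stochastic=False, rng=rng):
    """Impute missing y (marked as NaN) from a least-squares fit y ~ a + b*x.

    stochastic=False: imputed values lie exactly on the regression line.
    stochastic=True:  Gaussian noise with the residual standard deviation
                      is added, restoring the variance that deterministic
                      regression imputation removes."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    obs = ~np.isnan(y)
    b, a = np.polyfit(x[obs], y[obs], 1)   # slope, intercept
    fitted = a + b * x
    resid_sd = np.std(y[obs] - fitted[obs], ddof=2)
    y_imp = y.copy()
    miss = ~obs
    noise = rng.normal(0.0, resid_sd, miss.sum()) if stochastic else 0.0
    y_imp[miss] = fitted[miss] + noise
    return y_imp
```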
In order to deal with the problem of increased noise due to imputation, Rubin (1987)[10] developed a method for averaging the outcomes across multiple imputed data sets to account for this. All multiple imputation methods follow three steps:[3] imputing the missing values several times to create multiple completed data sets, analysing each completed data set separately, and pooling the results into a single combined estimate.
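The pooling of per-imputation results is conventionally done with Rubin's combining rules: the pooled estimate is the mean of the per-imputation estimates, and the total variance is the within-imputation variance plus the between-imputation variance inflated for a finite number of imputations. A minimal sketch (function name hypothetical):

```python
import statistics

def pool_rubin(estimates, variances):
    """Combine per-imputation results with Rubin's rules.

    estimates: the parameter estimate from each of the m imputed data sets
    variances: the squared standard error from each analysis
    Returns (pooled estimate, total variance), with total variance
    W + (1 + 1/m) * B: within-imputation variance W plus
    between-imputation variance B inflated for finite m."""
    m = len(estimates)
    q_bar = statistics.fmean(estimates)
    w = statistics.fmean(variances)             # within-imputation variance
    b = statistics.variance(estimates, q_bar)   # between-imputation variance
    return q_bar, w + (1 + 1 / m) * b
```

If the m analyses agree perfectly, B is zero and the total variance reduces to the ordinary within-imputation variance; disagreement between imputations widens the pooled standard error, which is exactly the uncertainty that single imputation ignores.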
Multiple imputation can be used in cases where the data are missing completely at random, missing at random, or missing not at random, though it can be biased in the latter case.[14] One approach is multiple imputation by chained equations (MICE), also known as "fully conditional specification" and "sequential regression multiple imputation".[15] MICE is designed for data that are missing at random, though there is simulation evidence to suggest that with a sufficient number of auxiliary variables it can also work on data that are missing not at random. However, MICE can suffer from performance problems when the number of observations is large and the data have complex features, such as nonlinearities and high dimensionality.
More recent approaches to multiple imputation use machine learning techniques to improve its performance. MIDAS (Multiple Imputation with Denoising Autoencoders), for instance, uses denoising autoencoders, a type of unsupervised neural network, to learn fine-grained latent representations of the observed data.[16] MIDAS has been shown to provide accuracy and efficiency advantages over traditional multiple imputation strategies.
As alluded to in the previous section, single imputation does not take into account the uncertainty in the imputations: after imputation, the data are treated as if they were the actual real values. Neglecting this uncertainty can lead to overly precise results and errors in any conclusions drawn.[17] By imputing multiple times, multiple imputation accounts for the uncertainty and the range of values that the true value could have taken. As expected, the combination of both uncertainty estimation and deep learning for imputation is among the best strategies and has been used to model heterogeneous drug discovery data.[18][19]
Additionally, while single imputation and complete case analysis are easier to implement, multiple imputation is not very difficult to implement either. There is a wide range of packages in different statistical software that readily perform multiple imputation. For example, the MICE package allows users in R to perform multiple imputation using the MICE method.[20] MIDAS can be implemented in R with the rMIDAS package and in Python with the MIDASpy package.[16]
Whereas matrix/tensor factorization or decomposition algorithms predominantly use global structure for imputing data, algorithms like piecewise linear interpolation and spline regression use time-localized trends for estimating missing information in time series. While the former is more effective for estimating larger missing gaps, the latter works well only for short missing gaps. The SPRINT (Spline-powered Informed Tensor Decomposition) algorithm proposed in the literature capitalizes on the strengths of the two and combines them in an iterative framework for enhanced estimation of missing information; it is especially effective for datasets that have both long and short missing gaps.[21]
|
https://en.wikipedia.org/wiki/Imputation_(statistics)
|
In statistics and psychometrics, reliability is the overall consistency of a measure.[1] A measure is said to have a high reliability if it produces similar results under consistent conditions:
It is the characteristic of a set of test scores that relates to the amount of random error from the measurement process that might be embedded in the scores. Scores that are highly reliable are precise, reproducible, and consistent from one testing occasion to another. That is, if the testing process were repeated with a group of test takers, essentially the same results would be obtained. Various kinds of reliability coefficients, with values ranging between 0.00 (much error) and 1.00 (no error), are usually used to indicate the amount of error in the scores.[2]
For example, measurements of people's height and weight are often extremely reliable.[3][4]
There are several general classes of reliability estimates:
Reliability does not implyvalidity. That is, a reliable measure that is measuring something consistently is not necessarily measuring what is supposed to be measured. For example, while there are many reliable tests of specific abilities, not all of them would be valid for predicting, say, job performance.
While reliability does not implyvalidity, reliability does place a limit on the overall validity of a test. A test that is not perfectly reliable cannot be perfectly valid, either as a means of measuring attributes of a person or as a means of predicting scores on a criterion. While a reliable test may provide useful valid information, a test that is not reliable cannot possibly be valid.[7]
For example, if a set ofweighing scalesconsistently measured the weight of an object as 500 grams over the true weight, then the scale would be very reliable, but it would not be valid (as the returned weight is not the true weight). For the scale to be valid, it should return the true weight of an object. This example demonstrates that a perfectly reliable measure is not necessarily valid, but that a valid measure necessarily must be reliable.
In practice, testing measures are never perfectly consistent. Theories of test reliability have been developed to estimate the effects of inconsistency on the accuracy of measurement. The basic starting point for almost all theories of test reliability is the idea that test scores reflect the influence of two sorts of factors:[7]
These factors include:[7]
The goal of estimating reliability is to determine how much of the variability in test scores is due tomeasurement errorsand how much is due to variability intrue scores(true value).[7]
Atrue scoreis the replicable feature of the concept being measured. It is the part of the observed score that would recur across different measurement occasions in the absence of error.
Errors of measurement are composed of both random error and systematic error. They represent the discrepancies between scores obtained on tests and the corresponding true scores.
This conceptual breakdown is typically represented by the simple equation:
$X=T+E,$ where $X$ is the observed test score, $T$ is the true score, and $E$ is the measurement error.
The goal of reliability theory is to estimate errors in measurement and to suggest ways of improving tests so that errors are minimized.
The central assumption of reliability theory is that measurement errors are essentially random. This does not mean that errors arise from random processes. For any individual, an error in measurement is not a completely random event. However, across a large number of individuals, the causes of measurement error are assumed to be so varied that measurement errors act as random variables.[7]
If errors have the essential characteristics of random variables, then it is reasonable to assume that errors are equally likely to be positive or negative, and that they are not correlated with true scores or with errors on other tests.
It is assumed that:[8]
Reliability theory shows that the variance of obtained scores is simply the sum of the variance oftrue scoresplus the variance oferrors of measurement.[7]
This equation suggests that test scores vary as the result of two factors:
The reliability coefficient $\rho_{xx'}$ provides an index of the relative influence of true and error scores on attained test scores. In its general form, the reliability coefficient is defined as the ratio of true score variance to the total variance of test scores, or, equivalently, one minus the ratio of the variance of the error score to the variance of the observed score:
$\rho_{xx'}=\sigma_{T}^{2}/\sigma_{X}^{2}=1-\sigma_{E}^{2}/\sigma_{X}^{2}.$
Unfortunately, there is no way to directly observe or calculate the true score, so a variety of methods are used to estimate the reliability of a test.
Some examples of the methods to estimate reliability includetest-retest reliability,internal consistencyreliability, andparallel-test reliability. Each method comes at the problem of figuring out the source of error in the test somewhat differently.
It was well known to classical test theorists that measurement precision is not uniform across the scale of measurement. Tests tend to distinguish better for test-takers with moderate trait levels and worse among high- and low-scoring test-takers.Item response theoryextends the concept of reliability from a single index to a function called theinformation function. The IRT information function is the inverse of the conditional observed score standard error at any given test score.
The goal of estimating reliability is to determine how much of the variability in test scores is due to errors in measurement and how much is due to variability in true scores.
Four practical strategies have been developed that provide workable methods of estimating test reliability:[7]
Thetest-retest reliabilitymethod directly assesses the degree to which test scores are consistent from one test administration to the next. It involves:
The correlation between scores on the first test and the scores on the retest is used to estimate the reliability of the test using thePearson product-moment correlation coefficient: see alsoitem-total correlation.
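A small self-contained sketch of the test-retest estimate, with hypothetical scores for five test takers on two administrations:

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for the same five test takers on two occasions;
# their correlation is the test-retest reliability estimate (r ≈ 0.98 here).
test_retest_r = pearson([10, 12, 14, 18, 20], [11, 12, 15, 17, 21])
```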
The key to this method is the development of alternate test forms that are equivalent in terms of content, response processes, and statistical characteristics. For example, alternate forms exist for several tests of general intelligence, and these tests are generally seen as equivalent.[7]
With the parallel test model it is possible to develop two forms of a test that are equivalent in the sense that a person's true score on form A would be identical to their true score on form B. If both forms of the test were administered to a number of people, differences between scores on form A and form B may be due to errors in measurement only.[7]It involves:
The correlation between scores on the two alternate forms is used to estimate the reliability of the test.
This method provides a partial solution to many of the problems inherent in the test-retest reliability method. For example, since the two forms of the test are different, the carryover effect is less of a problem. Reactivity effects are also partially controlled, although taking the first test may change responses to the second test. However, it is reasonable to assume that the effect will not be as strong with alternate forms of the test as with two administrations of the same test.[7]
However, this technique has its disadvantages:
This method treats the two halves of a measure as alternate forms. It provides a simple solution to the problem that the parallel-forms method faces: the difficulty in developing alternate forms.[7]It involves:
The correlation between these two split halves is used in estimating the reliability of the test. This half-test reliability estimate is then stepped up to the full test length using the Spearman–Brown prediction formula.
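The Spearman–Brown step-up for two halves has a simple closed form, $r_{\text{full}} = 2r_{\text{half}}/(1+r_{\text{half}})$, sketched here:

```python
def spearman_brown(r_half):
    """Spearman-Brown step-up: predicted reliability of the full-length
    test from the correlation r_half between its two halves."""
    return 2 * r_half / (1 + r_half)

# A half-test correlation of 0.70 implies full-test reliability of
# 1.4 / 1.7, roughly 0.82.
full_test_reliability = spearman_brown(0.70)
```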
There are several ways of splitting a test to estimate reliability. For example, a 40-item vocabulary test could be split into two subtests, the first one made up of items 1 through 20 and the second made up of items 21 through 40. However, the responses from the first half may be systematically different from responses in the second half due to an increase in item difficulty and fatigue.[7]
In splitting a test, the two halves would need to be as similar as possible, both in terms of their content and in terms of the probable state of the respondent. The simplest method is to adopt an odd-even split, in which the odd-numbered items form one half of the test and the even-numbered items form the other. This arrangement guarantees that each half will contain an equal number of items from the beginning, middle, and end of the original test.[7]
Internal consistency assesses the consistency of results across items within a test. The most common internal consistency measure is Cronbach's alpha, which is usually interpreted as the mean of all possible split-half coefficients.[9] Cronbach's alpha is a generalization of an earlier form of estimating internal consistency, the Kuder–Richardson Formula 20.[9] Although it is the most commonly used measure, there are some misconceptions regarding Cronbach's alpha.[10][11]
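Cronbach's alpha for a respondents-by-items score table is $\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_j \sigma^2_{\text{item }j}}{\sigma^2_{\text{total}}}\right)$; a minimal sketch using population variances:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items score table:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores),
    with k items and population (ddof=0) variances throughout."""
    k = len(item_scores[0])

    def pvar(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [pvar([row[j] for row in item_scores]) for j in range(k)]
    total_var = pvar([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When every item gives identical scores (perfect internal consistency), the item variances are perfectly redundant with the total-score variance and alpha equals 1.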
These measures of reliability differ in their sensitivity to different sources of error and so need not be equal. Also, reliability is a property of the scores of a measure rather than the measure itself, and reliability estimates are thus said to be sample dependent. Reliability estimates from one sample might differ from those of a second sample (beyond what might be expected due to sampling variations) if the second sample is drawn from a different population, because the true variability is different in this second population. (This is true of measures of all types—yardsticks might measure houses well yet have poor reliability when used to measure the lengths of insects.)
Reliability may be improved by clarity of expression (for written assessments), lengthening the measure,[9] and other informal means. However, formal psychometric analysis, called item analysis, is considered the most effective way to increase reliability. This analysis consists of computation of item difficulties and item discrimination indices, the latter index involving computation of correlations between the items and the sum of the item scores of the entire test. If items that are too difficult, too easy, and/or have near-zero or negative discrimination are replaced with better items, the reliability of the measure will increase.
|
https://en.wikipedia.org/wiki/Reliability_(statistics)
|
Reproducibility, closely related toreplicabilityandrepeatability, is a major principle underpinning thescientific method. For the findings of a study to be reproducible means that results obtained by anexperimentor anobservational studyor in astatistical analysisof adata setshould be achieved again with a high degree of reliability when the study is replicated. There are different kinds of replication[1]but typically replication studies involve different researchers using the same methodology. Only after one or several such successful replications should a result be recognized as scientific knowledge.
There are different kinds of replication studies, each serving a unique role in scientific validation:
Direct Replication – The exact experiment or study is repeated under the same conditions to verify the original findings.
Conceptual Replication – A study tests the same hypothesis but uses a different methodology, materials, or population to see if the results hold in different contexts.
Computational Reproducibility – In data science and computational research, reproducibility requires making all datasets, code, and algorithms openly available so others can replicate the analysis and obtain the same results.
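The computational case can be illustrated with a minimal, hypothetical sketch: if the only nondeterminism in an analysis is its random number generator, then fixing the seed makes every run bit-identical, which is the core requirement of computational reproducibility:

```python
import random

def analysis(seed):
    """A toy 'statistical analysis' whose only source of nondeterminism
    is the random number generator; fixing the seed makes the result
    exactly reproducible run-to-run."""
    rng = random.Random(seed)            # local, seeded generator
    sample = [rng.gauss(0, 1) for _ in range(1000)]
    return sum(sample) / len(sample)

# Two runs with the same seed give bit-identical results.
assert analysis(42) == analysis(42)
```

In practice this is paired with publishing the exact code, data, and software versions, since seeding alone does not guarantee identical results across library versions or platforms.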
Reproducibility serves several critical purposes in science:
Verification of Results – Confirms that findings are not due to random chance or errors.
Building Trust in Research – Scientists, policymakers, and the public rely on reproducible studies to make informed decisions.
Advancing Knowledge – Establishes a strong foundation for future research by validating existing theories.
Avoiding Bias and Fraud – Helps detect false positives, publication bias, and data manipulation that could mislead the scientific community.
Challenges in Achieving Reproducibility
Despite its importance, many studies fail reproducibility tests, leading to what is known as the replication crisis in fields like psychology, medicine, and social sciences. Some key challenges include:
Insufficient Data Sharing – Many researchers do not make raw data, code, or methodology openly available, making replication difficult.
Small Sample Sizes – Studies with limited sample sizes may show results that do not generalize to larger populations.
Publication Bias – Journals tend to publish positive findings rather than null or negative results, leading to an incomplete scientific record.
Complex Experimental Conditions – In some cases, small variations in laboratory settings, equipment, or researcher expertise can affect outcomes, making exact replication difficult.
Medical Research – Reproducibility ensures that clinical trials and drug effectiveness studies produce reliable results before treatments reach the public.
AI and Machine Learning – Scientists emphasize reproducibility in AI by requiring open-source models and datasets to validate algorithm performance.
Climate Science – Climate models must be reproducible across different datasets and simulations to ensure accurate predictions of global warming.
Pharmaceutical Development – Drug discovery relies on reproducing experiments across multiple labs to ensure safety and efficacy.
To enhance reproducibility, researchers and institutions can adopt several best practices:
Open Data and Code – Making datasets and computational methods publicly available ensures that others can verify results.
Registered Reports – Some scientific journals now accept studies based on pre-registered research plans, reducing bias.
Standardized Methods – Using well-documented, standardized experimental protocols helps ensure consistent results.
Independent Replication Studies – Funding agencies and journals should prioritize replication studies to strengthen scientific integrity.
With a narrower scope, reproducibility has been defined in computational sciences as having the following quality: the results should be documented by making all data and code available in such a way that the computations can be executed again with identical results.
In recent decades, there has been a rising concern that many published scientific results fail the test of reproducibility, evoking a reproducibility or replication crisis.
The first to stress the importance of reproducibility in science was the Anglo-Irish chemist Robert Boyle, in England in the 17th century. Boyle's air pump was designed to generate and study vacuum, which at the time was a very controversial concept. Indeed, distinguished philosophers such as René Descartes and Thomas Hobbes denied the very possibility of vacuum's existence. Historians of science Steven Shapin and Simon Schaffer, in their 1985 book Leviathan and the Air-Pump, describe the debate between Boyle and Hobbes, ostensibly over the nature of vacuum, as fundamentally an argument about how useful knowledge should be gained. Boyle, a pioneer of the experimental method, maintained that the foundations of knowledge should be constituted by experimentally produced facts, which can be made believable to a scientific community by their reproducibility. By repeating the same experiment over and over again, Boyle argued, the certainty of fact will emerge.
The air pump, which in the 17th century was a complicated and expensive apparatus to build, also led to one of the first documented disputes over the reproducibility of a particular scientific phenomenon. In the 1660s, the Dutch scientist Christiaan Huygens built his own air pump in Amsterdam, the first one outside the direct management of Boyle and his assistant at the time, Robert Hooke. Huygens reported an effect he termed "anomalous suspension", in which water appeared to levitate in a glass jar inside his air pump (in fact suspended over an air bubble), but Boyle and Hooke could not replicate this phenomenon in their own pumps. As Shapin and Schaffer describe, "it became clear that unless the phenomenon could be produced in England with one of the two pumps available, then no one in England would accept the claims Huygens had made, or his competence in working the pump". Huygens was finally invited to England in 1663, and under his personal guidance Hooke was able to replicate anomalous suspension of water. Following this, Huygens was elected a Foreign Member of the Royal Society. However, Shapin and Schaffer also note that "the accomplishment of replication was dependent on contingent acts of judgment. One cannot write down a formula saying when replication was or was not achieved".[2]
The philosopher of science Karl Popper noted briefly in his famous 1934 book The Logic of Scientific Discovery that "non-reproducible single occurrences are of no significance to science".[3] The statistician Ronald Fisher wrote in his 1935 book The Design of Experiments, which set the foundations for the modern scientific practice of hypothesis testing and statistical significance, that "we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us statistically significant results".[4] Such assertions express a common dogma in modern science that reproducibility is a necessary condition (although not necessarily sufficient) for establishing a scientific fact, and in practice for establishing scientific authority in any field of knowledge. However, as noted above by Shapin and Schaffer, this dogma is not quantitatively well formulated, unlike statistical significance for instance, and therefore it is not explicitly established how many times a fact must be replicated to be considered reproducible.
Replicability and repeatability are related terms broadly or loosely synonymous with reproducibility (for example, among the general public), but they are often usefully differentiated in more precise senses, as follows.
Two major steps are naturally distinguished in connection with reproducibility of experimental or observational studies:
When new data are obtained in the attempt to achieve it, the term replicability is often used, and the new study is a replication or replicate of the original one. When the same results are obtained by analyzing the data set of the original study again with the same procedures, many authors use the term reproducibility in a narrow, technical sense that comes from its use in computational research. Repeatability is related to the repetition of the experiment within the same study by the same researchers.
Reproducibility in the original, wide sense is only acknowledged if a replication performed by an independent researcher team is successful.
The terms reproducibility and replicability sometimes appear even in the scientific literature with reversed meaning,[5][6]as different research fields settled on their own definitions for the same terms.[7]
In chemistry, the terms reproducibility and repeatability are used with a specific quantitative meaning.[8] In inter-laboratory experiments, a concentration or other quantity of a chemical substance is measured repeatedly in different laboratories to assess the variability of the measurements. Then, the standard deviation of the difference between two values obtained within the same laboratory is called repeatability. The standard deviation for the difference between two measurements from different laboratories is called reproducibility.[9] These measures are related to the more general concept of variance components in metrology.
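As a sketch of how these quantities can be computed, the following uses hypothetical inter-laboratory data and the usual one-way variance-components layout: the repeatability variance is the pooled within-laboratory variance, and the reproducibility variance adds the between-laboratory component.

```python
import statistics

# Hypothetical inter-laboratory data: each lab measures the same
# substance several times (values are illustrative, not real).
labs = {
    "lab_A": [10.1, 10.3, 10.2],
    "lab_B": [10.6, 10.8, 10.7],
    "lab_C": [9.9, 10.0, 10.1],
}

n = 3            # replicates per lab
k = len(labs)    # number of laboratories

# Repeatability variance: pooled within-laboratory variance.
within_vars = [statistics.variance(v) for v in labs.values()]
s_r2 = sum(within_vars) / k

# Between-laboratory variance component from the lab means
# (one-way random-effects layout).
lab_means = [statistics.mean(v) for v in labs.values()]
s_L2 = max(0.0, statistics.variance(lab_means) - s_r2 / n)

# Reproducibility variance combines both sources of variation.
s_R2 = s_r2 + s_L2

print(f"repeatability sd  s_r = {s_r2 ** 0.5:.3f}")
print(f"reproducibility sd s_R = {s_R2 ** 0.5:.3f}")
```

The reproducibility standard deviation is necessarily at least as large as the repeatability standard deviation, since it includes the between-laboratory spread on top of the within-laboratory spread.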
The term reproducible research refers to the idea that scientific results should be documented in such a way that their deduction is fully transparent. This requires a detailed description of the methods used to obtain the data[10][11] and making the full dataset and the code to calculate the results easily accessible.[12][13][14][15][16][17] This is the essential part of open science.
To make any research project computationally reproducible, general practice involves all data and files being clearly separated, labelled, and documented. All operations should be fully documented and automated as much as practicable, avoiding manual intervention where feasible. The workflow should be designed as a sequence of smaller steps that are combined so that the intermediate outputs from one step directly feed as inputs into the next step. Version control should be used as it lets the history of the project be easily reviewed and allows for the documenting and tracking of changes in a transparent manner.
A basic workflow for reproducible research involves data acquisition, data processing and data analysis. Data acquisition primarily consists of obtaining primary data from a primary source such as surveys, field observations, experimental research, or obtaining data from an existing source. Data processing involves the processing and review of the raw data collected in the first stage, and includes data entry, data manipulation and filtering and may be done using software. The data should be digitized and prepared for data analysis. Data may be analysed with the use of software to interpret or visualise statistics or data to produce the desired results of the research such as quantitative results including figures and tables. The use of software and automation enhances the reproducibility of research methods.[18]
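The three-stage workflow above can be sketched as a small, fully automated script. The file names and data below are hypothetical; the point is only that each step writes an intermediate file that feeds the next, so rerunning the script regenerates the final result from scratch.

```python
import csv
import statistics
from pathlib import Path

def acquire(raw_path: Path) -> None:
    """Step 1: obtain primary data (here, synthetic survey responses)."""
    rows = [("respondent", "score"), (1, 4), (2, 5), (3, 3), (4, 5)]
    with raw_path.open("w", newline="") as f:
        csv.writer(f).writerows(rows)

def process(raw_path: Path, clean_path: Path) -> None:
    """Step 2: filter and clean the raw data (drop scores outside 1-5)."""
    with raw_path.open() as f:
        rows = [r for r in csv.DictReader(f) if 1 <= int(r["score"]) <= 5]
    with clean_path.open("w", newline="") as f:
        w = csv.DictWriter(f, fieldnames=["respondent", "score"])
        w.writeheader()
        w.writerows(rows)

def analyse(clean_path: Path) -> float:
    """Step 3: compute the quantitative result (mean score)."""
    with clean_path.open() as f:
        return statistics.mean(int(r["score"]) for r in csv.DictReader(f))

raw, clean = Path("raw.csv"), Path("clean.csv")
acquire(raw)
process(raw, clean)
print("mean score:", analyse(clean))
```

Because every step is scripted and no manual intervention is needed, anyone with the script can reproduce the analysis exactly, which is the property the workflow description aims for.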
There are systems that facilitate such documentation, like the R Markdown language[19] or the Jupyter notebook.[20][21][22] The Open Science Framework provides a platform and useful tools to support reproducible research.
Psychology has seen a renewal of internal concerns about irreproducible results (see the entry on the replicability crisis for empirical results on success rates of replications). Researchers showed in a 2006 study that, of 141 authors of empirical articles published by the American Psychological Association (APA), 103 (73%) did not respond with their data over a six-month period.[23] In a follow-up study published in 2015, it was found that 246 out of 394 contacted authors of papers in APA journals did not share their data upon request (62%).[24] In a 2012 paper, it was suggested that researchers should publish data along with their works, and a dataset was released alongside as a demonstration.[25] In 2017, an article published in Scientific Data suggested that this may not be sufficient and that the whole analysis context should be disclosed.[26]
In economics, concerns have been raised in relation to the credibility and reliability of published research. In other sciences, reproducibility is regarded as fundamental and is often a prerequisite to research being published; in economics, however, it is not seen as a top priority. Most peer-reviewed economics journals do not take any substantive measures to ensure that published results are reproducible, although the top economics journals have been moving to adopt mandatory data and code archives.[27] There are few or no incentives for researchers to share their data, and authors would have to bear the costs of compiling data into reusable forms. Economic research is often not reproducible as only a portion of journals have adequate disclosure policies for datasets and program code, and even where such policies exist, authors frequently do not comply with them or they are not enforced by the publisher. A study of 599 articles published in 37 peer-reviewed journals revealed that while some journals have achieved significant compliance rates, a significant portion have only partially complied, or not complied at all. At the article level, the average compliance rate was 47.5%; at the journal level, the average compliance rate was 38%, ranging from 13% to 99%.[28]
A 2018 study published in the journal PLOS ONE found that 14.4% of a sample of public health statistics researchers had shared their data or code or both.[29]
There have been initiatives to improve reporting and hence reproducibility in the medical literature for many years, beginning with the CONSORT initiative, which is now part of a wider initiative, the EQUATOR Network.
This group has recently turned its attention to how better reporting might reduce waste in research,[30]especially biomedical research.
Reproducible research is key to new discoveries in pharmacology. A Phase I discovery will be followed by Phase II reproductions as a drug develops towards commercial production. In recent decades Phase II success has fallen from 28% to 18%. A 2011 study found that 65% of medical studies were inconsistent when re-tested, and only 6% were completely reproducible.[31]
Some efforts have been made to increase replicability beyond the social and biomedical sciences. Studies in the humanities tend to rely more on expertise and hermeneutics, which may make replicability more difficult. Nonetheless, some efforts have been made to call for more transparency and documentation in the humanities.[32]
Hideyo Noguchi became famous for correctly identifying the bacterial agent of syphilis, but also claimed that he could culture this agent in his laboratory. Nobody else has been able to produce this latter result.[33]
In March 1989, University of Utah chemists Stanley Pons and Martin Fleischmann reported the production of excess heat that could only be explained by a nuclear process ("cold fusion"). The report was astounding given the simplicity of the equipment: it was essentially an electrolysis cell containing heavy water and a palladium cathode which rapidly absorbed the deuterium produced during electrolysis. The news media reported on the experiments widely, and it was a front-page item on many newspapers around the world (see science by press conference). Over the next several months others tried to replicate the experiment, but were unsuccessful.[34]
Nikola Tesla claimed as early as 1899 to have used a high frequency current to light gas-filled lamps from over 25 miles (40 km) away without using wires. In 1904 he built Wardenclyffe Tower on Long Island to demonstrate means to send and receive power without connecting wires. The facility was never fully operational and was not completed due to economic problems, so no attempt to reproduce his first result was ever carried out.[35]
Other examples in which contrary evidence has refuted the original claim:
|
https://en.wikipedia.org/wiki/Reproducibility
|
In statistics, resampling is the creation of new samples based on one observed sample.
Resampling methods are:
Permutation tests rely on resampling the original data assuming the null hypothesis. From the resampled data it can be concluded how likely the original data would be to occur under the null hypothesis.
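A minimal sketch of an exact two-sample permutation test, using made-up data: under the null hypothesis the group labels are exchangeable, so every relabelling of the pooled observations is enumerated and the observed difference in means is compared against the resulting distribution.

```python
import itertools
import statistics

# Two small illustrative samples.
a = [12.0, 11.5, 13.1]
b = [10.2, 9.8, 10.9]
observed = statistics.mean(a) - statistics.mean(b)

pooled = a + b
n_a = len(a)
count = total = 0
# Enumerate every way of assigning n_a of the pooled values to group A
# and recompute the test statistic for each relabelling.
for idx in itertools.combinations(range(len(pooled)), n_a):
    grp_a = [pooled[i] for i in idx]
    grp_b = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diff = statistics.mean(grp_a) - statistics.mean(grp_b)
    total += 1
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / total   # two-sided exact p-value
print(f"observed diff = {observed:.3f}, p = {p_value:.3f}")
```

With six observations there are only C(6,3) = 20 relabellings, so the test is exact; for larger samples one would sample relabellings at random instead of enumerating them all.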
Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter like a mean, median, proportion, odds ratio, correlation coefficient or regression coefficient. It has been called the plug-in principle,[1] as it is the method of estimation of functionals of a population distribution by evaluating the same functionals at the empirical distribution based on a sample.
For example,[1] when estimating the population mean, this method uses the sample mean; to estimate the population median, it uses the sample median; to estimate the population regression line, it uses the sample regression line.
It may also be used for constructing hypothesis tests. It is often used as a robust alternative to inference based on parametric assumptions when those assumptions are in doubt, or where parametric inference is impossible or requires very complicated formulas for the calculation of standard errors. Bootstrapping techniques are also used in the updating-selection transitions of particle filters, genetic type algorithms and related resample/reconfiguration Monte Carlo methods used in computational physics.[2][3] In this context, the bootstrap is used to replace sequentially empirical weighted probability measures by empirical measures. The bootstrap allows one to replace the samples with low weights by copies of the samples with high weights.
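A minimal percentile-bootstrap sketch for the standard error and 95% confidence interval of a sample mean, with illustrative data:

```python
import random
import statistics

random.seed(0)
sample = [2.1, 2.9, 3.4, 1.8, 2.5, 3.1, 2.7, 2.2, 3.6, 2.4]

boot_means = []
for _ in range(5000):
    # Resample with replacement, same size as the original sample,
    # and record the statistic of interest for each resample.
    resample = random.choices(sample, k=len(sample))
    boot_means.append(statistics.mean(resample))

boot_means.sort()
se = statistics.stdev(boot_means)                       # bootstrap SE
ci = (boot_means[int(0.025 * 5000)],                    # percentile CI
      boot_means[int(0.975 * 5000)])
print(f"bootstrap SE ~ {se:.3f}, 95% CI ~ ({ci[0]:.2f}, {ci[1]:.2f})")
```

The same loop works unchanged for a median, correlation coefficient, or any other statistic: only the line computing `statistics.mean(resample)` needs to change, which is the practical appeal of the method.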
Cross-validation is a statistical method for validating a predictive model. Subsets of the data are held out for use as validating sets; a model is fit to the remaining data (a training set) and used to predict for the validation set. Averaging the quality of the predictions across the validation sets yields an overall measure of prediction accuracy. Cross-validation is employed repeatedly in building decision trees.
One form of cross-validation leaves out a single observation at a time; this is similar to the jackknife. Another, K-fold cross-validation, splits the data into K subsets; each is held out in turn as the validation set.
This avoids "self-influence". For comparison, in regression analysis methods such as linear regression, each y value draws the regression line toward itself, making the prediction of that value appear more accurate than it really is. Cross-validation applied to linear regression predicts the y value for each observation without using that observation.
This is often used for deciding how many predictor variables to use in regression. Without cross-validation, adding predictors always reduces the residual sum of squares (or possibly leaves it unchanged). In contrast, the cross-validated mean-square error will tend to decrease if valuable predictors are added, but increase if worthless predictors are added.[4]
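The idea can be sketched with leave-one-out cross-validation of a simple least-squares line, on illustrative data: each observation is predicted from a line fitted without it, so the resulting mean squared error is free of self-influence.

```python
import statistics

# Illustrative data lying roughly on the line y = x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.1, 1.9, 3.2, 3.8, 5.1, 6.2]

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

errors = []
for i in range(len(xs)):
    # Hold out observation i, fit on the rest, predict the held-out y.
    train_x = xs[:i] + xs[i + 1:]
    train_y = ys[:i] + ys[i + 1:]
    slope, intercept = fit_line(train_x, train_y)
    pred = slope * xs[i] + intercept
    errors.append((ys[i] - pred) ** 2)

print("LOO cross-validated MSE:", statistics.mean(errors))
```

To compare candidate predictor sets, one would compute this cross-validated MSE for each candidate model and prefer the model that minimizes it, rather than the one with the smallest in-sample residual sum of squares.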
Subsampling is an alternative method for approximating the sampling distribution of an estimator. The two key differences from the bootstrap are: (i) the resample size is smaller than the sample size, and (ii) resampling is done without replacement.
The advantage of subsampling is that it is valid under much weaker conditions compared to the bootstrap. In particular, a set of sufficient conditions is that the rate of convergence of the estimator is known and that the limiting distribution is continuous.
In addition, the resample (or subsample) size must tend to infinity together with the sample size but at a smaller rate, so that their ratio converges to zero. While subsampling was originally proposed for the case of independent and identically distributed (iid) data only, the methodology has been extended to cover time series data as well; in this case, one resamples blocks of subsequent data rather than individual data points. There are many cases of applied interest where subsampling leads to valid inference whereas bootstrapping does not; for example, cases where the rate of convergence of the estimator is not the square root of the sample size or where the limiting distribution is non-normal. When both subsampling and the bootstrap are consistent, the bootstrap is typically more accurate. RANSAC is a popular algorithm using subsampling.
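A small sketch of the rescaling that subsampling relies on, using synthetic data: subsample statistics computed on b of n observations, drawn without replacement with b much smaller than n, are rescaled by sqrt(b) to approximate the limiting distribution of the full-sample estimator.

```python
import math
import random
import statistics

random.seed(1)
# Synthetic data: 1000 draws from a normal with sd = 2.
data = [random.gauss(5.0, 2.0) for _ in range(1000)]
n, b = len(data), 50          # b/n small, as the theory requires
full_mean = statistics.mean(data)

# sqrt(b) * (subsample mean - full-sample mean) approximates the
# limiting distribution of sqrt(n) * (sample mean - true mean).
reps = [
    math.sqrt(b) * (statistics.mean(random.sample(data, b)) - full_mean)
    for _ in range(2000)
]

print("approx. sd of the limiting distribution:", statistics.stdev(reps))
```

The estimated spread should come out near the data's true standard deviation of 2, since for the mean the limiting distribution is normal with that scale; the same recipe applies to estimators whose limiting distribution is unknown or non-normal, which is where subsampling earns its keep.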
Jackknifing (jackknife cross-validation) is used in statistical inference to estimate the bias and standard error (variance) of a statistic, when a random sample of observations is used to calculate it. Historically, this method preceded the invention of the bootstrap, with Quenouille inventing it in 1949 and Tukey extending it in 1958.[5][6] This method was foreshadowed by Mahalanobis, who in 1946 suggested repeated estimates of the statistic of interest with half the sample chosen at random.[7] He coined the name 'interpenetrating samples' for this method.
Quenouille invented this method with the intention of reducing the bias of the sample estimate. Tukey extended this method by assuming that if the replicates could be considered identically and independently distributed, then an estimate of the variance of the sample parameter could be made and that it would be approximately distributed as a t variate with n−1 degrees of freedom (n being the sample size).
The basic idea behind the jackknife variance estimator lies in systematically recomputing the statistic estimate, leaving out one or more observations at a time from the sample set. From this new set of replicates of the statistic, an estimate for the bias and an estimate for the variance of the statistic can be calculated. The jackknife is equivalent to the random (subsampling) leave-one-out cross-validation; it differs only in its goal.[8]
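A delete-1 jackknife sketch for the sample mean, on illustrative data. For the mean specifically, the jackknife bias estimate is exactly zero and the jackknife variance reduces to the familiar s²/n, which makes it a convenient check.

```python
import statistics

# Illustrative sample.
data = [4.2, 5.1, 3.9, 6.0, 5.5, 4.8, 5.2]
n = len(data)
theta_hat = statistics.mean(data)   # statistic on the full sample

# Recompute the statistic with one observation left out at a time.
loo = [statistics.mean(data[:i] + data[i + 1:]) for i in range(n)]
loo_mean = statistics.mean(loo)

# Standard jackknife bias and variance estimates.
bias = (n - 1) * (loo_mean - theta_hat)
variance = (n - 1) / n * sum((t - loo_mean) ** 2 for t in loo)

print(f"jackknife bias ~ {bias:.6f}, variance ~ {variance:.6f}")
```

Replacing `statistics.mean` with any other smooth statistic (a ratio, a regression coefficient) leaves the recipe unchanged; only for non-smooth statistics such as the median does the delete-1 jackknife break down, as discussed below.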
For many statistical parameters the jackknife estimate of variance tends asymptotically to the true value almost surely. In technical terms one says that the jackknife estimate is consistent. The jackknife is consistent for the sample means, sample variances, central and non-central t-statistics (with possibly non-normal populations), sample coefficient of variation, maximum likelihood estimators, least squares estimators, correlation coefficients and regression coefficients.
It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom.
Rather than applying the jackknife to the variance directly, it may be applied to the log of the variance. This transformation may result in better estimates, particularly when the distribution of the variance itself may be non-normal.
The jackknife, like the original bootstrap, is dependent on the independence of the data. Extensions of the jackknife to allow for dependence in the data have been proposed. One such extension is the delete-a-group method used in association with Poisson sampling.
Both methods, the bootstrap and the jackknife, estimate the variability of a statistic from the variability of that statistic between subsamples, rather than from parametric assumptions. For the more general jackknife, the delete-m observations jackknife, the bootstrap can be seen as a random approximation of it. Both yield similar numerical results, which is why each can be seen as an approximation to the other. Although there are huge theoretical differences in their mathematical insights, the main practical difference for statistics users is that the bootstrap gives different results when repeated on the same data, whereas the jackknife gives exactly the same result each time. Because of this, the jackknife is popular when the estimates need to be verified several times before publishing (e.g., official statistics agencies). On the other hand, when this verification feature is not crucial and it is of interest not to have a number but just an idea of its distribution, the bootstrap is preferred (e.g., studies in physics, economics, biological sciences).
Whether to use the bootstrap or the jackknife may depend more on operational aspects than on statistical concerns of a survey. The jackknife, originally used for bias reduction, is more of a specialized method and only estimates the variance of the point estimator. This can be enough for basic statistical inference (e.g., hypothesis testing, confidence intervals). The bootstrap, on the other hand, first estimates the whole distribution (of the point estimator) and then computes the variance from that. While powerful and easy, this can become highly computationally intensive.
"The bootstrap can be applied to both variance and distribution estimation problems. However, the bootstrap variance estimator is not as good as the jackknife or the balanced repeated replication (BRR) variance estimator in terms of the empirical results. Furthermore, the bootstrap variance estimator usually requires more computations than the jackknife or the BRR. Thus, the bootstrap is mainly recommended for distribution estimation."[attribution needed][9]
There is a special consideration with the jackknife, particularly with the delete-1 observation jackknife. It should only be used with smooth, differentiable statistics (e.g., totals, means, proportions, ratios, odds ratios, regression coefficients, etc.; not with medians or quantiles). This could become a practical disadvantage. This disadvantage is usually the argument favoring bootstrapping over jackknifing. More general jackknives than the delete-1, such as the delete-m jackknife or the delete-all-but-2 Hodges–Lehmann estimator, overcome this problem for the medians and quantiles by relaxing the smoothness requirements for consistent variance estimation.
Usually the jackknife is easier to apply to complex sampling schemes than the bootstrap. Complex sampling schemes may involve stratification, multiple stages (clustering), varying sampling weights (non-response adjustments, calibration, post-stratification) and unequal-probability sampling designs. Theoretical aspects of both the bootstrap and the jackknife can be found in Shao and Tu (1995),[10] whereas a basic introduction is given in Wolter (2007).[11] The bootstrap estimate of model prediction bias is more precise than jackknife estimates with linear models such as linear discriminant function or multiple regression.[12]
|
https://en.wikipedia.org/wiki/Resampling_(statistics)
|
In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.[1][2][3] Unlike a statistical ensemble in statistical mechanics, which is usually infinite, a machine learning ensemble consists of only a concrete finite set of alternative models, but typically allows for much more flexible structure to exist among those alternatives.
Supervised learning algorithms search through a hypothesis space to find a suitable hypothesis that will make good predictions with a particular problem.[4] Even if this space contains hypotheses that are very well-suited for a particular problem, it may be very difficult to find a good one. Ensembles combine multiple hypotheses to form one which should be theoretically better.
Ensemble learning trains two or more machine learning algorithms on a specific classification or regression task. The algorithms within the ensemble model are generally referred to as "base models", "base learners", or "weak learners" in the literature. These base models can be constructed using a single modelling algorithm, or several different algorithms. The idea is to train a diverse set of weak models on the same modelling task, such that the outputs of each weak learner have poor predictive ability (i.e., high bias), and among all weak learners, the outcome and error values exhibit high variance. Fundamentally, an ensemble learning model trains at least two high-bias (weak) and high-variance (diverse) models to be combined into a better-performing model. The set of weak models, which would not produce satisfactory predictive results individually, are combined or averaged to produce a single, high-performing, accurate, and low-variance model to fit the task as required.
Ensemble learning typically refers to bagging (bootstrap aggregating), boosting or stacking/blending techniques to induce high variance among the base models. Bagging creates diversity by generating random samples from the training observations and fitting the same model to each different sample, also known as homogeneous parallel ensembles. Boosting follows an iterative process by sequentially training each base model on the up-weighted errors of the previous base model, producing an additive model to reduce the final model errors, also known as sequential ensemble learning. Stacking or blending consists of different base models, each trained independently (i.e. diverse/high variance) to be combined into the ensemble model, producing a heterogeneous parallel ensemble. Common applications of ensemble learning include random forests (an extension of bagging), boosted tree models, and gradient-boosted tree models. Models in applications of stacking are generally more task-specific, such as combining clustering techniques with other parametric and/or non-parametric techniques.[5]
Evaluating the prediction of an ensemble typically requires more computation than evaluating the prediction of a single model. In one sense, ensemble learning may be thought of as a way to compensate for poor learning algorithms by performing a lot of extra computation. On the other hand, the alternative is to do a lot more learning with one non-ensemble model. An ensemble may be more efficient at improving overall accuracy for the same increase in compute, storage, or communication resources by using that increase on two or more methods, than would have been improved by increasing resource use for a single method. Fast algorithms such as decision trees are commonly used in ensemble methods (e.g., random forests), although slower algorithms can benefit from ensemble techniques as well.
By analogy, ensemble techniques have been used also in unsupervised learning scenarios, for example in consensus clustering or in anomaly detection.
Empirically, ensembles tend to yield better results when there is a significant diversity among the models.[6][7] Many ensemble methods, therefore, seek to promote diversity among the models they combine.[8][9] Although perhaps non-intuitive, more random algorithms (like random decision trees) can be used to produce a stronger ensemble than very deliberate algorithms (like entropy-reducing decision trees).[10] Using a variety of strong learning algorithms, however, has been shown to be more effective than using techniques that attempt to dumb down the models in order to promote diversity.[11] It is possible to increase diversity in the training stage of the model using correlation for regression tasks[12] or using information measures such as cross entropy for classification tasks.[13]
Theoretically, one can justify the diversity concept because the lower bound of the error rate of an ensemble system can be decomposed into accuracy, diversity, and a remaining term.[14]
Ensemble learning, including both regression and classification tasks, can be explained using a geometric framework.[15]Within this framework, the output of each individual classifier or regressor for the entire dataset can be viewed as a point in a multi-dimensional space. Additionally, the target result is also represented as a point in this space, referred to as the "ideal point."
The Euclidean distance is used as the metric to measure both the performance of a single classifier or regressor (the distance between its point and the ideal point) and the dissimilarity between two classifiers or regressors (the distance between their respective points). This perspective transforms ensemble learning into a deterministic problem.
For example, within this geometric framework, it can be proved that averaging the outputs (scores) of all base classifiers or regressors can lead to equal or better results than the average of all the individual models. It can also be proved that if the optimal weighting scheme is used, then a weighted averaging approach can outperform any of the individual classifiers or regressors that make up the ensemble, or at least match the best performer.
While the number of component classifiers of an ensemble has a great impact on the accuracy of prediction, there is a limited number of studies addressing this problem. Determining the ensemble size a priori, as well as the volume and velocity of big data streams, makes this even more crucial for online ensemble classifiers. Mostly, statistical tests have been used for determining the proper number of components. More recently, a theoretical framework suggested that there is an ideal number of component classifiers for an ensemble such that having more or less than this number of classifiers would deteriorate the accuracy. It is called "the law of diminishing returns in ensemble construction." This theoretical framework shows that using the same number of independent component classifiers as class labels gives the highest accuracy.[16][17]
The Bayes optimal classifier is a classification technique. It is an ensemble of all the hypotheses in the hypothesis space. On average, no other ensemble can outperform it.[18] The Naive Bayes classifier is a version of this that assumes that the data is conditionally independent on the class and makes the computation more feasible. Each hypothesis is given a vote proportional to the likelihood that the training dataset would be sampled from a system if that hypothesis were true. To facilitate training data of finite size, the vote of each hypothesis is also multiplied by the prior probability of that hypothesis. The Bayes optimal classifier can be expressed with the following equation:

y = \operatorname{argmax}_{c_j \in C} \sum_{h_i \in H} P(c_j \mid h_i)\, P(T \mid h_i)\, P(h_i)

where y is the predicted class, C is the set of all possible classes, H is the hypothesis space, P refers to a probability, and T is the training data. As an ensemble, the Bayes optimal classifier represents a hypothesis that is not necessarily in H. The hypothesis represented by the Bayes optimal classifier, however, is the optimal hypothesis in ensemble space (the space of all possible ensembles consisting only of hypotheses in H).
This formula can be restated using Bayes' theorem, which says that the posterior is proportional to the likelihood times the prior:

P(h_i \mid T) \propto P(T \mid h_i)\, P(h_i)

hence,

y = \operatorname{argmax}_{c_j \in C} \sum_{h_i \in H} P(c_j \mid h_i)\, P(h_i \mid T)
Bootstrap aggregation (bagging) involves training an ensemble on bootstrapped data sets. A bootstrapped set is created by selecting from the original training data set with replacement. Thus, a bootstrap set may contain a given example zero, one, or multiple times. Ensemble members can also have limits on the features (e.g., nodes of a decision tree), to encourage exploring of diverse features.[19] The variance of local information in the bootstrap sets and feature considerations promote diversity in the ensemble, and can strengthen the ensemble.[20] To reduce overfitting, a member can be validated using the out-of-bag set (the examples that are not in its bootstrap set).[21]
Inference is done by voting on the predictions of ensemble members, called aggregation. For example, with an ensemble of four decision trees, a query example is classified by each tree; if three of the four predict the positive class, the ensemble's overall classification is positive. Random forests are a common application of bagging.
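A minimal bagging sketch in the same spirit, using hypothetical one-dimensional data and simple threshold "stumps" in place of decision trees: each stump is fitted to a bootstrap resample, and predictions are aggregated by majority vote.

```python
import random
import statistics

random.seed(2)
# Two well-separated classes on a single feature (illustrative data).
X = [1.0, 1.2, 0.8, 1.1, 3.0, 3.2, 2.9, 3.1]
y = [0, 0, 0, 0, 1, 1, 1, 1]

def fit_stump(xs, ys):
    """A trivial base learner: threshold at the midpoint of class means."""
    m0 = statistics.mean(x for x, c in zip(xs, ys) if c == 0)
    m1 = statistics.mean(x for x, c in zip(xs, ys) if c == 1)
    thresh = (m0 + m1) / 2
    return lambda x: 1 if x > thresh else 0

stumps = []
for _ in range(25):
    # Draw a bootstrap set: sample indices with replacement.
    idx = [random.randrange(len(X)) for _ in range(len(X))]
    bx, by = [X[i] for i in idx], [y[i] for i in idx]
    if len(set(by)) < 2:        # skip degenerate one-class resamples
        continue
    stumps.append(fit_stump(bx, by))

def predict(x):
    """Aggregation step: majority vote over all fitted stumps."""
    votes = [s(x) for s in stumps]
    return 1 if sum(votes) > len(votes) / 2 else 0

print(predict(1.05), predict(3.05))   # expect class 0, then class 1
```

Each bootstrap set omits some examples and duplicates others, so the stumps differ slightly; the vote smooths over those differences, which is the variance reduction bagging is meant to provide.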
Boosting involves training successive models by emphasizing training data mis-classified by previously learned models. Initially, all data (D1) has equal weight and is used to learn a base model M1. The examples mis-classified by M1 are assigned a weight greater than correctly classified examples. This boosted data (D2) is used to train a second base model M2, and so on. Inference is done by voting.
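The weighting scheme above can be sketched in the style of AdaBoost, on made-up one-dimensional data with threshold stumps as base learners: after each round, misclassified examples are up-weighted before the next stump is fitted, and inference is a weighted vote.

```python
import math

# Illustrative data; labels are +1 / -1 as is conventional for AdaBoost.
X = [0.5, 1.0, 1.5, 2.5, 3.0, 3.5]
y = [-1, -1, -1, 1, 1, 1]
n = len(X)
w = [1.0 / n] * n                 # D1: all examples weighted equally

def best_stump(weights):
    """Pick the threshold and orientation minimizing weighted error."""
    best = None
    for t in [0.75, 1.25, 2.0, 2.75, 3.25]:   # candidate cut points
        for sign in (1, -1):
            err = sum(wi for wi, xi, yi in zip(weights, X, y)
                      if sign * (1 if xi > t else -1) != yi)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

ensemble = []
for _ in range(3):
    err, t, sign = best_stump(w)
    err = max(err, 1e-10)                      # avoid log(0)
    alpha = 0.5 * math.log((1 - err) / err)    # stump's vote weight
    ensemble.append((alpha, t, sign))
    # Up-weight misclassified examples, then renormalize (D2, D3, ...).
    w = [wi * math.exp(-alpha * yi * sign * (1 if xi > t else -1))
         for wi, xi, yi in zip(w, X, y)]
    total = sum(w)
    w = [wi / total for wi in w]

def predict(x):
    """Weighted vote of all stumps."""
    score = sum(a * s * (1 if x > t else -1) for a, t, s in ensemble)
    return 1 if score > 0 else -1

print(predict(0.7), predict(3.2))
```

This toy data set is separable, so the first stump already classifies everything correctly; on harder data the reweighting step forces later stumps to concentrate on the examples earlier ones got wrong, which is the mechanism the paragraph describes.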
In some cases, boosting has yielded better accuracy than bagging, but tends to over-fit more. The most common implementation of boosting is AdaBoost, but some newer algorithms are reported to achieve better results.[citation needed]
Bayesian model averaging (BMA) makes predictions by averaging the predictions of models weighted by their posterior probabilities given the data.[22] BMA is known to generally give better answers than a single model, obtained, e.g., via stepwise regression, especially where very different models have nearly identical performance in the training set but may otherwise perform quite differently.
The question with any use of Bayes' theorem is the prior, i.e., the probability (perhaps subjective) that each model is the best to use for a given purpose. Conceptually, BMA can be used with any prior. The R packages ensembleBMA[23] and BMA[24] use the prior implied by the Bayesian information criterion (BIC), following Raftery (1995).[25] The R package BAS supports the use of the priors implied by the Akaike information criterion (AIC) and other criteria over the alternative models as well as priors over the coefficients.[26]
The difference between BIC and AIC is the strength of preference for parsimony. BIC's penalty for model complexity is \ln(n)k, while AIC's is 2k. Large-sample asymptotic theory establishes that if there is a best model, then with increasing sample sizes, BIC is strongly consistent, i.e., will almost certainly find it, while AIC may not, because AIC may continue to place excessive posterior probability on models that are more complicated than they need to be. On the other hand, AIC and AICc are asymptotically "efficient" (i.e., minimum mean square prediction error), while BIC is not.[27]
Haussler et al. (1994) showed that when BMA is used for classification, its expected error is at most twice the expected error of the Bayes optimal classifier.[28] Burnham and Anderson (1998, 2002) contributed greatly to introducing a wider audience to the basic ideas of Bayesian model averaging and popularizing the methodology.[29] The availability of software, including other free open-source packages for R beyond those mentioned above, helped make the methods accessible to a wider audience.[30]
Bayesian model combination (BMC) is an algorithmic correction to Bayesian model averaging (BMA). Instead of sampling each model in the ensemble individually, it samples from the space of possible ensembles (with model weights drawn randomly from a Dirichlet distribution having uniform parameters). This modification overcomes the tendency of BMA to converge toward giving all the weight to a single model. Although BMC is somewhat more computationally expensive than BMA, it tends to yield dramatically better results. BMC has been shown to be better on average (with statistical significance) than BMA and bagging.[31]
Use of Bayes' law to compute model weights requires computing the probability of the data given each model. Typically, none of the models in the ensemble are exactly the distribution from which the training data were generated, so all of them correctly receive a value close to zero for this term. This would work well if the ensemble were big enough to sample the entire model-space, but this is rarely possible. Consequently, each pattern in the training data will cause the ensemble weight to shift toward the model in the ensemble that is closest to the distribution of the training data. It essentially reduces to an unnecessarily complex method for doing model selection.
The possible weightings for an ensemble can be visualized as lying on a simplex. At each vertex of the simplex, all of the weight is given to a single model in the ensemble. BMA converges toward the vertex that is closest to the distribution of the training data. By contrast, BMC converges toward the point where this distribution projects onto the simplex. In other words, instead of selecting the one model that is closest to the generating distribution, it seeks the combination of models that is closest to the generating distribution.
The results from BMA can often be approximated by using cross-validation to select the best model from a bucket of models. Likewise, the results from BMC may be approximated by using cross-validation to select the best ensemble combination from a random sampling of possible weightings.
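The BMC sampling scheme can be sketched for binary classifiers as follows. The function name, the array layout, and the use of a simple normalized likelihood as the posterior (i.e., a uniform prior over the sampled weightings) are our own assumptions for illustration, not details from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

def bmc_weights(pred_probs, y, n_samples=2000):
    """Bayesian model combination sketch: sample candidate weightings of
    K models from a uniform Dirichlet, score each combined predictor by
    its log-likelihood on the training labels, and return the posterior-
    mean weighting. pred_probs has shape (K, n): each model's predicted
    probability that y[i] == 1."""
    K, n = pred_probs.shape
    candidates = rng.dirichlet(np.ones(K), size=n_samples)  # points on the simplex
    combined = candidates @ pred_probs                      # (n_samples, n)
    loglik = (np.log(combined) * y + np.log(1 - combined) * (1 - y)).sum(axis=1)
    post = np.exp(loglik - loglik.max())                    # unnormalized posterior
    post /= post.sum()
    return post @ candidates                                # posterior-mean weighting
```

Because the posterior mass concentrates on weightings whose *combination* fits the data, the result can lie in the interior of the simplex rather than at a vertex, as described above.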
A "bucket of models" is an ensemble technique in which a model selection algorithm is used to choose the best model for each problem. When tested with only one problem, a bucket of models can produce no better results than the best model in the set, but when evaluated across many problems, it will typically produce much better results, on average, than any model in the set.
The most common approach used for model selection is cross-validation selection (sometimes called a "bake-off contest"). It is described with the following pseudo-code:
Cross-Validation Selection can be summed up as: "try them all with the training set, and pick the one that works best".[32]
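This "bake-off" can be sketched in a few lines. The fit/predict model interface and the squared-error score are our assumptions, not from any particular library:

```python
def k_fold_indices(n, k):
    # yield (train, test) index lists for k roughly equal contiguous folds
    fold = n // k
    for i in range(k):
        lo, hi = i * fold, (i + 1) * fold if i < k - 1 else n
        test = list(range(lo, hi))
        test_set = set(test)
        train = [j for j in range(n) if j not in test_set]
        yield train, test

def cv_select(model_factories, X, y, k=5):
    """Train each candidate on every training fold, score it on the
    held-out fold, and return the factory with the lowest mean error."""
    def cv_error(make_model):
        errors = []
        for train, test in k_fold_indices(len(X), k):
            model = make_model()
            model.fit([X[i] for i in train], [y[i] for i in train])
            preds = model.predict([X[i] for i in test])
            errors.append(sum((p - y[i]) ** 2 for p, i in zip(preds, test)) / len(test))
        return sum(errors) / len(errors)
    return min(model_factories, key=cv_error)
```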
Gating is a generalization of cross-validation selection. It involves training another learning model to decide which of the models in the bucket is best suited to solve the problem. Often, a perceptron is used for the gating model. It can be used to pick the "best" model, or it can be used to give a linear weight to the predictions from each model in the bucket.
When a bucket of models is used with a large set of problems, it may be desirable to avoid training some of the models that take a long time to train. Landmark learning is a meta-learning approach that seeks to solve this problem. It involves training only the fast (but imprecise) algorithms in the bucket, and then using the performance of these algorithms to help determine which slow (but accurate) algorithm is most likely to do best.[33]
The most common approach for training classifiers is the cross-entropy cost function. However, one would like to train an ensemble of models that are diverse, so that combining them yields the best results.[34][35] Assume we use a simple ensemble that averages $K$ classifiers. Then the amended cross-entropy cost is
where $e^k$ is the cost function of the $k$-th classifier, $q^k$ is the output probability of the $k$-th classifier, $p$ is the true probability that we need to estimate, and $\lambda$ is a parameter between 0 and 1 that defines the desired degree of diversity. When $\lambda = 0$ we want each classifier to do its best regardless of the ensemble, and when $\lambda = 1$ we want the classifiers to be as diverse as possible.
Stacking (sometimes called stacked generalization) involves training a model to combine the predictions of several other learning algorithms. First, all of the other algorithms are trained using the available data; then a combiner algorithm (final estimator) is trained to make a final prediction using all the predictions of the other algorithms (base estimators) as additional inputs, or using cross-validated predictions from the base estimators, which can prevent overfitting.[36] If an arbitrary combiner algorithm is used, then stacking can theoretically represent any of the ensemble techniques described in this article, although, in practice, a logistic regression model is often used as the combiner.
Stacking typically yields performance better than any single one of the trained models.[37]It has been successfully used on both supervised learning tasks (regression,[38]classification and distance learning[39]) and unsupervised learning (density estimation).[40]It has also been used to estimate bagging's error rate.[3][41]It has been reported to out-perform Bayesian model-averaging.[42]The two top-performers in the Netflix competition utilized blending, which may be considered a form of stacking.[43]
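A minimal stacking sketch for regression follows. The class names, the linear least-squares combiner, and the simple holdout split (standing in for the cross-validated predictions mentioned above) are our own simplifications:

```python
import numpy as np

class StackedRegressor:
    """Stacking sketch: base models are fit on the first half of the
    data, and a linear combiner is fit on their predictions for the
    held-out second half, so the combiner does not see the base
    models' training error."""
    def __init__(self, base_models):
        self.base_models = base_models

    def fit(self, X, y):
        half = len(X) // 2
        for m in self.base_models:
            m.fit(X[:half], y[:half])
        # combiner inputs: each base model's prediction on the held-out half
        Z = np.column_stack([m.predict(X[half:]) for m in self.base_models])
        self.w, *_ = np.linalg.lstsq(Z, y[half:], rcond=None)
        return self

    def predict(self, X):
        Z = np.column_stack([m.predict(X) for m in self.base_models])
        return Z @ self.w
```

In practice the combiner is usually fit on out-of-fold predictions from k-fold cross-validation rather than a single holdout split, and a regularized model (e.g. logistic regression for classification) is a common choice of combiner.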
Voting is another form of ensembling. See e.g. the weighted majority algorithm (machine learning).
In recent years, growing computational power, which allows training large ensembles in a reasonable time frame, has led to an increasing number of ensemble learning applications.[49] Some of the applications of ensemble classifiers include:
Land cover mapping is one of the major applications of Earth observation satellite sensors, using remote sensing and geospatial data to identify the materials and objects located on the surface of target areas. Generally, the classes of target materials include roads, buildings, rivers, lakes, and vegetation.[50] Several ensemble learning approaches based on artificial neural networks,[51] kernel principal component analysis (KPCA),[52] decision trees with boosting,[53] random forests[50][54] and automatic design of multiple classifier systems[55] have been proposed to efficiently identify land cover objects.
Change detection is an image analysis problem consisting of identifying places where the land cover has changed over time. Change detection is widely used in fields such as urban growth, forest and vegetation dynamics, land use and disaster monitoring.[56] The earliest applications of ensemble classifiers in change detection were designed with majority voting,[57] Bayesian model averaging,[58] and the maximum posterior probability.[59] Given the growth of satellite data over time, the past decade has seen greater use of time-series methods for continuous change detection from image stacks.[60] One example is BEAST, a Bayesian ensemble changepoint detection method, with software available as the package Rbeast in R, Python, and Matlab.[61]
Distributed denial of service is one of the most threatening cyber-attacks that can happen to an internet service provider.[49] By combining the output of single classifiers, ensemble classifiers reduce the total error of detecting and discriminating such attacks from legitimate flash crowds.[62]
Classification of malware codes such as computer viruses, computer worms, trojans, ransomware and spyware with the use of machine learning techniques is inspired by the document categorization problem.[63] Ensemble learning systems have shown strong efficacy in this area.[64][65]
An intrusion detection system monitors a computer network or computer systems to identify intruder code, in a manner akin to an anomaly detection process. Ensemble learning successfully aids such monitoring systems in reducing their total error.[66][67]
Face recognition, which has recently become one of the most popular research areas of pattern recognition, copes with identification or verification of a person via their digital images.[68]
Hierarchical ensembles based on the Gabor Fisher classifier and independent component analysis preprocessing techniques are some of the earliest ensembles employed in this field.[69][70][71]
While speech recognition is mainly based on deep learning, because most of the industry players in this field such as Google, Microsoft and IBM reveal that the core technology of their speech recognition is based on this approach, speech-based emotion recognition can also achieve satisfactory performance with ensemble learning.[72][73]
It is also being used successfully in facial emotion recognition.[74][75][76]
Fraud detection deals with the identification of bank fraud, such as money laundering, credit card fraud and telecommunication fraud, which have vast domains of research and applications of machine learning. Because ensemble learning improves the robustness of normal-behavior modelling, it has been proposed as an efficient technique to detect such fraudulent cases and activities in banking and credit card systems.[77][78]
The accuracy of business failure prediction is a crucial issue in financial decision-making. Therefore, different ensemble classifiers have been proposed to predict financial crises and financial distress.[79] Also, in the trade-based manipulation problem, where traders attempt to manipulate stock prices through their buying and selling activities, ensemble classifiers are needed to analyze changes in stock market data and detect suspicious symptoms of stock price manipulation.[79]
Ensemble classifiers have been successfully applied in neuroscience, proteomics and medical diagnosis, for example in the detection of neuro-cognitive disorders (e.g. Alzheimer's disease or myotonic dystrophy) based on MRI datasets,[80][81][82] and in cervical cytology classification.[83][84]
Ensembles have also been successfully applied in medical segmentation tasks, for example brain tumor[85][86] and hyperintensity segmentation.[87]
|
https://en.wikipedia.org/wiki/Ensemble_learning
|
Gradient boosting is a machine learning technique based on boosting in a functional space, where the target is pseudo-residuals instead of residuals as in traditional boosting. It gives a prediction model in the form of an ensemble of weak prediction models, i.e., models that make very few assumptions about the data, which are typically simple decision trees.[1][2] When a decision tree is the weak learner, the resulting algorithm is called gradient-boosted trees; it usually outperforms random forests.[1] As with other boosting methods, a gradient-boosted trees model is built in stages, but it generalizes the other methods by allowing optimization of an arbitrary differentiable loss function.
The idea of gradient boosting originated in the observation by Leo Breiman that boosting can be interpreted as an optimization algorithm on a suitable cost function.[3] Explicit regression gradient boosting algorithms were subsequently developed by Jerome H. Friedman[4][2] (in 1999 and later in 2001), simultaneously with the more general functional gradient boosting perspective of Llew Mason, Jonathan Baxter, Peter Bartlett and Marcus Frean.[5][6] The latter two papers introduced the view of boosting algorithms as iterative functional gradient descent algorithms: that is, algorithms that optimize a cost function over function space by iteratively choosing a function (weak hypothesis) that points in the negative gradient direction. This functional gradient view of boosting has led to the development of boosting algorithms in many areas of machine learning and statistics beyond regression and classification.
(This section follows the exposition by Cheng Li.[7])
Like other boosting methods, gradient boosting combines weak "learners" into a single strong learner iteratively. It is easiest to explain in the least-squares regression setting, where the goal is to teach a model $F$ to predict values of the form $\hat{y} = F(x)$ by minimizing the mean squared error $\tfrac{1}{n}\sum_i (\hat{y}_i - y_i)^2$, where $i$ indexes over some training set of size $n$ of actual values of the output variable $y$:
If the algorithm has $M$ stages, at each stage $m$ ($1 \le m \le M$), suppose some imperfect model $F_m$ (for low $m$, this model may simply predict $\hat{y}_i$ to be $\bar{y}$, the mean of $y$). In order to improve $F_m$, our algorithm should add some new estimator $h_m(x)$. Thus,
or, equivalently,
Therefore, gradient boosting will fit $h_m$ to the residual $y_i - F_m(x_i)$. As in other boosting variants, each $F_{m+1}$ attempts to correct the errors of its predecessor $F_m$. A generalization of this idea to loss functions other than squared error, and to classification and ranking problems, follows from the observation that residuals $h_m(x_i)$ for a given model are proportional to the negative gradients of the mean squared error (MSE) loss function (with respect to $F(x_i)$):
So, gradient boosting can be generalized to a gradient descent algorithm by plugging in a different loss and its gradient.
Many supervised learning problems involve an output variable $y$ and a vector of input variables $x$, related to each other by some probabilistic distribution. The goal is to find some function $\hat{F}(x)$ that best approximates the output variable from the values of the input variables. This is formalized by introducing some loss function $L(y, F(x))$ and minimizing it in expectation:
The gradient boosting method assumes a real-valued $y$. It seeks an approximation $\hat{F}(x)$ in the form of a weighted sum of $M$ functions $h_m(x)$ from some class $\mathcal{H}$, called base (or weak) learners:
where $\gamma_m$ is the weight at stage $m$. We are usually given a training set $\{(x_1, y_1), \dots, (x_n, y_n)\}$ of known values of $x$ and corresponding values of $y$. In accordance with the empirical risk minimization principle, the method tries to find an approximation $\hat{F}(x)$ that minimizes the average value of the loss function on the training set, i.e., minimizes the empirical risk. It does so by starting with a model consisting of a constant function $F_0(x)$, and incrementally expanding it in a greedy fashion:
for $m \ge 1$, where $h_m \in \mathcal{H}$ is a base learner function.
Unfortunately, choosing the best function $h_m$ at each step for an arbitrary loss function $L$ is a computationally infeasible optimization problem in general. Therefore, we restrict our approach to a simplified version of the problem. The idea is to apply a steepest descent step to this minimization problem (functional gradient descent). The basic idea is to find a local minimum of the loss function by iterating on $F_{m-1}(x)$. In fact, the direction of steepest descent of the loss function is the negative gradient.[8] Hence, we move a small amount $\gamma$ such that the linear approximation remains valid:
$$F_m(x) = F_{m-1}(x) - \gamma \sum_{i=1}^{n} \nabla_{F_{m-1}} L(y_i, F_{m-1}(x_i))$$
where $\gamma > 0$. For small $\gamma$, this implies that $L(y_i, F_m(x_i)) \le L(y_i, F_{m-1}(x_i))$.
To see why, consider the objective
$$O = \sum_{i=1}^{n} L(y_i, F_{m-1}(x_i) + h_m(x_i)).$$
Doing a Taylor expansion around the fixed point $F_{m-1}(x_i)$ up to first order,
$$O = \sum_{i=1}^{n} L(y_i, F_{m-1}(x_i) + h_m(x_i)) \approx \sum_{i=1}^{n} L(y_i, F_{m-1}(x_i)) + h_m(x_i)\, \nabla_{F_{m-1}} L(y_i, F_{m-1}(x_i)) + \ldots$$
Now differentiating with respect to $h_m(x_i)$, only the derivative of the second term remains: $\nabla_{F_{m-1}} L(y_i, F_{m-1}(x_i))$. This is the direction of steepest ascent, and hence we must move in the opposite (i.e., negative) direction to move in the direction of steepest descent.
Furthermore, we can optimize $\gamma$ by finding the value of $\gamma$ for which the loss function attains a minimum:
$$\gamma_m = \arg\min_{\gamma} \sum_{i=1}^{n} L(y_i, F_m(x_i)) = \arg\min_{\gamma} \sum_{i=1}^{n} L\left(y_i, F_{m-1}(x_i) - \gamma \nabla_{F_{m-1}} L(y_i, F_{m-1}(x_i))\right).$$
If we considered the continuous case, i.e., where $\mathcal{H}$ is the set of arbitrary differentiable functions on $\mathbb{R}$, we would update the model in accordance with the following equations
where $\gamma_m$ is the step length, defined as $$\gamma_m = \arg\min_{\gamma} \sum_{i=1}^{n} L\left(y_i, F_{m-1}(x_i) - \gamma \nabla_{F_{m-1}} L(y_i, F_{m-1}(x_i))\right).$$ In the discrete case, however, i.e., when the set $\mathcal{H}$ is finite, we choose the candidate function $h$ closest to the gradient of $L$, for which the coefficient $\gamma$ may then be calculated with the aid of a line search on the above equations. Note that this approach is a heuristic and therefore does not yield an exact solution to the given problem, but rather an approximation.
In pseudocode, the generic gradient boosting method is:[4][1]
Input: training set $\{(x_i, y_i)\}_{i=1}^{n}$, a differentiable loss function $L(y, F(x))$, number of iterations $M$.
Algorithm:
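For squared-error loss with depth-1 regression trees (stumps) as base learners, the generic method can be sketched in a self-contained way. The function names are ours; for MSE the pseudo-residual is simply the ordinary residual $y_i - F(x_i)$:

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) to residuals r: pick the
    threshold minimizing the squared error of a piecewise-constant fit."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def gradient_boost(x, y, M=50, nu=0.1):
    """Gradient boosting for squared-error loss: start from the mean,
    then repeatedly fit a stump to the pseudo-residuals (here the
    ordinary residuals y - F(x)) and add it with shrinkage nu."""
    F0 = y.mean()
    Fx = np.full_like(y, F0, dtype=float)
    stumps = []
    for _ in range(M):
        h = fit_stump(x, y - Fx)   # pseudo-residuals for MSE loss
        stumps.append(h)
        Fx = Fx + nu * h(x)
    def predict(z):
        z = np.asarray(z, dtype=float)
        out = np.full(len(z), F0, dtype=float)
        for h in stumps:
            out = out + nu * h(z)
        return out
    return predict
```

With shrinkage $\nu = 0.1$, each pass removes only a fraction of the remaining residual, so many stages are needed, which is exactly the regularization-by-shrinkage trade-off discussed below.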
Gradient boosting is typically used with decision trees (especially CARTs) of a fixed size as base learners. For this special case, Friedman proposes a modification to the gradient boosting method which improves the quality of fit of each base learner.
Generic gradient boosting at the $m$-th step would fit a decision tree $h_m(x)$ to pseudo-residuals. Let $J_m$ be the number of its leaves. The tree partitions the input space into $J_m$ disjoint regions $R_{1m}, \ldots, R_{J_m m}$ and predicts a constant value in each region. Using the indicator notation, the output of $h_m(x)$ for input $x$ can be written as the sum:
where $b_{jm}$ is the value predicted in the region $R_{jm}$.[9]
Then the coefficients $b_{jm}$ are multiplied by some value $\gamma_m$, chosen using line search so as to minimize the loss function, and the model is updated as follows:
Friedman proposes to modify this algorithm so that it chooses a separate optimal value $\gamma_{jm}$ for each of the tree's regions, instead of a single $\gamma_m$ for the whole tree. He calls the modified algorithm "TreeBoost". The coefficients $b_{jm}$ from the tree-fitting procedure can then simply be discarded and the model update rule becomes:
When the loss $L(\cdot,\cdot)$ is mean squared error (MSE), the coefficients $\gamma_{jm}$ coincide with the coefficients $b_{jm}$ of the tree-fitting procedure.
The number $J$ of terminal nodes in the trees is a parameter which controls the maximum allowed level of interaction between variables in the model. With $J = 2$ (decision stumps), no interaction between variables is allowed. With $J = 3$ the model may include effects of the interaction between up to two variables, and so on. $J$ can be adjusted for the data set at hand.
Hastie et al.[1] comment that typically $4 \le J \le 8$ works well for boosting and results are fairly insensitive to the choice of $J$ in this range; $J = 2$ is insufficient for many applications, and $J > 10$ is unlikely to be required.
Fitting the training set too closely can lead to degradation of the model's generalization ability, that is, its performance on unseen examples. Several so-called regularization techniques reduce this overfitting effect by constraining the fitting procedure.
One natural regularization parameter is the number of gradient boosting iterationsM(i.e. the number of base models). IncreasingMreduces the error on training set, but increases risk of overfitting. An optimal value ofMis often selected by monitoring prediction error on a separate validation data set.
Another regularization parameter for tree boosting is tree depth. The higher this value the more likely the model will overfit the training data.
An important part of gradient boosting is regularization by shrinkage which uses a modified update rule:
where the parameter $\nu$ is called the "learning rate".
Empirically, it has been found that using small learning rates (such as $\nu < 0.1$) yields dramatic improvements in models' generalization ability over gradient boosting without shrinking ($\nu = 1$).[1] However, this comes at the price of increased computational time both during training and querying: a lower learning rate requires more iterations.
Soon after the introduction of gradient boosting, Friedman proposed a minor modification to the algorithm, motivated by Breiman's bootstrap aggregation ("bagging") method.[2] Specifically, he proposed that at each iteration of the algorithm, a base learner should be fit on a subsample of the training set drawn at random without replacement.[10] Friedman observed a substantial improvement in gradient boosting's accuracy with this modification.
The subsample size is some constant fraction $f$ of the size of the training set. When $f = 1$, the algorithm is deterministic and identical to the one described above. Smaller values of $f$ introduce randomness into the algorithm and help prevent overfitting, acting as a kind of regularization. The algorithm also becomes faster, because regression trees have to be fit to smaller datasets at each iteration. Friedman[2] found that $0.5 \le f \le 0.8$ leads to good results for small and moderately sized training sets. Therefore, $f$ is typically set to 0.5, meaning that one half of the training set is used to build each base learner.
Also, as in bagging, subsampling allows one to define an out-of-bag error of the prediction performance improvement by evaluating predictions on those observations which were not used in the building of the next base learner. Out-of-bag estimates help avoid the need for an independent validation dataset, but often underestimate actual performance improvement and the optimal number of iterations.[11][12]
Gradient tree boosting implementations often also use regularization by limiting the minimum number of observations in trees' terminal nodes. It is used in the tree building process by ignoring any splits that lead to nodes containing fewer than this number of training set instances.
Imposing this limit helps to reduce variance in predictions at leaves.
Another useful regularization technique for gradient boosted models is to penalize their complexity.[13] For gradient boosted trees, model complexity can be defined as the (proportional) number of leaves in the trees. The joint optimization of loss and model complexity corresponds to a post-pruning algorithm that removes branches which fail to reduce the loss by a threshold.
Other kinds of regularization, such as an $\ell_2$ penalty on the leaf values, can also be used to avoid overfitting.
Gradient boosting can be used in the field of learning to rank. The commercial web search engines Yahoo[14] and Yandex[15] use variants of gradient boosting in their machine-learned ranking engines. Gradient boosting is also utilized in high energy physics for data analysis. At the Large Hadron Collider (LHC), variants of gradient boosting deep neural networks (DNN) were successful in reproducing the results of non-machine-learning methods of analysis on datasets used to discover the Higgs boson.[16] Gradient boosted decision trees have also been applied in earth and geological studies, for example for quality evaluation of sandstone reservoirs.[17]
The method goes by a variety of names. Friedman introduced his regression technique as a "Gradient Boosting Machine" (GBM).[4]Mason, Baxter et al. described the generalized abstract class of algorithms as "functional gradient boosting".[5][6]Friedman et al. describe an advancement of gradient boosted models as Multiple Additive Regression Trees (MART);[18]Elith et al. describe that approach as "Boosted Regression Trees" (BRT).[19]
A popular open-source implementation for R calls it a "Generalized Boosting Model",[11] though packages expanding this work use BRT.[20] Yet another name is TreeNet, after an early commercial implementation from Salford Systems' Dan Steinberg, one of the researchers who pioneered the use of tree-based methods.[21]
Gradient boosting can be used for feature importance ranking, which is usually based on aggregating the importance functions of the base learners.[22] For example, if a gradient boosted trees algorithm is developed using entropy-based decision trees, the ensemble algorithm ranks the importance of features based on entropy as well, with the caveat that it is averaged out over all base learners.[22][1]
While boosting can increase the accuracy of a base learner, such as a decision tree or linear regression, it sacrifices intelligibility and interpretability.[22][23] For example, following the path that a decision tree takes to make its decision is trivial and self-explanatory, but following the paths of hundreds or thousands of trees is much harder. To achieve both performance and interpretability, some model compression techniques allow transforming an XGBoost model into a single "born-again" decision tree that approximates the same decision function.[24] Furthermore, its implementation may be more difficult due to the higher computational demand.
|
https://en.wikipedia.org/wiki/Gradient_boosting
|
Nonparametric statistics is a type of statistical analysis that makes minimal assumptions about the underlying distribution of the data being studied. Often these models are infinite-dimensional, rather than finite-dimensional, as in parametric statistics.[1] Nonparametric statistics can be used for descriptive statistics or statistical inference. Nonparametric tests are often used when the assumptions of parametric tests are evidently violated.[2]
The term "nonparametric statistics" has been defined imprecisely in the following two ways, among others:
The first meaning ofnonparametricinvolves techniques that do not rely on data belonging to any particular parametric family of probability distributions.
These include, among others:
An example is order statistics, which are based on the ordinal ranking of observations.
The discussion following is taken from Kendall's Advanced Theory of Statistics.[3]
Statistical hypotheses concern the behavior of observable random variables.... For example, the hypothesis (a) that a normal distribution has a specified mean and variance is statistical; so is the hypothesis (b) that it has a given mean but unspecified variance; so is the hypothesis (c) that a distribution is of normal form with both mean and variance unspecified; finally, so is the hypothesis (d) that two unspecified continuous distributions are identical.
It will have been noticed that in the examples (a) and (b) the distribution underlying the observations was taken to be of a certain form (the normal) and the hypothesis was concerned entirely with the value of one or both of its parameters. Such a hypothesis, for obvious reasons, is calledparametric.
Hypothesis (c) was of a different nature, as no parameter values are specified in the statement of the hypothesis; we might reasonably call such a hypothesisnon-parametric. Hypothesis (d) is also non-parametric but, in addition, it does not even specify the underlying form of the distribution and may now be reasonably termeddistribution-free. Notwithstanding these distinctions, the statistical literature now commonly applies the label "non-parametric" to test procedures that we have just termed "distribution-free", thereby losing a useful classification.
The second meaning of non-parametric involves techniques that do not assume that the structure of a model is fixed. Typically, the model grows in size to accommodate the complexity of the data. In these techniques, individual variables are typically assumed to belong to parametric distributions, and assumptions about the types of associations among variables are also made. These techniques include, among others:
Non-parametric methods are widely used for studying populations that have a ranked order (such as movie reviews receiving one to five "stars"). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods result in ordinal data.
As non-parametric methods make fewer assumptions, their applicability is much more general than that of the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, due to the reliance on fewer assumptions, non-parametric methods are more robust.
Non-parametric methods are sometimes considered simpler to use and more robust than parametric methods, even when the assumptions of parametric methods are justified. This is due to their more general nature, which may make them less susceptible to misuse and misunderstanding. Non-parametric methods can be considered a conservative choice, as they will work even when their assumptions are not met, whereas parametric methods can produce misleading results when their assumptions are violated.
The wider applicability and increased robustness of non-parametric tests comes at a cost: in cases where a parametric test's assumptions are met, non-parametric tests have less statistical power. In other words, a larger sample size may be required to draw conclusions with the same degree of confidence.
Non-parametric models differ from parametric models in that the model structure is not specified a priori but is instead determined from data. The term non-parametric is not meant to imply that such models completely lack parameters, but that the number and nature of the parameters are flexible and not fixed in advance.
Non-parametric (or distribution-free) inferential statistical methods are mathematical procedures for statistical hypothesis testing which, unlike parametric statistics, make no assumptions about the probability distributions of the variables being assessed. The most frequently used tests include
Early nonparametric statistics include the median (13th century or earlier; use in estimation by Edward Wright, 1599; see Median § History) and the sign test by John Arbuthnot (1710) in analyzing the human sex ratio at birth (see Sign test § History).[5][6]
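Arbuthnot's sign test is simple enough to sketch directly: under the null hypothesis of zero median difference, the number of positive signs is Binomial(n, 1/2). The function name is ours, and zero differences are discarded by convention:

```python
from math import comb

def sign_test_p(diffs):
    """Two-sided sign test: p-value for the null hypothesis that the
    median difference is zero. Ties (zero differences) are discarded."""
    signs = [d for d in diffs if d != 0]
    n = len(signs)
    pos = sum(d > 0 for d in signs)
    k = min(pos, n - pos)
    # two-sided p: twice the smaller binomial tail, capped at 1
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

The test makes no assumption about the shape of the distribution of differences, only that they are independent, which is what makes it distribution-free.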
|
https://en.wikipedia.org/wiki/Non-parametric_statistics
|
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables.
There is a distinction between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort[1]), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the MFAS problem[2]) or fail to produce a result either by signaling a failure or failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem.[3]
In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior and mathematical guarantees, which may depend on the existence of an ideal true random number generator.
As a motivating example, consider the problem of finding an ‘a’ in an array of n elements.
Input: An array of n ≥ 2 elements, in which half are ‘a’s and the other half are ‘b’s.
Output: Find an ‘a’ in the array.
We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.
Las Vegas algorithm:
This algorithm succeeds with probability 1. The number of iterations varies and can be arbitrarily large, but the expected number of iterations is 2, since each probe finds an ‘a’ with probability 1/2.
Since this is constant, the expected run time over many calls is Θ(1) (see Big Theta notation).
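A minimal sketch of this Las Vegas version in Python (the array layout is the one assumed above: half ‘a’s and half ‘b’s):

```python
import random

def find_a_las_vegas(arr):
    """Probe uniformly random positions until an 'a' is found.
    The answer is always correct; only the running time is random."""
    while True:
        i = random.randrange(len(arr))
        if arr[i] == 'a':
            return i
```

Since each probe hits an ‘a’ with probability 1/2, the loop performs 2 iterations in expectation.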
Monte Carlo algorithm:
If an ‘a’ is found, the algorithm succeeds, else the algorithm fails. After k iterations, the probability of finding an ‘a’ is:
Pr[find a] = 1 − (1/2)^k
This algorithm does not guarantee success, but the run time is bounded. The number of iterations is always less than or equal to k. Taking k to be constant, the run time (expected and absolute) is Θ(1).
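The Monte Carlo variant can be sketched the same way; here k bounds the number of probes, so the run time is fixed but the search may fail:

```python
import random

def find_a_monte_carlo(arr, k):
    """Probe at most k random positions; return an index of an 'a',
    or None on failure (probability (1/2)**k for a half-'a' array)."""
    for _ in range(k):
        i = random.randrange(len(arr))
        if arr[i] == 'a':
            return i
    return None  # failed after k probes
```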
Randomized algorithms are particularly useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm (see worst-case complexity and competitive analysis (online algorithm)), such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing.
In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable. The Monte Carlo algorithm (related to the Monte Carlo method for simulation) is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter k, but allows a small probability of error. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via Markov's inequality), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly until a correct answer is obtained.
Computational complexity theory models randomized algorithms as probabilistic Turing machines. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, which is the class of decision problems for which there is an efficient (polynomial-time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with polynomial-time average-case running time whose output is always correct are said to be in ZPP.

The class of problems for which both YES- and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms.
Quicksort was discovered by Tony Hoare in 1959, and subsequently published in 1961.[4] In the same year, Hoare published the quickselect algorithm,[5] which finds the median element of a list in linear expected time. It remained open until 1973 whether a deterministic linear-time algorithm existed.[6]

In 1917, Henry Cabourn Pocklington introduced a randomized algorithm known as Pocklington's algorithm for efficiently finding square roots modulo prime numbers.[7] In 1970, Elwyn Berlekamp introduced a randomized algorithm for efficiently computing the roots of a polynomial over a finite field.[8] In 1977, Robert M. Solovay and Volker Strassen discovered a polynomial-time randomized primality test (i.e., determining the primality of a number). Soon afterwards Michael O. Rabin demonstrated that the 1976 Miller's primality test could also be turned into a polynomial-time randomized algorithm. At that time, no provably polynomial-time deterministic algorithms for primality testing were known.

One of the earliest randomized data structures is the hash table, which was introduced in 1953 by Hans Peter Luhn at IBM.[9] Luhn's hash table used chaining to resolve collisions and was also one of the first applications of linked lists.[9] Subsequently, in 1954, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research introduced linear probing,[9] although Andrey Ershov independently had the same idea in 1957.[9] In 1962, Donald Knuth performed the first correct analysis of linear probing,[9] although the memorandum containing his analysis was not published until much later.[10] The first published analysis was due to Konheim and Weiss in 1966.[11]

Early works on hash tables either assumed access to a fully random hash function or assumed that the keys themselves were random.[9] In 1979, Carter and Wegman introduced universal hash functions,[12] which they showed could be used to implement chained hash tables with constant expected time per operation.

Early work on randomized data structures also extended beyond hash tables. In 1970, Burton Howard Bloom introduced an approximate-membership data structure known as the Bloom filter.[13] In 1989, Raimund Seidel and Cecilia R. Aragon introduced a randomized balanced search tree known as the treap.[14] In the same year, William Pugh introduced another randomized search tree known as the skip list.[15]

Prior to the popularization of randomized algorithms in computer science, Paul Erdős popularized the use of randomized constructions as a mathematical technique for establishing the existence of mathematical objects. This technique has become known as the probabilistic method.[16] Erdős gave his first application of the probabilistic method in 1947, when he used a simple randomized construction to establish the existence of Ramsey graphs.[17] He famously used a more sophisticated randomized algorithm in 1959 to establish the existence of graphs with high girth and chromatic number.[18][16]
Quicksort is a familiar, commonly used algorithm in which randomness can be useful. Many deterministic versions of this algorithm require O(n²) time to sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in O(n log n) time regardless of the characteristics of the input.
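A compact sketch of quicksort with random pivot selection (an out-of-place version for clarity; production implementations partition in place):

```python
import random

def randomized_quicksort(xs):
    """Quicksort with a uniformly random pivot: expected O(n log n)
    comparisons on every input, including already-sorted arrays."""
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```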
In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be bounded from above. This technique is known as randomized incremental construction.[19]
Input: A graph G(V, E)
Output: A cut partitioning the vertices into L and R, with the minimum number of edges between L and R.
Recall that the contraction of two nodes, u and v, in a (multi-)graph yields a new node u′ with edges that are the union of the edges incident on either u or v, except for any edge(s) connecting u and v. Figure 1 gives an example of the contraction of vertices A and B.
After contraction, the resulting graph may have parallel edges, but contains no self-loops.
Karger's[20] basic algorithm:
In each execution of the outer loop, the algorithm repeats the inner loop until only two nodes remain; the corresponding cut is then obtained. The run time of one execution is O(n), where n denotes the number of vertices.
After m executions of the outer loop, we output the minimum cut among all the results. Figure 2 gives an example of one execution of the algorithm; after execution, we get a cut of size 3.
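The contraction loop can be sketched as follows (the union-find bookkeeping and helper names are ours, not from the article; choosing an original edge whose endpoints lie in different super-nodes is equivalent to choosing a uniform edge of the contracted multigraph):

```python
import random

def karger_cut(n, edges):
    """One execution of Karger's contraction algorithm.
    n: number of vertices 0..n-1; edges: list of (u, v) pairs.
    Returns the size of some cut (always >= the true min cut)."""
    parent = list(range(n))

    def find(x):  # union-find representative with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    remaining = n
    while remaining > 2:
        u, v = random.choice(edges)   # rejection-samples the multigraph
        ru, rv = find(u), find(v)
        if ru != rv:                  # contract the two super-nodes
            parent[ru] = rv
            remaining -= 1
    # count original edges crossing the two remaining super-nodes
    return sum(1 for u, v in edges if find(u) != find(v))

def karger_min_cut(n, edges, repeats):
    """Outer loop: repeat the contraction and keep the smallest cut."""
    return min(karger_cut(n, edges) for _ in range(repeats))
```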
Lemma 1 — Let k be the min cut size, and let C = {e1, e2, ..., ek} be the min cut. If, during iteration i, no edge e ∈ C is selected for contraction, then Ci = C.

If G is not connected, then G can be partitioned into L and R without any edge between them, so the min cut in a disconnected graph is 0. Now, assume G is connected. Let V = L ∪ R be the partition of V induced by C: C = {{u, v} ∈ E : u ∈ L, v ∈ R} (well-defined since G is connected). Consider an edge e = {u, v} of C. Initially, u and v are distinct vertices. As long as we pick an edge f ≠ e, u and v do not get merged. Thus, at the end of the algorithm, we have two compound nodes covering the entire graph, one consisting of the vertices of L and the other consisting of the vertices of R. As in figure 2, the size of the min cut is 1, and C = {(A, B)}. If we don't select (A, B) for contraction, we can get the min cut.

Lemma 2 — If G is a multigraph with p vertices whose min cut has size k, then G has at least pk/2 edges.

Because the min cut is k, every vertex v must satisfy degree(v) ≥ k. Therefore, the sum of the degrees is at least pk. But it is well known that the sum of vertex degrees equals 2|E|. The lemma follows.
The probability that the algorithm succeeds is 1 minus the probability that all attempts fail. By independence, the probability that all attempts fail is

$\prod_{i=1}^{m} \Pr(C_i \neq C) = \prod_{i=1}^{m} (1 - \Pr(C_i = C)).$

By Lemma 1, the probability that Ci = C is the probability that no edge of C is selected during iteration i. Consider the inner loop and let Gj denote the graph after j edge contractions, where j ∈ {0, 1, …, n − 3}. Gj has n − j vertices. We use the chain rule of conditional probabilities. The probability that the edge chosen at iteration j is not in C, given that no edge of C has been chosen before, is $1 - \frac{k}{|E(G_j)|}$. Note that Gj still has a min cut of size k, so by Lemma 2 it still has at least $\frac{(n-j)k}{2}$ edges. Thus, $1 - \frac{k}{|E(G_j)|} \geq 1 - \frac{2}{n-j} = \frac{n-j-2}{n-j}$.

So by the chain rule, the probability of finding the min cut C is

$\Pr[C_i = C] \geq \left(\frac{n-2}{n}\right)\left(\frac{n-3}{n-1}\right)\left(\frac{n-4}{n-2}\right)\cdots\left(\frac{3}{5}\right)\left(\frac{2}{4}\right)\left(\frac{1}{3}\right).$

Cancellation gives $\Pr[C_i = C] \geq \frac{2}{n(n-1)}$. Thus the probability that the algorithm succeeds is at least $1 - \left(1 - \frac{2}{n(n-1)}\right)^{m}$. For $m = \frac{n(n-1)}{2}\ln n$, this is at least $1 - \frac{1}{n}$, since $1 - x \leq e^{-x}$. The algorithm therefore finds the min cut with probability at least $1 - \frac{1}{n}$, in time $O(mn) = O(n^{3}\log n)$.
Randomness can be viewed as a resource, like space and time. Derandomization is then the process of removing randomness (or using as little of it as possible).[21][22] It is not currently known whether all algorithms can be derandomized without significantly increasing their running time.[23] For instance, in computational complexity, it is unknown whether P = BPP,[23] i.e., we do not know whether we can take an arbitrary randomized algorithm that runs in polynomial time with a small error probability and derandomize it to run in polynomial time without using randomness.
There are specific methods that can be employed to derandomize particular randomized algorithms:
When the model of computation is restricted to Turing machines, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.
|
https://en.wikipedia.org/wiki/Randomized_algorithm
|
Embedding in machine learning refers to a representation learning technique that maps complex, high-dimensional data into a lower-dimensional vector space of numerical vectors.[1] It also denotes the resulting representation, in which meaningful patterns or relationships are preserved. As a technique, it learns these vectors from data like words, images, or user interactions, differing from manually designed methods such as one-hot encoding.[2] This process reduces complexity and captures key features without needing prior knowledge of the problem area (domain).

For example, in natural language processing (NLP), it might represent "cat" as [0.2, -0.4, 0.7], "dog" as [0.3, -0.5, 0.6], and "car" as [0.8, 0.1, -0.2], placing "cat" and "dog" close together in the space (reflecting their similarity) while "car" is farther away. The resulting embeddings vary by type, including word embeddings for text (e.g., Word2Vec), image embeddings for visual data, and knowledge graph embeddings for knowledge graphs, each tailored to tasks like NLP, computer vision, or recommendation systems.[3] This dual role enhances model efficiency and accuracy by automating feature extraction and revealing latent similarities across diverse applications.
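Using the toy three-dimensional vectors above, the closeness claim can be checked with cosine similarity (the vectors are illustrative, not from any trained model):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

emb = {
    "cat": [0.2, -0.4, 0.7],
    "dog": [0.3, -0.5, 0.6],
    "car": [0.8, 0.1, -0.2],
}
# "cat" is far more similar to "dog" than to "car"
assert cosine(emb["cat"], emb["dog"]) > cosine(emb["cat"], emb["car"])
```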
|
https://en.wikipedia.org/wiki/Embedding_(machine_learning)
|
Brown clustering is a hard hierarchical agglomerative clustering problem based on distributional information, proposed by Peter Brown, William A. Brown, Vincent Della Pietra, Peter V. de Souza, Jennifer Lai, and Robert Mercer.[1] The method, which is based on bigram language models,[2] is typically applied to text, grouping words into clusters that are assumed to be semantically related by virtue of their having been embedded in similar contexts.

In natural language processing, Brown clustering[3] or IBM clustering[4] is a form of hierarchical clustering of words based on the contexts in which they occur, proposed by Peter Brown, William A. Brown, Vincent Della Pietra, Peter de Souza, Jennifer Lai, and Robert Mercer of IBM in the context of language modeling.[1] The intuition behind the method is that a class-based language model (also called a cluster n-gram model[4]), i.e. one where probabilities of words are based on the classes (clusters) of previous words, is used to address the data sparsity problem inherent in language modeling. The method has been successfully used to improve parsing, domain adaptation, and named entity recognition.[5]

Jurafsky and Martin give the example of a flight reservation system that needs to estimate the likelihood of the bigram "to Shanghai" without having seen it in a training set.[4] The system can obtain a good estimate if it can cluster "Shanghai" with other city names, then make its estimate based on the likelihood of phrases such as "to London", "to Beijing" and "to Denver".

Brown groups items (i.e., types) into classes, using a binary merging criterion based on the log-probability of a text under a class-based language model, i.e. a probability model that takes the clustering into account. Thus, average mutual information (AMI) is the optimization function, and merges are chosen such that they incur the least loss in global mutual information.

As a result, the output can be thought of not only as a binary tree[6] but perhaps more helpfully as a sequence of merges, terminating with one big class of all words. This model has the same general form as a hidden Markov model, reduced to bigram probabilities in Brown's solution to the problem.
MI is defined as:
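The equation itself did not survive extraction here. In Brown et al.'s formulation, the objective is the average mutual information between the classes of adjacent words; a standard statement of it (our reconstruction from the literature, not recovered verbatim from this article) is:

```latex
\mathrm{AMI} = \sum_{c_1, c_2} p(c_1, c_2) \, \log_2 \frac{p(c_1, c_2)}{p(c_1)\, p(c_2)}
```

where p(c1, c2) is the probability that a word in class c1 is immediately followed by a word in class c2, p(c) is the marginal probability of class c, and all probabilities are estimated from bigram counts in the corpus.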
Finding the clustering that maximizes the likelihood of the data is computationally expensive.
The approach proposed by Brown et al. is a greedy heuristic.
The work also suggests use of Brown clusterings as a simplistic bigram class-based language model. Given cluster membership indicators ci for the tokens wi in a text, the probability of the word instance wi given the preceding word wi−1 is given by:[4]
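The formula is missing from this extract; the class-based bigram model described here is conventionally written as (standard form from the literature, not recovered verbatim from this article):

```latex
\Pr(w_i \mid w_{i-1}) = \Pr(c_i \mid c_{i-1}) \, \Pr(w_i \mid c_i)
```

i.e. the class-transition probability multiplied by the probability of emitting wi from its class ci.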
This has been criticised[citation needed]as being of limited utility, as it only ever predicts the most common word in any class, and so is restricted to|c|word types; this is reflected in the low relative reduction in perplexity found when using this model and Brown.
When applied to Twitter data, for example, Brown clustering assigned a binary tree path to each word in unlabelled tweets during clustering.[7] The prefixes of these paths are used as new features for the tagger.[7]
Brown clustering has also been explored using trigrams.[8]
Brown clustering as proposed generates a fixed number of output classes. It is important to choose the correct number of classes, which is task-dependent.[9] The cluster memberships of words resulting from Brown clustering can be used as features in a variety of machine-learned natural language processing tasks.[3]
A generalization of the algorithm was published in the AAAI conference in 2016, including a succinct formal definition of the 1992 version and then also the general form.[10]Core to this is the concept that the classes considered for merging do not necessarily represent the final number of classes output, and that altering the number of classes considered for merging directly affects the speed and quality of the final result.
There are no known theoretical guarantees on the greedy heuristic proposed by Brown et al. (as of February 2018). However, the clustering problem can be framed as estimating the parameters of the underlying class-based language model: it is possible to develop a consistent estimator for this model under mild assumptions.[11]
|
https://en.wikipedia.org/wiki/Brown_clustering
|
In computer science, a suffix tree (also called a PAT tree or, in an earlier form, a position tree) is a compressed trie containing all the suffixes of the given text as their keys and positions in the text as their values. Suffix trees allow particularly fast implementations of many important string operations.

The construction of such a tree for the string S takes time and space linear in the length of S. Once constructed, several operations can be performed quickly, such as locating a substring in S, locating a substring if a certain number of mistakes are allowed, and locating matches for a regular expression pattern. Suffix trees also provided one of the first linear-time solutions for the longest common substring problem.[2] These speedups come at a cost: storing a string's suffix tree typically requires significantly more space than storing the string itself.

The concept was first introduced by Weiner (1973).
Rather than the suffix S[i..n], Weiner stored in his trie[3] the prefix identifier for each position, that is, the shortest string starting at i and occurring only once in S. His Algorithm D takes an uncompressed[4] trie for S[k+1..n] and extends it into a trie for S[k..n]. This way, starting from the trivial trie for S[n..n], a trie for S[1..n] can be built by n − 1 successive calls to Algorithm D; however, the overall run time is O(n²). Weiner's Algorithm B maintains several auxiliary data structures to achieve an overall run time linear in the size of the constructed trie. The latter can still be O(n²) nodes, e.g. for S = a^n b^n a^n b^n $. Weiner's Algorithm C finally uses compressed tries to achieve linear overall storage size and run time.[5] Donald Knuth subsequently characterized the latter as "Algorithm of the Year 1973", according to his student Vaughan Pratt.[6] The textbook Aho, Hopcroft & Ullman (1974, Sect. 9.5) reproduced Weiner's results in a simplified and more elegant form, introducing the term position tree.

McCreight (1976) was the first to build a (compressed) trie of all suffixes of S. Although the suffix starting at i is usually longer than the prefix identifier, their path representations in a compressed trie do not differ in size. On the other hand, McCreight could dispense with most of Weiner's auxiliary data structures; only suffix links remained.

Ukkonen (1995) further simplified the construction.[6] He provided the first online construction of suffix trees, now known as Ukkonen's algorithm, with running time that matched the then-fastest algorithms.

These algorithms are all linear-time for a constant-size alphabet, and have worst-case running time of O(n log n) in general.
Farach (1997) gave the first suffix tree construction algorithm that is optimal for all alphabets. In particular, this is the first linear-time algorithm for strings drawn from an alphabet of integers in a polynomial range. Farach's algorithm has become the basis for new algorithms for constructing both suffix trees and suffix arrays, for example, in external memory, compressed, succinct, etc.
The suffix tree for the string S of length n is defined as a tree such that:[7]

If a suffix of S is also the prefix of another suffix, such a tree does not exist for the string. For example, in the string abcbc, the suffix bc is also a prefix of the suffix bcbc. In such a case, the path spelling out bc will not end in a leaf, violating the fifth rule. To fix this problem, S is padded with a terminal symbol not seen in the string (usually denoted $). This ensures that no suffix is a prefix of another, and that there will be n leaf nodes, one for each of the n suffixes of S.[8] Since all internal non-root nodes are branching, there can be at most n − 1 such nodes, and n + (n − 1) + 1 = 2n nodes in total (n leaves, n − 1 internal non-root nodes, 1 root).
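To make the definition and the role of the terminator concrete, here is a naive sketch (ours, for illustration): it builds an uncompressed suffix trie of s + "$" in O(n²) time and space, which already answers substring queries. A real suffix tree additionally compresses unary paths and can be built in linear time, e.g. by Ukkonen's algorithm.

```python
def build_suffix_trie(s):
    """Naive O(n^2) suffix trie: insert every suffix of s + '$'.
    The '$' terminator guarantees no suffix is a prefix of another,
    so every suffix ends at its own leaf."""
    s = s + "$"
    root = {}
    for i in range(len(s)):
        node = root
        for ch in s[i:]:
            node = node.setdefault(ch, {})
    return root

def contains(trie, pattern):
    """A pattern occurs in s iff it labels a path from the root."""
    node = trie
    for ch in pattern:
        if ch not in node:
            return False
        node = node[ch]
    return True
```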
Suffix links are a key feature of older linear-time construction algorithms, although most newer algorithms, which are based on Farach's algorithm, dispense with suffix links. In a complete suffix tree, all internal non-root nodes have a suffix link to another internal node. If the path from the root to a node spells the string χα, where χ is a single character and α is a string (possibly empty), the node has a suffix link to the internal node representing α. See for example the suffix link from the node for ANA to the node for NA in the figure above. Suffix links are also used in some algorithms running on the tree.

A generalized suffix tree is a suffix tree made for a set of strings instead of a single string. It represents all suffixes from this set of strings. Each string must be terminated by a different termination symbol.

A suffix tree for a string S of length n can be built in Θ(n) time, if the letters come from an alphabet of integers in a polynomial range (in particular, this is true for constant-sized alphabets).[9] For larger alphabets, the running time is dominated by first sorting the letters to bring them into a range of size O(n); in general, this takes O(n log n) time.
The costs below are given under the assumption that the alphabet is constant.
Assume that a suffix tree has been built for the string S of length n, or that a generalised suffix tree has been built for the set of strings D = {S1, S2, …, SK} of total length n = n1 + n2 + ⋯ + nK.
You can:
The suffix tree can be prepared for constant-time lowest common ancestor retrieval between nodes in Θ(n) time.[17] One can then also:
Suffix trees can be used to solve a large number of string problems that occur in text editing, free-text search, computational biology, and other application areas.[25] Primary applications include:[25]

Suffix trees are often used in bioinformatics applications, searching for patterns in DNA or protein sequences (which can be viewed as long strings of characters). The ability to search efficiently with mismatches might be considered their greatest strength. Suffix trees are also used in data compression; they can be used to find repeated data, and can be used for the sorting stage of the Burrows–Wheeler transform. Variants of the LZW compression schemes use suffix trees (LZSS). A suffix tree is also used in suffix tree clustering, a data clustering algorithm used in some search engines.[26]

If each node and edge can be represented in Θ(1) space, the entire tree can be represented in Θ(n) space. The total length of all the strings on all of the edges in the tree is O(n²), but each edge can be stored as the position and length of a substring of S, giving a total space usage of Θ(n) computer words. The worst-case space usage of a suffix tree is seen with a Fibonacci word, giving the full 2n nodes.
An important choice when making a suffix tree implementation is the parent-child relationships between nodes. The most common is using linked lists called sibling lists. Each node has a pointer to its first child, and to the next node in the child list it is a part of. Other implementations with efficient running time properties use hash maps, sorted or unsorted arrays (with array doubling), or balanced search trees. We are interested in:

Let σ be the size of the alphabet. Then you have the following costs:[citation needed]

The insertion cost is amortised, and the costs for hashing are given for perfect hashing.

The large amount of information in each edge and node makes the suffix tree very expensive, consuming about 10 to 20 times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of 8 (for an array including LCP values built within 32-bit address space and 8-bit characters). This factor depends on the properties and may reach 2 with usage of 4-byte-wide characters (needed to contain any symbol in some UNIX-like systems, see wchar_t) on 32-bit systems.[citation needed] Researchers have continued to find smaller indexing structures.
Various parallel algorithms to speed up suffix tree construction have been proposed.[27][28][29][30][31] Recently, a practical parallel algorithm for suffix tree construction with O(n) work (sequential time) and O(log² n) span has been developed. The algorithm achieves good parallel scalability on shared-memory multicore machines and can index the human genome (approximately 3 GB) in under 3 minutes using a 40-core machine.[32]
Though linear, the memory usage of a suffix tree is significantly higher than the actual size of the sequence collection. For a large text, construction may require external-memory approaches.

There are theoretical results for constructing suffix trees in external memory. The algorithm by Farach-Colton, Ferragina & Muthukrishnan (2000) is theoretically optimal, with an I/O complexity equal to that of sorting. However, the overall intricacy of this algorithm has so far prevented its practical implementation.[33]
On the other hand, there have been practical works for constructing disk-based suffix trees which scale to (a few) GB/hours. The state-of-the-art methods are TDD,[34] TRELLIS,[35] DiGeST,[36] and B2ST.[37]

TDD and TRELLIS scale up to the entire human genome, resulting in a disk-based suffix tree of a size in the tens of gigabytes.[34][35] However, these methods cannot efficiently handle collections of sequences exceeding 3 GB.[36] DiGeST performs significantly better and is able to handle collections of sequences in the order of 6 GB in about 6 hours.[36]

All these methods can efficiently build suffix trees for the case when the tree does not fit in main memory, but the input does. The most recent method, B2ST,[37] scales to handle inputs that do not fit in main memory. ERA is a recent parallel suffix tree construction method that is significantly faster. ERA can index the entire human genome in 19 minutes on an 8-core desktop computer with 16 GB RAM. On a simple Linux cluster with 16 nodes (4 GB RAM per node), ERA can index the entire human genome in less than 9 minutes.[38]
|
https://en.wikipedia.org/wiki/Suffix_tree
|
In computer science, hash trie can refer to:
|
https://en.wikipedia.org/wiki/Hash_trie
|
A prefix hash tree (PHT) is a distributed data structure that enables more sophisticated queries over a distributed hash table (DHT).[citation needed] The prefix hash tree uses the lookup interface of a DHT to construct a trie-based data structure that is both efficient (updates are doubly logarithmic in the size of the domain being indexed) and resilient (the failure of any given node in a prefix hash tree does not affect the availability of data stored at other nodes).[1][2]
|
https://en.wikipedia.org/wiki/Prefix_hash_tree
|
Aconcurrent hash-trieorCtrie[1][2]is a concurrentthread-safelock-freeimplementation of ahash array mapped trie. It is used to implement the concurrent map abstraction. It has particularly scalable concurrent insert and remove operations and is memory-efficient.[3]It is the first known concurrent data-structure that supportsO(1), atomic,lock-freesnapshots.[2][4]
The Ctrie data structure is a non-blocking concurrenthash array mapped triebased on single-word compare-and-swap instructions in a shared-memory system. It supports concurrent lookup, insert and remove operations. Just like thehash array mapped trie, it uses the entire 32-bit space for hash values thus having low risk of hashcode collisions. Each node may have up to 32 sub-nodes, but to conserve memory, the list of sub-nodes is represented by a 32-bit bitmap where each bit indicates the presence of a branch, followed by a non-sparse array (of pointers to sub-nodes) whose length equals theHamming weightof the bitmap.
Keys are inserted by doing an atomic compare-and-swap operation on the node which needs to be modified. To ensure that updates are done independently and in a proper order, a special indirection node (an I-node) is inserted between each regular node and its subtries[check spelling].
The figure above illustrates the Ctrie insert operation. Trie A is empty - an atomic CAS instruction is used to swap the old node C1 with the new version of C1 which has the new key k1. If the CAS is not successful, the operation is restarted. If the CAS is successful, we obtain the trie B. This procedure is repeated when a new key k2 is added (trie C). If two hashcodes of the keys in the Ctrie collide, as is the case with k2 and k3, the Ctrie must be extended with at least one more level - trie D has a new indirection node I2 with a new node C2 which holds both colliding keys. Further CAS instructions are done on the contents of the indirection nodes I1 and I2 - such CAS instructions can be done independently of each other, thus enabling concurrent updates with less contention.
The Ctrie is defined by the pointer to the root indirection node (or a root I-node). The following types of nodes are defined for the Ctrie:
A C-node is a branching node. It typically contains up to 32 branches, so W above is 5. Each branch may either be a key-value pair (represented with an S-node) or another I-node. To avoid wasting 32 entries in the branching array when some branches may be empty, an integer bitmap is used to denote which bits are full and which are empty. The helper method flagpos is used to inspect the relevant hashcode bits for a given level and extract the value of the corresponding bit in the bitmap to see whether it is set - that is, whether there is a branch at that position. If the bit is set, it also computes the branch's position in the branch array. The formula used to do this is:
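The flagpos computation can be written out as follows; this is the standard HAMT bitmap trick (W = 5, 32-way branching), sketched in Python rather than the paper's notation:

```python
def flagpos(hashcode: int, level: int, bmp: int):
    """Compute (flag, pos) for a C-node bitmap with 32-way branching.

    flag has a single bit set: the branch selected by the 5 relevant
    hashcode bits at this level.  pos is the popcount of the bitmap bits
    below flag, i.e. the branch's index in the dense branch array.
    """
    index = (hashcode >> level) & 0x1F       # next 5 bits of the hashcode
    flag = 1 << index                        # bitmap bit for this branch
    pos = bin(bmp & (flag - 1)).count("1")   # popcount of the lower bits
    return flag, pos
```

If `bmp & flag` is nonzero the branch exists and lives at `array[pos]`; otherwise `pos` is where a new branch would be spliced in.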
Note that the operations treat only the I-nodes as mutable nodes - all other nodes are never changed after being created and added to the Ctrie.
Below is an illustration of the pseudocode of the insert operation:
The inserted and updated methods on nodes return new versions of the C-node with a value inserted or updated at the specified position, respectively. Note that the insert operation above is tail-recursive, so it can be rewritten as a while loop. Other operations are described in more detail in the original paper on Ctries.[1][5]
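The shape of the copy-then-CAS retry loop can be sketched as below. This is a drastic simplification of the real algorithm - a single root I-node, an immutable dict standing in for a C-node, a lock emulating the hardware CAS instruction, and hypothetical helper names - intended only to show the lock-free retry pattern, not the full Ctrie:

```python
import threading

class INode:
    """A mutable cell updated only through compare-and-swap."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()   # stands in for a single-word CAS

    def read(self):
        return self._value

    def cas(self, expected, new):
        # On the JVM this is one CAS instruction; here a lock makes the
        # compare-and-set step atomic for the sketch.
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

def insert(root: INode, key, value):
    """Copy the current node, modify the copy, CAS it in.
    If another thread won the race, the CAS fails and we retry."""
    while True:
        old = root.read()
        new = dict(old)          # 'updated' version of the immutable node
        new[key] = value
        if root.cas(old, new):
            return
```

Because losers of a CAS race simply retry against the new state, no thread ever blocks another - the essence of the lock-free property described above.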
The data-structure has been proven to be correct[1] - Ctrie operations have been shown to have the atomicity, linearizability and lock-freedom properties. The lookup operation can be modified to guarantee wait-freedom.
Ctries have been shown to be comparable in performance with concurrent skip lists,[2][4] concurrent hash tables and similar data structures in terms of the lookup operation, being slightly slower than hash tables and faster than skip lists due to the lower level of indirections. However, they are far more scalable than most concurrent hash tables where insertions are concerned.[1] Most concurrent hash tables are bad at conserving memory - when keys are removed from the hash table, the underlying array is not shrunk. Ctries have the property that the allocated memory is always a function of only the current number of keys in the data-structure.[1]
Ctries have logarithmic complexity bounds of the basic operations, albeit with a low constant factor due to the high branching level (usually 32).
Ctries support a lock-free, linearizable, constant-time snapshot operation,[2] based on the insight obtained from persistent data structures. This is a breakthrough in concurrent data-structure design, since existing concurrent data-structures do not support snapshots. The snapshot operation allows implementing lock-free, linearizable iterator, size and clear operations - existing concurrent data-structures have implementations which either use global locks or are correct only given that there are no concurrent modifications to the data-structure. In particular, Ctries have an O(1) iterator creation operation, an O(1) clear operation, an O(1) duplicate operation and an amortized O(log n) size retrieval operation.
Most concurrent data structures require dynamic memory allocation, and lock-free concurrent data structures rely on garbage collection on most platforms. The current implementation[4] of the Ctrie is written for the JVM, where garbage collection is provided by the platform itself. While it is possible to keep a concurrent memory pool for the nodes shared by all instances of Ctries in an application, or to use reference counting to properly deallocate nodes, the only implementation so far to deal with manual memory management of nodes used in Ctries is the Common Lisp implementation cl-ctrie, which implements several stop-and-copy and mark-and-sweep garbage collection techniques for persistent, memory-mapped storage. Hazard pointers are another possible solution for correct manual management of removed nodes. Such a technique may be viable for managed environments as well, since it could lower the pressure on the GC. A Ctrie implementation in Rust makes use of hazard pointers for this purpose.[6]
A Ctrie implementation[4] for Scala 2.9.x is available on GitHub. It is a mutable thread-safe implementation which ensures progress and supports lock-free, linearizable, O(1) snapshots.
Ctries were first described in 2011 by Aleksandar Prokopec.[1] To quote the author:
Ctrie is a non-blocking concurrent shared-memory hash trie based on single-word compare-and-swap instructions. Insert, lookup and remove operations modifying different parts of the hash trie can be run independent of each other and do not contend. Remove operations ensure that the unneeded memory is freed and that the trie is kept compact.
In 2012, a revised version of the Ctrie data structure was published,[2] simplifying the data structure and introducing an optional constant-time, lock-free, atomic snapshot operation.
In 2018, the closely related Cache-Trie data structure was proposed,[16] which augmented Ctries with an auxiliary, quiescently consistent cache data structure. This "cache" is a hash-table-like entity that makes a best effort to "guess" the appropriate node on a deeper level of the trie, and is maintained in a way such that it is as close as possible to the level where most of the trie's elements are. Cache tries were shown to have amortized expected O(1) complexity for all the basic operations.
|
https://en.wikipedia.org/wiki/Ctrie
|
The HAT-trie is a type of radix trie that uses array nodes to collect individual key–value pairs under radix nodes and hash buckets into an associative array. Unlike a simple hash table, HAT-tries store key–value pairs in an ordered collection. The original inventors are Nikolas Askitis and Ranjan Sinha.[1][2] Askitis & Zobel showed that building and accessing the HAT-trie key/value collection is considerably faster than other sorted-access methods and is comparable to the array hash, which is an unsorted collection.[3] This is due to the cache-friendly nature of the data structure, which attempts to group accesses to data in time and space into the 64-byte cache line size of the modern CPU.
A new HAT-trie starts out as a NULL pointer representing an empty node. The first added key allocates the smallest array node and copies into it the key/value pair, which becomes the first root of the trie. Each subsequent key/value pair is added to the initial array node until a maximum size is reached, after which the node is burst by re-distributing its keys into a hash bucket with new underlying array nodes, one for each occupied hash slot in the bucket. The hash bucket becomes the new root of the trie. The key strings are stored in the array nodes with a length encoding byte prefixed to the key value bytes. The value associated with each key can be stored either in-line alternating with the key strings, or placed in a second array, e.g., memory immediately after and joined to the array node.[4]
Once the trie has grown into its first hash bucket node, the hash bucket distributes new keys according to a hash function of the key value into array nodes contained underneath the bucket node. Keys continue to be added until a maximum number of keys for a particular hash bucket node is reached. The bucket contents are then re-distributed into a new radix node according to the stored key value's first character, which replaces the hash bucket node as the trie root[5] (e.g. see Burstsort[6]). The existing keys and values contained in the hash bucket are each shortened by one character and placed under the new radix node in a set of new array nodes.
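The burst step just described - redistribute a full bucket by first character, shortening each key by one - can be sketched as follows. Real HAT-tries use packed array nodes and hash buckets; plain dicts are used here purely to show the key redistribution:

```python
def burst(bucket: dict) -> dict:
    """Burst a full bucket into a radix node keyed by first character.

    Each stored key is shortened by its first character, which becomes
    the branch label in the new radix node.  A key consumed entirely is
    kept under the empty-string entry of its branch.
    """
    radix = {}
    for key, value in bucket.items():
        head, tail = key[0], key[1:]
        radix.setdefault(head, {})[tail] = value
    return radix
```

Applied recursively as buckets fill up, this is the same burst mechanism used by Burstsort: common prefixes migrate into the radix structure while suffixes stay in small, cache-resident buckets.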
Sorted access to the collection is provided by enumerating keys into a cursor by branching down the radix trie to assemble the leading characters, ending at either a hash bucket or an array node. Pointers to the keys contained in the hash bucket or array node are assembled into an array that is part of the cursor for sorting. Since there is a maximum number of keys in a hash bucket or array node, there is a pre-set fixed limit to the size of the cursor at all points in time. After the keys for the hash bucket or array node are exhausted by get-next (or get-previous) (see Iterator), the cursor is moved into the next radix node entry and the process repeats.[7]
|
https://en.wikipedia.org/wiki/HAT-trie
|
In computer science, the Aho–Corasick algorithm is a string-searching algorithm invented by Alfred V. Aho and Margaret J. Corasick in 1975.[1] It is a kind of dictionary-matching algorithm that locates elements of a finite set of strings (the "dictionary") within an input text. It matches all strings simultaneously. The complexity of the algorithm is linear in the length of the strings plus the length of the searched text plus the number of output matches. Because all matches are found, multiple matches will be returned for one string location if multiple strings from the dictionary match at that location (e.g. dictionary = a, aa, aaa, aaaa and input string is aaaa).
Informally, the algorithm constructs a finite-state machine that resembles a trie with additional links between the various internal nodes. These extra internal links allow fast transitions between failed string matches (e.g. a search for cart in a trie that does not contain cart, but contains art, and thus would fail at the node prefixed by car) to other branches of the trie that share a common suffix (e.g., in the previous case, a branch for attribute might be the best lateral transition). This allows the automaton to transition between string matches without the need for backtracking.
When the string dictionary is known in advance (e.g. a computer virus database), the construction of the automaton can be performed once off-line and the compiled automaton stored for later use. In this case, its run time is linear in the length of the input plus the number of matched entries.
The Aho–Corasick string-matching algorithm formed the basis of the original Unix command fgrep.
Like many inventions at Bell Labs at the time, the Aho–Corasick algorithm was created serendipitously, through a conversation between the two after a seminar by Aho. Corasick was an information scientist who had received her PhD a year earlier at Lehigh University. There, she did her dissertation on securing proprietary data within open systems, through the lens of the commercial, legal, and government structures as well as the technical tools that were emerging at the time.[2] In a similar realm, at Bell Labs, she was building a tool for researchers to learn about current work being done under government contractors by searching government-provided tapes of publications.
For this, she wrote a primitive keyword-by-keyword search program to find chosen keywords within the tapes. Such an algorithm scaled poorly with many keywords, and one of the bibliographers using her algorithm hit the $600 usage limit on the Bell Labs machines before their lengthy search even finished.
She ended up attending a seminar on algorithm design by Aho, and afterwards they got to speaking about her work and this problem. Aho suggested improving the efficiency of the program using the approach of the now Aho–Corasick algorithm, and Corasick designed a new program based on those insights. This lowered the running cost of that bibliographer's search from over $600 to just $25, and Aho–Corasick was born.[3]
In this example, we will consider a dictionary consisting of the following words: {a, ab, bab, bc, bca, c, caa}.
The graph below is the Aho–Corasick data structure constructed from the specified dictionary; each row in the table represents a node in the trie, and the path column indicates the (unique) sequence of characters from the root to the node.
The data structure has one node for every prefix of every string in the dictionary. So if (bca) is in the dictionary, then there will be nodes for (bca), (bc), (b), and (). If a node is in the dictionary then it is a blue node. Otherwise it is a grey node.
There is a black directed "child" arc from each node to a node whose name is found by appending one character. So there is a black arc from (bc) to (bca).
There is a blue directed "suffix" arc from each node to the node that is the longest possible strict suffix of it in the graph. For example, for node (caa), its strict suffixes are (aa), (a) and (). The longest of these that exists in the graph is (a). So there is a blue arc from (caa) to (a). The blue arcs can be computed in linear time by performing a breadth-first search starting from the root (a potential suffix node is always at a shallower level than the visited node). The target for the blue arc of a visited node can be found by following its parent's blue arc to its longest suffix node and searching for a child of the suffix node whose character matches that of the visited node. If the character does not exist as a child, we can find the next longest suffix (following the blue arc again) and then search for the character. We can do this until we either find the character (as a child of a node) or we reach the root (which will always be a suffix of every string).
There is a green "dictionary suffix" arc from each node to the next node in the dictionary that can be reached by following blue arcs. For example, there is a green arc from (bca) to (a) because (a) is the first node in the dictionary (i.e. a blue node) that is reached when following the blue arcs to (ca) and then on to (a). The green arcs can be computed in linear time by repeatedly traversing blue arcs until a blue node is found, and memoizing this information.
At each step, the current node is extended by finding its child; if that does not exist, its suffix's child; if that does not work, its suffix's suffix's child, and so on, finally ending in the root node if nothing matches.
When the algorithm reaches a node, it outputs all the dictionary entries that end at the current character position in the input text. This is done by printing every node reached by following the dictionary suffix links, starting from that node, and continuing until it reaches a node with no dictionary suffix link. In addition, the node itself is printed, if it is a dictionary entry.
Execution on input string abccab yields the following steps:
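The construction and traversal described above can be reproduced with a compact implementation (a sketch, not the original fgrep code). The black child arcs become the trie, the blue suffix arcs become the `fail` array computed by BFS, and the green dictionary-suffix arcs are folded in by merging output sets along the fail links:

```python
from collections import deque

def build(words):
    """Trie with suffix ('blue') links; dictionary-suffix ('green')
    outputs are pre-merged into each node's output set."""
    goto, fail, out = [{}], [0], [set()]
    for w in words:                       # black child arcs
        s = 0
        for ch in w:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(w)
    queue = deque(goto[0].values())       # BFS: suffix targets are shallower
    while queue:
        r = queue.popleft()
        for ch, s in goto[r].items():
            queue.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]               # follow blue arcs upward
            fail[s] = goto[f][ch] if ch in goto[f] and goto[f][ch] != s else 0
            out[s] |= out[fail[s]]        # merge dictionary-suffix outputs
    return goto, fail, out

def search(text, words):
    goto, fail, out = build(words)
    s, matches = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]                   # failed match: take the suffix link
        s = goto[s].get(ch, 0)
        matches += [(i - len(w) + 1, w) for w in out[s]]
    return sorted(matches)
```

Running it on the article's example dictionary and the input abccab reports each match as (start position, word): a at 0, ab at 0, bc at 1, c at 2, c at 3, a at 4 and ab at 4.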
The original Aho–Corasick algorithm assumes that the set of search strings is fixed. It does not directly apply to applications in which new search strings are added during application of the algorithm. An example is an interactive indexing program, in which the user goes through the text and highlights new words or phrases to index as they see them. Bertrand Meyer introduced an incremental version of the algorithm in which the search string set can be incrementally extended during the search, retaining the algorithmic complexity of the original.[4]
|
https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm
|
In probability theory, a branching random walk is a stochastic process that generalizes both the concept of a random walk and of a branching process. At every generation (a point of discrete time), a branching random walk's value is a set of elements that are located in some linear space, such as the real line. Each element of a given generation can have several descendants in the next generation. The location of any descendant is the sum of its parent's location and a random variable.
This process is a spatial expansion of the Galton–Watson process.[1] Its continuous equivalent is called branching Brownian motion.[2][3]
An example of a branching random walk can be constructed where the branching process generates exactly two descendants for each element, a binary branching random walk. Given the initial condition that Xϵ = 0, we suppose that X1 and X2 are the two children of Xϵ. Further, we suppose that they are independent N(0, 1) random variables. Consequently, in generation 2, the random variables X1,1 and X1,2 are each the sum of X1 and a N(0, 1) random variable. In the next generation, the random variables X1,2,1 and X1,2,2 are each the sum of X1,2 and a N(0, 1) random variable. The same construction produces the values at successive times.
Each lineage in the infinite "genealogical tree" produced by this process, such as the sequence Xϵ, X1, X1,2, X1,2,2, ..., forms a conventional random walk.
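The binary construction above is straightforward to simulate; the function below (an illustrative sketch) tracks only positions, not the full genealogy:

```python
import random

def binary_branching_random_walk(generations, seed=None):
    """Simulate the binary branching random walk described above: each
    particle has two children, each displaced from its parent by an
    independent N(0, 1) increment.  Returns one position list per
    generation, so generation n holds 2**n positions."""
    rng = random.Random(seed)
    positions = [0.0]                     # generation 0: X_eps = 0
    history = [positions]
    for _ in range(generations):
        positions = [x + rng.gauss(0, 1)
                     for x in positions for _ in range(2)]
        history.append(positions)
    return history

gens = binary_branching_random_walk(3, seed=42)
```

Reading off one child per generation from `gens` (e.g. always the second child) traces a single lineage, which is an ordinary Gaussian random walk.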
This probability-related article is a stub. You can help Wikipedia by expanding it.
|
https://en.wikipedia.org/wiki/Branching_random_walk
|
Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas).[2] The traditional mathematical formulation of Brownian motion is that of the Wiener process, which is often called Brownian motion, even in mathematical sources.
This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem).[3]
This motion is named after the Scottish botanist Robert Brown, who first described the phenomenon in 1827 while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré. Then, in 1905, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions.[4]
The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist, and was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter".[5]
The many-body interactions that yield the Brownian pattern cannot be solved by a model accounting for every involved molecule. Consequently, only probabilistic models applied to molecular populations can be employed to describe it.[6] Two such models of statistical mechanics, due to Einstein and Smoluchowski, are presented below. Another, purely probabilistic class of models is the class of stochastic process models. There exist sequences of both simpler and more complicated stochastic processes which converge (in the limit) to Brownian motion (see random walk and Donsker's theorem).[7][8]
The Roman philosopher-poet Lucretius' scientific poem On the Nature of Things (c. 60 BC) has a remarkable description of the motion of dust particles in verses 113–140 from Book II. He uses this as a proof of the existence of atoms:
Observe what happens when sunbeams are admitted into a building and shed light on its shadowy places. You will see a multitude of tiny particles mingling in a multitude of ways... their dancing is an actual indication of underlying movements of matter that are hidden from our sight... It originates with the atoms which move of themselves [i.e., spontaneously]. Then those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against slightly larger bodies. So the movement mounts up from the atoms and gradually emerges to the level of our senses so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible.
Although the mingling, tumbling motion of dust particles is caused largely by air currents, the glittering, jiggling motion of small dust particles is caused chiefly by true Brownian dynamics; Lucretius "perfectly describes and explains the Brownian movement by a wrong example".[10]
While Jan Ingenhousz described the irregular motion of coal dust particles on the surface of alcohol in 1785, the discovery of this phenomenon is often credited to the botanist Robert Brown in 1827. Brown was studying pollen grains of the plant Clarkia pulchella suspended in water under a microscope when he observed minute particles, ejected by the pollen grains, executing a jittery motion. By repeating the experiment with particles of inorganic matter he was able to rule out that the motion was life-related, although its origin was yet to be explained.
The mathematics of much of stochastic analysis, including the mathematics of Brownian motion, was introduced by Louis Bachelier in 1900 in his PhD thesis "The theory of speculation", in which he presented an analysis of the stock and option markets. However, this work was largely unknown until the 1950s.[11][12]: 33
Albert Einstein (in one of his 1905 papers) provided an explanation of Brownian motion in terms of atoms and molecules at a time when their existence was still debated. Einstein proved the relation between the probability distribution of a Brownian particle and the diffusion equation.[12]: 33 These equations describing Brownian motion were subsequently verified by the experimental work of Jean Baptiste Perrin in 1908, leading to his Nobel prize.[13] Norbert Wiener gave the first complete and rigorous mathematical analysis in 1923, leading to the underlying mathematical concept being called a Wiener process.[12]
The instantaneous velocity of the Brownian motion can be defined as v = Δx/Δt, when Δt << τ, where τ is the momentum relaxation time.
In 2010, the instantaneous velocity of a Brownian particle (a glass microsphere trapped in air with optical tweezers) was measured successfully. The velocity data verified the Maxwell–Boltzmann velocity distribution, and the equipartition theorem for a Brownian particle.[14]
There are two parts to Einstein's theory: the first part consists in the formulation of a diffusion equation for Brownian particles, in which the diffusion coefficient is related to the mean squared displacement of a Brownian particle, while the second part consists in relating the diffusion coefficient to measurable physical quantities.[15] In this way Einstein was able to determine the size of atoms, and how many atoms there are in a mole, or the molecular weight in grams, of a gas.[16] In accordance with Avogadro's law, the molar volume is the same for all ideal gases: 22.414 liters at standard temperature and pressure. The number of atoms contained in this volume is referred to as the Avogadro number, and the determination of this number is tantamount to the knowledge of the mass of an atom, since the latter is obtained by dividing the molar mass of the gas by the Avogadro constant.
The first part of Einstein's argument was to determine how far a Brownian particle travels in a given time interval.[4] Classical mechanics is unable to determine this distance because of the enormous number of bombardments a Brownian particle will undergo, roughly of the order of 10¹⁴ collisions per second.[2]
He regarded the increment of particle positions in time $\tau$ in a one-dimensional ($x$) space (with the coordinates chosen so that the origin lies at the initial position of the particle) as a random variable $q$ with some probability density function $\varphi(q)$, that is, $\varphi(q)$ is the probability density for a jump of magnitude $q$: the probability density of the particle incrementing its position from $x$ to $x+q$ in the time interval $\tau$. Further, assuming conservation of particle number, he expanded the number density $\rho(x,t+\tau)$ (number of particles per unit volume around $x$) at time $t+\tau$ in a Taylor series,
$$\begin{aligned}\rho(x,t+\tau)={}&\rho(x,t)+\tau{\frac{\partial\rho(x,t)}{\partial t}}+\cdots\\={}&\int_{-\infty}^{\infty}\rho(x-q,t)\,\varphi(q)\,dq=\mathbb{E}_{q}{\left[\rho(x-q,t)\right]}\\={}&\rho(x,t)\int_{-\infty}^{\infty}\varphi(q)\,dq-{\frac{\partial\rho}{\partial x}}\int_{-\infty}^{\infty}q\,\varphi(q)\,dq+{\frac{\partial^{2}\rho}{\partial x^{2}}}\int_{-\infty}^{\infty}{\frac{q^{2}}{2}}\varphi(q)\,dq+\cdots\\={}&\rho(x,t)\cdot 1-0+{\frac{\partial^{2}\rho}{\partial x^{2}}}\int_{-\infty}^{\infty}{\frac{q^{2}}{2}}\varphi(q)\,dq+\cdots\end{aligned}$$
where the second equality is by definition of $\varphi$. The integral in the first term is equal to one by the definition of probability, and the second and other even terms (i.e. first and other odd moments) vanish because of space symmetry.
What is left gives rise to the following relation:
$$\frac{\partial\rho}{\partial t}=\frac{\partial^{2}\rho}{\partial x^{2}}\cdot\int_{-\infty}^{\infty}\frac{q^{2}}{2\tau}\varphi(q)\,dq+\text{higher-order even moments,}$$
where the coefficient multiplying the Laplacian, the second moment of the displacement $q$ per time $\tau$, is interpreted as the mass diffusivity $D$:
$$D=\int_{-\infty}^{\infty}\frac{q^{2}}{2\tau}\varphi(q)\,dq.$$
Then the density of Brownian particles $\rho$ at point $x$ at time $t$ satisfies the diffusion equation:
$$\frac{\partial\rho}{\partial t}=D\cdot\frac{\partial^{2}\rho}{\partial x^{2}}.$$
Assuming that $N$ particles start from the origin at the initial time $t=0$, the diffusion equation has the solution
$$\rho(x,t)=\frac{N}{\sqrt{4\pi Dt}}\exp\left(-\frac{x^{2}}{4Dt}\right).$$
This expression (which is a normal distribution with mean $\mu=0$ and variance $\sigma^{2}=2Dt$, usually called Brownian motion $B_{t}$) allowed Einstein to calculate the moments directly. The first moment is seen to vanish, meaning that the Brownian particle is equally likely to move to the left as it is to move to the right. The second moment is, however, non-vanishing, being given by
$$\mathbb{E}\left[x^{2}\right]=2Dt.$$
This equation expresses the mean squared displacement in terms of the time elapsed and the diffusivity. From this expression Einstein argued that the displacement of a Brownian particle is not proportional to the elapsed time, but rather to its square root.[15] His argument is based on a conceptual switch from the "ensemble" of Brownian particles to the "single" Brownian particle: we can speak of the relative number of particles at a single instant just as well as of the time it takes a Brownian particle to reach a given point.[17]
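The mean-squared-displacement law is easy to check numerically. The Monte Carlo sketch below (illustrative parameter values, arbitrary units) approximates each particle's path as a sum of independent Gaussian increments with variance 2D per unit time step, and averages the squared displacement over many particles:

```python
import random

def mean_squared_displacement(n_particles, n_steps, D, seed=0):
    """Monte Carlo estimate of E[x^2] after n_steps unit-time steps.
    Each step is an independent N(0, 2*D) increment, so theory
    predicts E[x^2] = 2*D*t with t = n_steps."""
    rng = random.Random(seed)
    sigma = (2 * D) ** 0.5                 # std-dev of one unit-time step
    total = 0.0
    for _ in range(n_particles):
        x = sum(rng.gauss(0, sigma) for _ in range(n_steps))
        total += x * x
    return total / n_particles

msd = mean_squared_displacement(5000, 100, D=0.5)
# theory predicts 2*D*t = 2 * 0.5 * 100 = 100; the estimate is close
```

Doubling `n_steps` roughly doubles the estimate, while the root-mean-square displacement grows only by a factor of about √2, which is the square-root scaling Einstein pointed out.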
The second part of Einstein's theory relates the diffusion constant to physically measurable quantities, such as the mean squared displacement of a particle in a given time interval. This result enables the experimental determination of the Avogadro number and therefore the size of molecules. Einstein analyzed a dynamic equilibrium being established between opposing forces. The beauty of his argument is that the final result does not depend upon which forces are involved in setting up the dynamic equilibrium.
In his original treatment, Einstein considered an osmotic pressure experiment, but the same conclusion can be reached in other ways.
Consider, for instance, particles suspended in a viscous fluid in a gravitational field. Gravity tends to make the particles settle, whereas diffusion acts to homogenize them, driving them into regions of smaller concentration. Under the action of gravity, a particle acquires a downward speed of $v=\mu mg$, where $m$ is the mass of the particle, $g$ is the acceleration due to gravity, and $\mu$ is the particle's mobility in the fluid. George Stokes had shown that the mobility for a spherical particle with radius $r$ is $\mu={\tfrac{1}{6\pi\eta r}}$, where $\eta$ is the dynamic viscosity of the fluid. In a state of dynamic equilibrium, and under the hypothesis of isothermal fluid, the particles are distributed according to the barometric distribution
$$\rho=\rho_{o}\exp\left(-\frac{mgh}{k_{\text{B}}T}\right),$$
where $\rho-\rho_{o}$ is the difference in density of particles separated by a height difference $h=z-z_{o}$, $k_{\text{B}}$ is the Boltzmann constant (the ratio of the universal gas constant $R$ to the Avogadro constant $N_{\text{A}}$), and $T$ is the absolute temperature.
Dynamic equilibrium is established because the more that particles are pulled down by gravity, the greater the tendency for the particles to migrate to regions of lower concentration. The flux is given by Fick's law,
$$J=-D\frac{d\rho}{dh},$$
where $J=\rho v$. Introducing the formula for $\rho$, we find that
$$v=\frac{Dmg}{k_{\text{B}}T}.$$
In a state of dynamical equilibrium, this speed must also be equal to $v=\mu mg$. Both expressions for $v$ are proportional to $mg$, reflecting that the derivation is independent of the type of forces considered. Similarly, one can derive an equivalent formula for identical charged particles of charge $q$ in a uniform electric field of magnitude $E$, where $mg$ is replaced with the electrostatic force $qE$. Equating these two expressions yields the Einstein relation for the diffusivity, independent of $mg$ or $qE$ or other such forces:
$$\frac{\mathbb{E}\left[x^{2}\right]}{2t}=D=\mu k_{\text{B}}T=\frac{\mu RT}{N_{\text{A}}}=\frac{RT}{6\pi\eta rN_{\text{A}}}.$$
Here the first equality follows from the first part of Einstein's theory, the third equality follows from the definition of the Boltzmann constant as $k_{\text{B}}=R/N_{\text{A}}$, and the fourth equality follows from Stokes's formula for the mobility. By measuring the mean squared displacement over a time interval along with the universal gas constant $R$, the temperature $T$, the viscosity $\eta$, and the particle radius $r$, the Avogadro constant $N_{\text{A}}$ can be determined.
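To get a feel for the magnitudes in the Einstein relation, one can plug in textbook values for a sphere of the size Perrin worked with. These are illustrative round numbers (room-temperature water, a 1-micron-diameter particle), not data from the experiments described here:

```python
import math

# Stokes-Einstein diffusivity D = k_B * T / (6 * pi * eta * r)
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 298.0               # temperature, K (room temperature)
eta = 8.9e-4            # dynamic viscosity of water, Pa*s
r = 0.5e-6              # particle radius, m (1-micron-diameter sphere)

D = k_B * T / (6 * math.pi * eta * r)
# D comes out around 5e-13 m^2/s, so the root-mean-square displacement
# sqrt(2*D*t) after one second is roughly a micron: large enough to see
# under a microscope, which is what made such measurements feasible.
```

Inverting the same relation, a measured mean squared displacement together with R, T, η and r yields the Avogadro constant, which is how Perrin used it.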
The type of dynamical equilibrium proposed by Einstein was not new. It had been pointed out previously by J. J. Thomson[18] in his series of lectures at Yale University in May 1903 that the dynamic equilibrium between the velocity generated by a concentration gradient given by Fick's law and the velocity due to the variation of the partial pressure caused when ions are set in motion "gives us a method of determining Avogadro's constant which is independent of any hypothesis as to the shape or size of molecules, or of the way in which they act upon each other".[18]
An identical expression to Einstein's formula for the diffusion coefficient was also found by Walther Nernst in 1888,[19] in which he expressed the diffusion coefficient as the ratio of the osmotic pressure to the ratio of the frictional force and the velocity to which it gives rise. The former was equated to the law of van 't Hoff while the latter was given by Stokes's law. He writes $k'=p_{o}/k$ for the diffusion coefficient $k'$, where $p_{o}$ is the osmotic pressure and $k$ is the ratio of the frictional force to the molecular viscosity, which he assumes is given by Stokes's formula for the viscosity. Introducing the ideal gas law per unit volume for the osmotic pressure, the formula becomes identical to that of Einstein's.[20] The use of Stokes's law in Nernst's case, as well as in Einstein and Smoluchowski, is not strictly applicable since it does not apply to the case where the radius of the sphere is small in comparison with the mean free path.[21]
Confirming Einstein's formula experimentally proved difficult.
Initial attempts byTheodor Svedbergin 1906 and 1907 were critiqued by Einstein and by Perrin as not measuring a quantity directly comparable to the formula.Victor Henriin 1908 took cinematographic shots through a microscope and found quantitative disagreement with the formula but again the analysis was uncertain.[22]Einstein's predictions were finally confirmed in a series of experiments carried out by Chaudesaigues in 1908 and Perrin in 1909.[23][24]The confirmation of Einstein's theory constituted empirical progress for thekinetic theory of heat. In essence, Einstein showed that the motion can be predicted directly from the kinetic model ofthermal equilibrium. The importance of the theory lay in the fact that it confirmed the kinetic theory's account of thesecond law of thermodynamicsas being an essentially statistical law.[25]
Smoluchowski's theory of Brownian motion[26]starts from the same premise as that of Einstein and derives the same probability distributionρ(x,t)for the displacement of a Brownian particle along thexin timet. He therefore gets the same expression for the mean squared displacement:E[(Δx)2]{\displaystyle \mathbb {E} {\left[(\Delta x)^{2}\right]}}.However, when he relates it to a particle of massmmoving at a velocityuwhich is the result of a frictional force governed by Stokes's law, he findsE[(Δx)2]=2Dt=t3281mu2πμa=t642712mu23πμa,{\displaystyle \mathbb {E} {\left[(\Delta x)^{2}\right]}=2Dt=t{\frac {32}{81}}{\frac {mu^{2}}{\pi \mu a}}=t{\frac {64}{27}}{\frac {{\frac {1}{2}}mu^{2}}{3\pi \mu a}},}whereμis the viscosity coefficient, andais the radius of the particle. Associating the kinetic energymu2/2{\displaystyle mu^{2}/2}with the thermal energyRT/N, the expression for the mean squared displacement is64/27times that found by Einstein. The fraction 27/64 was commented on byArnold Sommerfeldin his necrology on Smoluchowski: "The numerical coefficient of Einstein, which differs from Smoluchowski by 27/64 can only be put in doubt."[27]
Smoluchowski[28]attempts to answer the question of why a Brownian particle should be displaced by bombardments of smaller particles when the probabilities for striking it in the forward and rear directions are equal.
If the probability ofmgains andn−mlosses follows abinomial distribution,Pm,n=(nm)2−n,{\displaystyle P_{m,n}={\binom {n}{m}}2^{-n},}with equala prioriprobabilities of 1/2, the mean total gain isE[2m−n]=∑m=n2n(2m−n)Pm,n=nn!2n+1[(n2)!]2.{\displaystyle \mathbb {E} {\left[2m-n\right]}=\sum _{m={\frac {n}{2}}}^{n}(2m-n)P_{m,n}={\frac {nn!}{2^{n+1}\left[\left({\frac {n}{2}}\right)!\right]^{2}}}.}
Ifnis large enough so that Stirling's approximation can be used in the formn!≈(ne)n2πn,{\displaystyle n!\approx \left({\frac {n}{e}}\right)^{n}{\sqrt {2\pi n}},}then the expected total gain will be[citation needed]E[2m−n]≈n2π,{\displaystyle \mathbb {E} {\left[2m-n\right]}\approx {\sqrt {\frac {n}{2\pi }}},}showing that it increases as the square root of the total population.
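The closed-form mean gain and its Stirling approximation are easy to check numerically. The following sketch evaluates the exact binomial sum, the factorial formula, and the square-root approximation for an even n:

```python
import math

def mean_gain_exact(n):
    # E[2m - n] over m from n/2 to n with binomial weights C(n, m) / 2^n
    return sum((2 * m - n) * math.comb(n, m) for m in range(n // 2, n + 1)) / 2 ** n

def mean_gain_closed(n):
    # n * n! / (2^(n+1) * ((n/2)!)^2), the closed form quoted above (even n)
    return n * math.factorial(n) / (2 ** (n + 1) * math.factorial(n // 2) ** 2)

n = 100
exact = mean_gain_exact(n)
closed = mean_gain_closed(n)
approx = math.sqrt(n / (2 * math.pi))  # Stirling approximation
print(exact, closed, approx)
```

For n = 100 the exact value and the closed form agree to machine precision, and the Stirling approximation is already within about 0.3%, illustrating the square-root growth of the expected gain.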
Suppose that a Brownian particle of massMis surrounded by lighter particles of massmwhich are traveling at a speedu. Then, reasons Smoluchowski, in any collision between a surrounding particle and the Brownian particle, the velocity transmitted to the latter will bemu/M. This ratio is of the order of 10^−7 cm/s. But we also have to take into consideration that in a gas there will be more than 10^16 collisions in a second, and even more in a liquid, where we expect 10^20 collisions in one second. Some of these collisions will tend to accelerate the Brownian particle; others will tend to decelerate it. If there is a mean excess of one kind of collision or the other of the order of 10^8 to 10^10 collisions in one second, then the velocity of the Brownian particle may be anywhere between10–1000 cm/s. Thus, even though there are equal probabilities for forward and backward collisions, there will be a net tendency to keep the Brownian particle in motion, just as the ballot theorem predicts.
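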
These orders of magnitude are not exact because they don't take into consideration the velocity of the Brownian particle,U, which depends on the collisions that tend to accelerate and decelerate it. The largerUis, the greater will be the collisions that will retard it so that the velocity of a Brownian particle can never increase without limit. Could such a process occur, it would be tantamount to a perpetual motion of the second type. And since equipartition of energy applies, the kinetic energy of the Brownian particle,MU2/2{\displaystyle MU^{2}/2},will be equal, on the average, to the kinetic energy of the surrounding fluid particle,mu2/2{\displaystyle mu^{2}/2}.
In 1906 Smoluchowski published a one-dimensional model to describe a particle undergoing Brownian motion.[29]The model assumes collisions withM≫mwhereMis the test particle's mass andmthe mass of one of the individual particles composing the fluid. It is assumed that the particle collisions are confined to one dimension and that it is equally probable for the test particle to be hit from the left as from the right. It is also assumed that every collision always imparts the same magnitude ofΔV. IfNRis the number of collisions from the right andNLthe number of collisions from the left then afterNcollisions the particle's velocity will have changed byΔV(2NR−N). Themultiplicityis then simply given by:(NNR)=N!NR!(N−NR)!{\displaystyle {\binom {N}{N_{\text{R}}}}={\frac {N!}{N_{\text{R}}!(N-N_{\text{R}})!}}}and the total number of possible states is given by2N. Therefore, the probability of the particle being hit from the rightNRtimes is:PN(NR)=N!2NNR!(N−NR)!{\displaystyle P_{N}(N_{\text{R}})={\frac {N!}{2^{N}N_{\text{R}}!(N-N_{\text{R}})!}}}
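A quick numerical check of this one-dimensional model (the parameter values are illustrative): the expected net velocity change ΔV(2N_R − N) is zero, but its variance grows linearly in N, which is the diffusive signature.

```python
import math

def p_right(N, NR):
    # probability of NR hits from the right out of N equally likely collisions
    return math.comb(N, NR) / 2 ** N

N, dV = 50, 1.0
probs = [p_right(N, NR) for NR in range(N + 1)]
mean_dv = sum(dV * (2 * NR - N) * p for NR, p in enumerate(probs))
var_dv = sum((dV * (2 * NR - N)) ** 2 * p for NR, p in enumerate(probs))
print(sum(probs), mean_dv, var_dv)  # 1.0, 0.0, and N * dV^2 = 50.0
```

The variance N·ΔV² follows because 2N_R − N has variance 4·Var(N_R) = N for a fair binomial, so the root-mean-square velocity change grows like √N even though the mean change vanishes.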
As a result of its simplicity, Smoluchowski's 1D model can only qualitatively describe Brownian motion. For a realistic particle undergoing Brownian motion in a fluid, many of the assumptions don't apply. For example, the assumption that, on average, an equal number of collisions occurs from the right as from the left breaks down once the particle is in motion. Also, in a realistic situation there would be a distribution of different possibleΔVs instead of always just one.
Thediffusion equationyields an approximation of the time evolution of theprobability density functionassociated with the position of the particle undergoing a Brownian movement under the physical definition. The approximation is valid on timescales much longer than the timescale of individual atomic collisions, since it does not include a term to describe the acceleration of particles during collision. The time evolution of the position of the Brownian particle over all time scales is described by theLangevin equation, an equation that involves a random force field representing the effect of thethermal fluctuationsof the solvent on the particle.[14]At longer time scales, where acceleration is negligible, individual particle dynamics can be approximated usingBrownian dynamicsin place ofLangevin dynamics.
Instellar dynamics, a massive body (star,black hole, etc.) can experience Brownian motion as it responds togravitational forcesfrom surrounding stars.[30]The rms velocityVof the massive object, of massM, is related to the rms velocityv⋆{\displaystyle v_{\star }}of the background stars byMV2≈mv⋆2{\displaystyle MV^{2}\approx mv_{\star }^{2}}wherem≪M{\displaystyle m\ll M}is the mass of the background stars. The gravitational force from the massive object causes nearby stars to move faster than they otherwise would, increasing bothv⋆{\displaystyle v_{\star }}andV.[30]The Brownian velocity ofSgr A*, thesupermassive black holeat the center of theMilky Way galaxy, is predicted from this formula to be less than 1 km s−1.[31]
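As a rough illustration of the formula MV² ≈ mv⋆², the following back-of-the-envelope sketch plugs in assumed, order-of-magnitude values for Sgr A* and the surrounding stars (not figures from the cited studies):

```python
import math

# M V^2 ≈ m v_star^2  =>  V ≈ v_star * sqrt(m / M).
# The numbers below are rough assumed values, not figures from the cited papers.
m_star = 1.0      # background star mass, solar masses (assumed)
M_bh = 4.0e6      # Sgr A* mass, solar masses
v_star = 100.0    # rms velocity of nearby stars, km/s (assumed)

V = v_star * math.sqrt(m_star / M_bh)
print(f"V = {V:.3f} km/s")  # well below 1 km/s, consistent with the prediction
```

The mass ratio of order 10⁶ suppresses the Brownian velocity by a factor of order 10³ relative to the stellar velocities, which is why the predicted wander of the black hole is so small.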
Inmathematics, Brownian motion is described by theWiener process, a continuous-timestochastic processnamed in honor ofNorbert Wiener. It is one of the best knownLévy processes(càdlàgstochastic processes withstationaryindependent increments) and occurs frequently in pure and applied mathematics,economicsandphysics.
The Wiener processWtis characterized by four facts:[32]
N(μ,σ2){\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}denotes thenormal distributionwithexpected valueμandvarianceσ2. The condition that it has independent increments means that if0≤s1<t1≤s2<t2{\displaystyle 0\leq s_{1}<t_{1}\leq s_{2}<t_{2}}thenWt1−Ws1{\displaystyle W_{t_{1}}-W_{s_{1}}}andWt2−Ws2{\displaystyle W_{t_{2}}-W_{s_{2}}}are independent random variables. In addition, for somefiltrationFt{\displaystyle {\mathcal {F}}_{t}},Wt{\displaystyle W_{t}}isFt{\displaystyle {\mathcal {F}}_{t}}measurablefor allt≥0{\displaystyle t\geq 0}.
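These defining properties translate directly into a simulation: a Wiener path can be built as a cumulative sum of independent N(0, dt) increments. The sketch below (an informal check, not a rigorous test) verifies that Var(W_T) ≈ T:

```python
import math
import random

def wiener_path(T=1.0, n=200, seed=1):
    # W_0 = 0; increments over [t, t + dt] are independent N(0, dt)
    rng = random.Random(seed)
    dt = T / n
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += rng.gauss(0.0, math.sqrt(dt))  # stationary Gaussian increment
        path.append(w)
    return path

# Var(W_T) should equal T: average W_T^2 over many independent paths
T = 1.0
endpoints = [wiener_path(T=T, n=200, seed=s)[-1] for s in range(2000)]
mean_est = sum(endpoints) / len(endpoints)
var_est = sum(w * w for w in endpoints) / len(endpoints)
print(mean_est, var_est)  # near 0 and near T = 1
```

Because the increments are exactly Gaussian at every step, this discrete construction samples the Wiener process at the grid times without discretization bias; only the Monte Carlo averaging introduces error.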
An alternative characterisation of the Wiener process is the so-calledLévy characterisationthat says that the Wiener process is an almost surely continuousmartingalewithW0= 0andquadratic variation[Wt,Wt]=t{\displaystyle [W_{t},W_{t}]=t}.
A third characterisation is that the Wiener process has a spectral representation as a sine series whose coefficients are independentN(0,1){\displaystyle {\mathcal {N}}(0,1)}random variables. This representation can be obtained using theKosambi–Karhunen–Loève theorem.
The Wiener process can be constructed as thescaling limitof arandom walk, or other discrete-time stochastic processes with stationary independent increments. This is known asDonsker's theorem. Like the random walk, the Wiener process is recurrent in one or two dimensions (meaning that it returns almost surely to any fixedneighborhoodof the origin infinitely often) whereas it is not recurrent in dimensions three and higher. Unlike the random walk, it isscale invariant.
A d-dimensionalGaussian free fieldhas been described as "a d-dimensional-time analog of Brownian motion."[33]
The Brownian motion can be modeled by arandom walk.[34]
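A minimal sketch of this modeling: endpoints of a ±1 random walk of n steps, scaled by n^(−1/2), should look approximately standard normal for large n, as Donsker's theorem predicts.

```python
import random

def scaled_endpoint(n, rng):
    # endpoint of a +/-1 simple random walk, scaled by n^(-1/2)
    return sum(rng.choice((-1, 1)) for _ in range(n)) / n ** 0.5

rng = random.Random(7)
samples = [scaled_endpoint(400, rng) for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples)
print(mean, var)  # approximately 0 and 1: the standard normal limit
```

The same scaling applied to the whole path, not just the endpoint, converges (in distribution) to a Brownian motion trajectory, which is the content of Donsker's invariance principle mentioned above.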
In the general case, Brownian motion is aMarkov processand described bystochastic integral equations.[35]
The French mathematicianPaul Lévyproved the following theorem, which gives a necessary and sufficient condition for a continuousRn-valued stochastic processXto actually ben-dimensional Brownian motion. Hence, Lévy's condition can actually be used as an alternative definition of Brownian motion.
LetX= (X1, ...,Xn)be a continuous stochastic process on aprobability space(Ω, Σ,P)taking values inRn. Then the following are equivalent:
The spectral content of a stochastic processXt{\displaystyle X_{t}}can be found from thepower spectral density, formally defined asS(ω)=limT→∞1TE{|∫0TeiωtXtdt|2},{\displaystyle S(\omega )=\lim _{T\to \infty }{\frac {1}{T}}\mathbb {E} \left\{\left|\int _{0}^{T}e^{i\omega t}X_{t}dt\right|^{2}\right\},}whereE{\displaystyle \mathbb {E} }stands for theexpected value. The power spectral density of Brownian motion is found to be[36]SBM(ω)=4Dω2.{\displaystyle S_{BM}(\omega )={\frac {4D}{\omega ^{2}}}.}whereDis thediffusion coefficientofXt. For naturally occurring signals, the spectral content can be found from the power spectral density of a single realization, with finite available time, i.e.,S(1)(ω,T)=1T|∫0TeiωtXtdt|2,{\displaystyle S^{(1)}(\omega ,T)={\frac {1}{T}}\left|\int _{0}^{T}e^{i\omega t}X_{t}dt\right|^{2},}which for an individual realization of a Brownian motion trajectory,[37]it is found to have expected valueμBM(ω,T){\displaystyle \mu _{BM}(\omega ,T)}μBM(ω,T)=4Dω2[1−sin(ωT)ωT]{\displaystyle \mu _{\text{BM}}(\omega ,T)={\frac {4D}{\omega ^{2}}}\left[1-{\frac {\sin \left(\omega T\right)}{\omega T}}\right]}andvarianceσBM2(ω,T){\displaystyle \sigma _{\text{BM}}^{2}(\omega ,T)}[37]σS2(f,T)=E{(ST(j)(f))2}−μS2(f,T)=20D2f4[1−(6−cos(fT))2sin(fT)5fT+(17−cos(2fT)−16cos(fT))10f2T2].{\displaystyle \sigma _{S}^{2}(f,T)=\mathbb {E} \left\{\left(S_{T}^{(j)}(f)\right)^{2}\right\}-\mu _{S}^{2}(f,T)={\frac {20D^{2}}{f^{4}}}\left[1-{\Big (}6-\cos \left(fT\right){\Big )}{\frac {2\sin \left(fT\right)}{5fT}}+{\frac {{\Big (}17-\cos \left(2fT\right)-16\cos \left(fT\right){\Big )}}{10f^{2}T^{2}}}\right].}
For sufficiently long realization times, the expected value of the power spectrum of a single trajectory converges to the formally defined power spectral densityS(ω){\displaystyle S(\omega )},but its coefficient of variationγ=σ/μ{\displaystyle \gamma =\sigma /\mu }tends to5/2{\displaystyle {\sqrt {5}}/2}.This implies the distribution ofS(1)(ω,T){\displaystyle S^{(1)}(\omega ,T)}is broad even in the infinite time limit.
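A Monte Carlo sketch of these formulas, under the assumption that a left-endpoint Riemann sum adequately approximates the windowed Fourier integral: averaging single-realization spectra over many simulated trajectories should approach 4D/ω² at frequencies where sin(ωT) = 0.

```python
import cmath
import math
import random

def single_spectrum(omega, T, n, D, rng):
    # S1 = |integral_0^T e^{i omega t} X_t dt|^2 / T, left-endpoint Riemann sum
    dt = T / n
    x, integral = 0.0, 0j
    for k in range(n):
        integral += cmath.exp(1j * omega * k * dt) * x * dt
        x += rng.gauss(0.0, math.sqrt(2 * D * dt))  # so that Var(X_t) = 2 D t
    return abs(integral) ** 2 / T

D, T, n = 0.5, 1.0, 400
omega = 2 * math.pi * 3 / T  # sin(omega*T) = 0, so the mean is just 4D/omega^2
rng = random.Random(3)
trials = 2000
est = sum(single_spectrum(omega, T, n, D, rng) for _ in range(trials)) / trials
print(est, 4 * D / omega ** 2)
```

The averaging over 2000 trajectories is essential: a single realization has coefficient of variation of order one, as the text notes, so its periodogram scatters widely around the mean even at long times.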
Brownian motion is usually considered to take place inEuclidean space. It is natural to consider how such motion generalizes to more complex shapes, such assurfacesor higher dimensionalmanifolds. The formalization requires the space to possess some form of aderivative, as well as ametric, so that aLaplaciancan be defined. Both of these are available onRiemannian manifolds.
Riemannian manifolds have the property thatgeodesicscan be described inpolar coordinates; that is, displacements are always in a radial direction, at some given angle. Uniform random motion is then described by Gaussians along the radial direction, independent of the angle, the same as in Euclidean space.
Theinfinitesimal generator(and hencecharacteristic operator) of Brownian motion on EuclideanRnis1/2Δ, whereΔdenotes theLaplace operator. Brownian motion on anm-dimensionalRiemannian manifold(M,g)can be defined as diffusion onMwith the characteristic operator given by1/2ΔLB, half theLaplace–Beltrami operatorΔLB.
One of the topics of study is a characterization of thePoincaré recurrence timefor such systems.[13]
Thenarrow escape problemis a ubiquitous problem in biology, biophysics and cellular biology which has the following formulation: a Brownian particle (ion,molecule, orprotein) is confined to a bounded domain (a compartment or a cell) by a reflecting boundary, except for a small window through which it can escape. The narrow escape problem is that of calculating the mean escape time. This time diverges as the window shrinks, thus rendering the calculation asingular perturbationproblem.
|
https://en.wikipedia.org/wiki/Brownian_motion
|
Inprobability theory, thelaw of the iterated logarithmdescribes the magnitude of the fluctuations of arandom walk. The original statement of the law of the iterated logarithm is due toA. Ya. Khinchin(1924).[1]Another statement was given byA. N. Kolmogorovin 1929.[2]
Let {Yn} be independent, identically distributedrandom variableswith zero means and unit variances. LetSn=Y1+ ... +Yn. Thenlim supn→∞Sn2nloglogn=1a.s.,{\displaystyle \limsup _{n\to \infty }{\frac {S_{n}}{\sqrt {2n\log \log n}}}=1\quad {\text{a.s.}},}
where "log" is thenatural logarithm, "lim sup" denotes thelimit superior, and "a.s." stands for "almost surely".[3][4]
Another statement given byA. N. Kolmogorovin 1929[2]is as follows.
Let{Yn}{\displaystyle \{Y_{n}\}}be independentrandom variableswith zero means and finite variances. LetSn=Y1+⋯+Yn{\displaystyle S_{n}=Y_{1}+\dots +Y_{n}}andBn=Var(Y1)+⋯+Var(Yn){\displaystyle B_{n}=\operatorname {Var} (Y_{1})+\dots +\operatorname {Var} (Y_{n})}. IfBn→∞{\displaystyle B_{n}\to \infty }and there exists a sequence of positive constants{Mn}{\displaystyle \{M_{n}\}}such that|Yn|≤Mn{\displaystyle |Y_{n}|\leq M_{n}}a.s. andMn=o(Bn/loglogBn),{\displaystyle M_{n}=o\left({\sqrt {\frac {B_{n}}{\log \log B_{n}}}}\right),}
then we havelim supn→∞Sn2Bnloglog⁡Bn=1a.s.{\displaystyle \limsup _{n\to \infty }{\frac {S_{n}}{\sqrt {2B_{n}\log \log B_{n}}}}=1\quad {\text{a.s.}}}
Note that the first statement covers the case of the standard normal distribution, but the second does not.
The law of the iterated logarithm operates "in between" thelaw of large numbersand thecentral limit theorem. There are two versions of the law of large numbers, the weak and the strong, and they both state that the sumsSn, scaled byn−1, converge to zero, respectivelyin probabilityandalmost surely:Sn/n→0.{\displaystyle S_{n}/n\,\to \,0.}
On the other hand, the central limit theorem states that the sumsSnscaled by the factorn−1/2converge in distribution to a standard normal distribution. ByKolmogorov's zero–one law, for any fixedM, the probability that the eventlim supnSnn≥M{\displaystyle \limsup _{n}{\frac {S_{n}}{\sqrt {n}}}\geq M}occurs is 0 or 1.
ThenP(lim supn→∞Sn/n≥M)≥lim supn→∞P(Sn/n≥M)=P(N(0,1)≥M)>0,{\displaystyle \operatorname {P} \left(\limsup _{n\to \infty }S_{n}/{\sqrt {n}}\geq M\right)\geq \limsup _{n\to \infty }\operatorname {P} \left(S_{n}/{\sqrt {n}}\geq M\right)=\operatorname {P} \left({\mathcal {N}}(0,1)\geq M\right)>0,}
solim supn→∞Sn/n=+∞a.s.{\displaystyle \limsup _{n\to \infty }S_{n}/{\sqrt {n}}=+\infty \quad {\text{a.s.}}}
An identical argument shows thatlim infn→∞Sn/n=−∞a.s.{\displaystyle \liminf _{n\to \infty }S_{n}/{\sqrt {n}}=-\infty \quad {\text{a.s.}}}
This implies that these quantities cannot converge almost surely. In fact, they cannot even converge in probability, which follows from the equality
and the fact that the random variables
are independent and both converge in distribution toN(0,1).{\displaystyle {\mathcal {N}}(0,1).}
Thelaw of the iterated logarithmprovides the scaling factor where the two limits become different:lim supn→∞Sn2nloglogn=1a.s.,lim infn→∞Sn2nloglogn=−1a.s.{\displaystyle \limsup _{n\to \infty }{\frac {S_{n}}{\sqrt {2n\log \log n}}}=1\quad {\text{a.s.}},\qquad \liminf _{n\to \infty }{\frac {S_{n}}{\sqrt {2n\log \log n}}}=-1\quad {\text{a.s.}}}
Thus, although the absolute value of the quantitySn/2nloglogn{\displaystyle S_{n}/{\sqrt {2n\log \log n}}}is less than any predefinedε> 0 with probability approaching one, it will nevertheless almost surely be greater thanεinfinitely often; in fact, the quantity will be visiting the neighborhoods of any point in the interval (-1,1) almost surely.
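A simulation sketch of the scaling factor at work: along a single ±1 random walk, the running maximum of |S_n|/√(2n log log n) stays of order one, neither collapsing to 0 (as under the n⁻¹ scaling) nor blowing up (as under the n⁻¹ᐟ² scaling). The walk length and seed below are arbitrary.

```python
import math
import random

def lil_running_max(n_max=200_000, n_min=10, seed=5):
    # max over n of |S_n| / sqrt(2 n log log n) for one +/-1 random walk
    rng = random.Random(seed)
    s, best = 0, 0.0
    for n in range(1, n_max + 1):
        s += rng.choice((-1, 1))
        if n >= n_min:
            best = max(best, abs(s) / math.sqrt(2 * n * math.log(math.log(n))))
    return best

ratio = lil_running_max()
print(ratio)  # of order one, as the law of the iterated logarithm suggests
```

Convergence of the limsup to exactly 1 is extremely slow (the bound tightens only on the log log scale), so a finite-length simulation can only show that the ratio stays of order one, not pin down the constant.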
The law of the iterated logarithm (LIL) for a sum of independent and identically distributed (i.i.d.) random variables with zero mean and bounded increment dates back toKhinchinandKolmogorovin the 1920s.
Since then, there has been a tremendous amount of work on the LIL for various kinds of
dependent structures and for stochastic processes. The following is a small sample of notable developments.
Hartman–Wintner(1940) generalized LIL to random walks with increments with zero mean and finite variance. De Acosta (1983) gave a simple proof of the Hartman–Wintner version of the LIL.[5]
Chung(1948) proved another version of the law of the iterated logarithm for the absolute value of a Brownian motion.[6]
Strassen(1964) studied the LIL from the point of view of invariance principles.[7]
Stout (1970) generalized the LIL to stationary ergodic martingales.[8]
Wittmann (1985) generalized Hartman–Wintner version of LIL to random walks satisfying milder conditions.[9]
Vovk (1987) derived a version of LIL valid for a single chaotic sequence (Kolmogorov random sequence).[10]This is notable, as it is outside the realm of classical probability theory.
Yongge Wang(1996) showed that the law of the iterated logarithm also holds for polynomial-time pseudorandom sequences.[11][12]An associated Java-based software testing tool checks whether a pseudorandom generator outputs sequences that satisfy the LIL.
Balsubramani (2014) proved a non-asymptotic LIL that holds over finite-timemartingalesample paths.[13]This subsumes the martingale LIL as it provides matching finite-sample concentration and anti-concentration bounds, and enables sequential testing[14]and other applications.[15]
|
https://en.wikipedia.org/wiki/Law_of_the_iterated_logarithm
|
TheLévy flight foraging hypothesisis ahypothesisin the field ofbiologythat may be stated as follows:
SinceLévy flightsand walks can optimize search efficiencies, natural selection should have led to adaptations for Lévy flight foraging.[1]
The movement of animals closely resembles in many ways therandom walks of dust particles in a fluid.[2]This similarity led to interest in trying to understand how animals move via the analogy to Brownian motion. This conventional wisdom held until the early 1990s. However, starting in the late 1980s, evidence began to accumulate that did not fit the theoretical predictions.[2]
In 1999, a theoretical investigation of the properties ofLévy flightsshowed that an inverse square distribution of flight times or distances could optimize the search efficiency under certain circumstances.[3]Specifically, a search based on an inverse-square Lévy walk, consisting of a constant velocity search following a path whose length is distributed over an inverse square Levy stable distribution, is optimal for searching sparsely and randomly distributed revisitable targets in the absence of memory. These results have been published in 1999 in the journalNature.[3]
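An inverse-square flight-length distribution is straightforward to sample by inverse-transform sampling, since p(l) ∝ l⁻² above a minimum length l_min gives the survival function P(L > x) = l_min/x. The sketch below is illustrative and is not a model from the cited papers.

```python
import random

def levy_step(l_min, rng):
    # inverse-transform sampling: P(L > x) = l_min / x, i.e. p(l) ~ l^(-2)
    return l_min / (1.0 - rng.random())  # 1 - U is in (0, 1], so L >= l_min

rng = random.Random(11)
l_min = 1.0
steps = [levy_step(l_min, rng) for _ in range(100_000)]
frac_long = sum(1 for s in steps if s > 10 * l_min) / len(steps)
print(min(steps), frac_long)  # heavy tail: about 10% of steps exceed 10 * l_min
```

The heavy tail is the point: unlike the exponentially rare long steps of a Brownian search, an inverse-square walker routinely makes relocations an order of magnitude longer than its typical step, which is what lets it leave over-searched regions.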
There has been some controversy about the reality of Lévy flight foraging. Early studies were limited to a small range of movement, and thus the type of motion could not be unequivocally determined; and in 2007 flaws were found in a study of wandering albatrosses which was the first empirical example of such a strategy.[4]There are however many new studies backing the Lévy flight foraging hypothesis.[5][6][7][8]
Recent studies use newer statistical methods[9]and larger data sets showing longer movement paths.[10]Studies published in 2012 and 2013 re-analysed wandering albatross foraging paths and concluded strong support for truncated Lévy flights and Brownian walks consistently with predictions of the Lévy flight foraging hypothesis.[11][12]
From the theoretical point of view, a recent study[13]disputes the validity of the optimality result published in 1999, by concluding that for bi- or tri-dimensional random walks, this result is only valid for very specific conditions: (i) once a target has been foraged, it has to reappear infinitely fast, (ii) the typical scale of the animal displacement has to be very small compared to the typical size of the targets, (iii) after a target is found, the animal has to start a new random walk infinitely close to the border of this target. If any of these conditions is not valid, the optimality result does not hold: inverse-square Levy walks are not optimal, and the gain of any optimal Levy walk over others is necessarily marginal (in the sense that it does not diverge when the density of targets is low).
In contrast, assuming that the search is intermittent[14](i.e., detection is possible only at the short pauses between jumps), a different argument for the optimality of the inverse-square Lévy walk has been given.[15]Mathematical arguments show that in finite two-dimensional domains the intermittent inverse-square Lévy walk is optimal when the goal is to minimize the search time until finding a target of unpredictable size. In contrast, any intermittent Lévy walks other than the inverse-square walk fail to efficiently find either small or large targets. In other words, the inverse-square Lévy walk stands out as the only intermittent Lévy process that is highly efficient with respect to all target scales without the need for any adaptation. This result highlights the relationships between the detection ability of the searcher and the robustness and speed of the search.[15]
Another mathematical argument which shows that inverse-square Lévy walks are not generally optimal has been subsequently provided by studying the search efficiency of a group of individuals that have to find a single target in the infinite two-dimensional gridZ2{\displaystyle \mathbb {Z} ^{2}}.[16]In particular, a setting has been considered withk{\displaystyle k}individuals that start performing a Lévy walk at the origin of the grid (a nest-site), and where there is a target at some fixed (Manhattan) distanceD{\displaystyle D}from the origin;D{\displaystyle D}must be at most some exponential function ink{\displaystyle k}, which is a reasonable assumption since otherwise the target might not be found with non-negligible probability. It can then be proven that the target is found in almost-optimal time with high probability if the exponent of thepower-lawdensity distribution isα⋆∼3−logk/logD{\displaystyle \alpha ^{\star }\sim 3-\log k/\log D}. Any constant deviation fromα⋆{\displaystyle \alpha ^{\star }}results in sub-optimal hitting time. However, such a choice for the power-law exponent requires the knowledge, by the individuals, of both the number of individualsk{\displaystyle k}and the target distanceD{\displaystyle D}, which may be a very strong assumption in living societies. For this reason, a simple almost-optimal search strategy without such requirements has been provided: if each individual samples uniformly at random the power-law exponent from the interval(2,3){\displaystyle (2,3)}and then performs the corresponding Lévy walk, the target is still found in almost-optimal time with high probability. This strategy surprisingly achieves near-optimal search efficiency for all distance scales, and implies that different members of the same group follow different search patterns.
The existence of such variation in the search patterns among individuals of the same species requires empirical validation.[16]These results highlight that Lévy walks are indeed optimal search strategies, but there isn't any power-law exponent playing a universal role; instead, in the latter setting, any exponent between2{\displaystyle 2}and3{\displaystyle 3}might be employed depending on the number of individualsk{\displaystyle k}and the target distanceD{\displaystyle D}.
|
https://en.wikipedia.org/wiki/L%C3%A9vy_flight_foraging_hypothesis
|
Inmathematics,loop-erased random walkis a model for arandomsimple pathwith important applications incombinatorics,physicsandquantum field theory. It is intimately connected to theuniform spanning tree, a model for a randomtree. See alsorandom walkfor more general treatment of this topic.
AssumeGis somegraphandγ{\displaystyle \gamma }is somepathof lengthnonG. In other words,γ(1),…,γ(n){\displaystyle \gamma (1),\dots ,\gamma (n)}are vertices ofGsuch thatγ(i){\displaystyle \gamma (i)}andγ(i+1){\displaystyle \gamma (i+1)}are connected by an edge. Then theloop erasureofγ{\displaystyle \gamma }is a new simple path created by erasing all the loops ofγ{\displaystyle \gamma }in chronological order. Formally, we define indicesij{\displaystyle i_{j}}inductivelyusing
where "max" here means up to the length of the pathγ{\displaystyle \gamma }. The induction stops when for someij{\displaystyle i_{j}}we haveγ(ij)=γ(n){\displaystyle \gamma (i_{j})=\gamma (n)}.
In words, to findij+1{\displaystyle i_{j+1}}, we holdγ(ij){\displaystyle \gamma (i_{j})}in one hand, and with the other hand, we trace back from the end:γ(n),γ(n−1),...{\displaystyle \gamma (n),\gamma (n-1),...}, until we either hit someγ(k)=γ(ij){\displaystyle \gamma (k)=\gamma (i_{j})}, in which case we setij+1=k+1{\displaystyle i_{j+1}=k+1}, or we end up atγ(ij){\displaystyle \gamma (i_{j})}, in which case we setij+1=ij+1{\displaystyle i_{j+1}=i_{j}+1}.
Assume the induction stops atJi.e.γ(iJ)=γ(n){\displaystyle \gamma (i_{J})=\gamma (n)}is the lastiJ{\displaystyle i_{J}}. Then the loop erasure ofγ{\displaystyle \gamma }, denoted byLE(γ){\displaystyle \mathrm {LE} (\gamma )}is a simple path of lengthJdefined by
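The chronological loop erasure defined above can be sketched compactly: walk along the path, and whenever a vertex is revisited, discard everything after its first occurrence. This is equivalent to the index construction with the i_j.

```python
def loop_erase(path):
    # chronological loop erasure: each revisit truncates back to the first visit
    out, pos = [], {}  # pos maps each retained vertex to its index in out
    for v in path:
        if v in pos:
            # erase the loop: drop everything after v's earlier occurrence
            for u in out[pos[v] + 1:]:
                del pos[u]
            del out[pos[v] + 1:]
        else:
            pos[v] = len(out)
            out.append(v)
    return out

print(loop_erase([0, 1, 2, 1, 3]))        # -> [0, 1, 3]
print(loop_erase([0, 1, 0, 2, 3, 2, 4]))  # -> [0, 2, 4]
```

The result is always a simple path (no vertex appears twice), and erasing loops in chronological order is what makes the operation well defined: a different erasure order can give a different simple path.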
Now letGbe some graph, letvbe a vertex ofG, and letRbe a random walk onGstarting fromv. LetTbe somestopping timeforR. Then theloop-erased random walkuntil timeTis LE(R([1,T])). In other words, takeRfrom its beginning untilT— that's a (random) path — erase all the loops in chronological order as above — you get a random simple path.
The stopping timeTmay be fixed, i.e. one may performnsteps and then loop-erase. However, it is usually more natural to takeTto be thehitting timein some set. For example, letGbe the graphZ2and letRbe a random walk starting from the point (0,0). LetTbe the time whenRfirst hits the circle of radius 100 (we mean here of course adiscretizedcircle). LE(R) is called the loop-erased random walk starting at (0,0) and stopped at the circle.
For any graphG, aspanning treeofGis asubgraphofGcontaining all vertices and some of the edges, which is atree, i.e.connectedand with nocycles. Aspanning treechosen randomly from among all possible spanning treeswith equal probabilityis called a uniform spanning tree. There are typically exponentially many spanning trees (too many to generate them all and then choose one randomly); instead, uniform spanning trees can be generated more efficiently by an algorithm called Wilson's algorithm which uses loop-erased random walks.
The algorithm proceeds according to the following steps. First, construct a single-vertex treeTby choosing (arbitrarily) one vertex. Then, while the treeTconstructed so far does not yet include all of the vertices of the graph, letvbe an arbitrary vertex that is not inT, perform a loop-erased random walk fromvuntil reaching a vertex inT, and add the resulting path toT. Repeating this process until all vertices are included produces a uniformly distributed tree, regardless of the arbitrary choices of vertices at each step.
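The steps above can be sketched as follows, using the standard "successor map" implementation, in which overwriting the last exit taken from each vertex performs the loop erasure implicitly; the grid size and seed are arbitrary.

```python
import random

def wilson_ust(n, seed=0):
    """Wilson's algorithm: uniform spanning tree of the n x n grid graph,
    built from loop-erased random walks rooted at an arbitrary first vertex."""
    rng = random.Random(seed)

    def nbrs(v):
        x, y = v
        cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [(a, b) for a, b in cand if 0 <= a < n and 0 <= b < n]

    vertices = [(i, j) for i in range(n) for j in range(n)]
    in_tree = {vertices[0]}      # root the tree at an arbitrary vertex
    parent = {}
    for v in vertices:
        if v in in_tree:
            continue
        succ = {}                # last exit from each vertex; overwriting pops loops
        u = v
        while u not in in_tree:
            succ[u] = rng.choice(nbrs(u))
            u = succ[u]
        u = v                    # retrace the loop-erased path and attach it
        while u not in in_tree:
            in_tree.add(u)
            parent[u] = succ[u]
            u = succ[u]
    return parent

tree = wilson_ust(5, seed=42)
print(len(tree))  # 24 parent edges for the 25 vertices of the 5 x 5 grid
```

Recording only the last exit from each vertex is exactly loop erasure: whenever the walk closes a loop, the overwritten successors forget it, so retracing the successor map from v yields the loop-erased path to the tree.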
A connection in the other direction is also true. Ifvandware two vertices inGthen, in any spanning tree, they are connected by a unique path. Taking this path in theuniformspanning tree gives a random simple path. It turns out that the distribution of this path is identical to the distribution of the loop-erased random walk starting atvand stopped atw. This fact can be used to justify the correctness of Wilson's algorithm. Another corollary is that loop-erased random walk is symmetric in its start and end points. More precisely, the distribution of the loop-erased random walk starting atvand stopped atwis identical to the distribution of the reversal of loop-erased random walk starting atwand stopped atv. Loop-erasing a random walk and the reverse walk do not, in general, give the same result, but according to this result the distributions of the two loop-erased walks are identical.
Another representation of loop-erased random walk stems from solutions of thediscreteLaplace equation. LetGagain be a graph and letvandwbe two vertices inG. Construct a random path fromvtowinductively using the following procedure. Assume we have already definedγ(1),...,γ(n){\displaystyle \gamma (1),...,\gamma (n)}. Letfbe a function fromGtoRsatisfying
Where a functionfon a graph is discretely harmonic at a pointxiff(x) equals the average offon the neighbors ofx.
Withfdefined chooseγ(n+1){\displaystyle \gamma (n+1)}usingfat the neighbors ofγ(n){\displaystyle \gamma (n)}as weights. In other words, ifx1,...,xd{\displaystyle x_{1},...,x_{d}}are these neighbors, choosexi{\displaystyle x_{i}}with probability
Continuing this process, recalculatingfat each step, will result in a random simple path fromvtow; the distribution of this path is identical to that of a loop-erased random walk fromvtow.[citation needed]
An alternative view is that the distribution of a loop-erased random walkconditionedto start in some path β is identical to the loop-erasure of a random walk conditioned not to hit β. This property is often referred to as theMarkov propertyof loop-erased random walk (though the relation to the usualMarkov propertyis somewhat vague).
It is important to notice that while the proof of the equivalence is quite easy, models which involve dynamically changing harmonic functions or measures are typically extremely difficult to analyze. Practically nothing is known about thep-Laplacian walkordiffusion-limited aggregation. Another somewhat related model is theharmonic explorer.
Finally there is another link that should be mentioned:Kirchhoff's theoremrelates the number of spanning trees of a graphGto theeigenvaluesof the discreteLaplacian. Seespanning treefor details.
Letdbe the dimension, which we will assume to be at least 2. ExamineZdi.e. all the points(a1,...,ad){\displaystyle (a_{1},...,a_{d})}with integerai{\displaystyle a_{i}}. This is an infinite graph with degree 2dwhen you connect each point to its nearest neighbors. From now on we will consider loop-erased random walk on this graph or its subgraphs.
The easiest case to analyze is dimension 5 and above. In this case it turns out that the intersections are only local. A calculation shows that if one takes a random walk of lengthn, its loop-erasure has length of the same order of magnitude, i.e.n. Scaling accordingly, it turns out that loop-erased random walk converges (in an appropriate sense) toBrownian motionasngoes to infinity. Dimension 4 is more complicated, but the general picture is still true. It turns out that the loop-erasure of a random walk of lengthnhas approximatelyn/log1/3n{\displaystyle n/\log ^{1/3}n}vertices, but again, after scaling (that takes into account the logarithmic factor) the loop-erased walk converges to Brownian motion.
In two dimensions, arguments fromconformal field theoryand simulation results led to a number of exciting conjectures. AssumeDis somesimply connecteddomainin the plane andxis a point inD. Take the graphGto be
that is, a grid of side length ε restricted toD. Letvbe the vertex ofGclosest tox. Examine now a loop-erased random walk starting fromvand stopped when hitting the "boundary" ofG, i.e. the vertices ofGwhich correspond to the boundary ofD. Then the conjectures are
The first attack at these conjectures came from the direction ofdomino tilings. Taking a spanning tree ofGand adding to it itsplanar dualone gets adominotiling of a special derived graph (call itH). Each vertex ofHcorresponds to a vertex, edge or face ofG, and the edges ofHshow which vertex lies on which edge and which edge on which face. It turns out that taking a uniform spanning tree ofGleads to a uniformly distributed random domino tiling ofH. The number of domino tilings of a graph can be calculated using the determinant of special matrices, which allows one to connect it to the discreteGreen function, which is approximately conformally invariant. These arguments allowed one to show that certain measurables of loop-erased random walk are (in the limit) conformally invariant, and that theexpectednumber of vertices in a loop-erased random walk stopped at a circle of radiusris of the order ofr5/4{\displaystyle r^{5/4}}.[1]
In 2002 these conjectures were resolved (positively) usingstochastic Löwner evolution. Very roughly, it is a stochastic conformally invariantordinary differential equationwhich makes it possible to capture the Markov property of loop-erased random walk (and many other probabilistic processes).
The scaling limit exists and is invariant under rotations and dilations.[2]IfL(r){\displaystyle L(r)}denotes the expected number of vertices in the loop-erased random walk until it gets to a distance ofr, then
where ε,candCare some positive numbers[3](the numbers can, in principle, be calculated from the proofs, but the author did not do it). This suggests that the scaling limit should have Hausdorff dimension between1+ε{\displaystyle 1+\varepsilon }and 5/3 almost surely. Numerical experiments show that it should be1.62400±0.00005{\displaystyle 1.62400\pm 0.00005}.[4]
|
https://en.wikipedia.org/wiki/Loop-erased_random_walk
|