In metadata, the term data element is an atomic unit of data that has precise meaning or precise semantics. A data element has:
Data element usage can be discovered by inspection of software applications or application data files through a process of manual or automated Application Discovery and Understanding. Once data elements are discovered, they can be registered in a metadata registry.
In telecommunications, the term data element has the following components:
In the areas of databases and data systems more generally, a data element is a concept forming part of a data model. As an element of data representation, a collection of data elements forms a data structure.[1]
In practice, data elements (fields, columns, attributes, etc.) are sometimes "overloaded", meaning a given data element will have multiple potential meanings. While this is a known bad practice, overloading is nevertheless a very real barrier to understanding what a system is doing.
|
https://en.wikipedia.org/wiki/Data_element
|
In the mathematical field of Fourier analysis, the conjugate Fourier series arises by realizing the Fourier series formally as the boundary values of the real part of a holomorphic function on the unit disc. The imaginary part of that function then defines the conjugate series. Zygmund (1968) studied the delicate questions of convergence of this series, and its relationship with the Hilbert transform.
In detail, consider a trigonometric series of the form
{\displaystyle f(\theta )={\tfrac {1}{2}}a_{0}+\sum _{n=1}^{\infty }(a_{n}\cos n\theta +b_{n}\sin n\theta ),}
in which the coefficients a_n and b_n are real numbers. This series is the real part of the power series
{\displaystyle F(z)={\tfrac {1}{2}}a_{0}+\sum _{n=1}^{\infty }(a_{n}-ib_{n})z^{n}}
along the unit circle with {\displaystyle z=e^{i\theta }}. The imaginary part of F(z) is called the conjugate series of f, and is denoted
{\displaystyle {\tilde {f}}(\theta )=\sum _{n=1}^{\infty }(a_{n}\sin n\theta -b_{n}\cos n\theta ).}
|
https://en.wikipedia.org/wiki/Conjugate_Fourier_series
|
In mathematics, a quotient category is a category obtained from another category by identifying sets of morphisms. Formally, it is a quotient object in the category of (locally small) categories, analogous to a quotient group or quotient space, but in the categorical setting.
Let C be a category. A congruence relation R on C is given by: for each pair of objects X, Y in C, an equivalence relation R_{X,Y} on Hom(X, Y), such that the equivalence relations respect composition of morphisms. That is, if
f_1, f_2 : X → Y
are related in Hom(X, Y) and
g_1, g_2 : Y → Z
are related in Hom(Y, Z), then g_1 f_1 and g_2 f_2 are related in Hom(X, Z).
Given a congruence relation R on C we can define the quotient category C/R as the category whose objects are those of C and whose morphisms are equivalence classes of morphisms in C. That is,
Hom_{C/R}(X, Y) = Hom_C(X, Y) / R_{X,Y}.
Composition of morphisms in C/R is well-defined since R is a congruence relation.
There is a natural quotient functor from C to C/R which sends each morphism to its equivalence class. This functor is bijective on objects and surjective on Hom-sets (i.e. it is a full functor).
Every functor F : C → D determines a congruence on C by saying f ~ g iff F(f) = F(g). The functor F then factors through the quotient functor C → C/~ in a unique manner. This may be regarded as the "first isomorphism theorem" for categories.
If C is an additive category and we require the congruence relation ~ on C to be additive (i.e. if f_1, f_2, g_1 and g_2 are morphisms from X to Y with f_1 ~ f_2 and g_1 ~ g_2, then f_1 + g_1 ~ f_2 + g_2), then the quotient category C/~ will also be additive, and the quotient functor C → C/~ will be an additive functor.
The concept of an additive congruence relation is equivalent to the concept of a two-sided ideal of morphisms: for any two objects X and Y we are given an additive subgroup I(X, Y) of Hom_C(X, Y) such that for all f ∈ I(X, Y), g ∈ Hom_C(Y, Z) and h ∈ Hom_C(W, X), we have gf ∈ I(X, Z) and fh ∈ I(W, Y). Two morphisms in Hom_C(X, Y) are congruent iff their difference is in I(X, Y).
Every unital ring may be viewed as an additive category with a single object, and the quotient of additive categories defined above coincides in this case with the notion of a quotient ring modulo a two-sided ideal.
The localization of a category introduces new morphisms to turn several of the original category's morphisms into isomorphisms. This tends to increase the number of morphisms between objects, rather than decrease it as in the case of quotient categories. But in both constructions it often happens that two objects become isomorphic that weren't isomorphic in the original category.
The Serre quotient of an abelian category by a Serre subcategory is a new abelian category which is similar to a quotient category but also in many cases has the character of a localization of the category.
|
https://en.wikipedia.org/wiki/Quotient_category
|
"Talking past each other" is an English phrase describing the situation where two or more people talk about different subjects, while believing that they are talking about the same thing.[1]
David Horton writes that when characters in fiction talk past each other, the effect is to expose "an unbridgeable gulf between their respective perceptions and intentions. The result is an exchange, but never an interchange, of words in fragmented and cramped utterances whose subtext often reveals more than their surface meaning."[2]
The phrase is used in widely varying contexts. For example, in 1917, Albert Einstein and David Hilbert had dawn-to-dusk discussions of physics; and they continued their debate in writing, although Felix Klein records that they "talked past each other, as happens not infrequently between simultaneously producing mathematicians."[3]
|
https://en.wikipedia.org/wiki/Talking_past_each_other
|
Game Description Language (GDL) is a specialized logic programming language designed by Michael Genesereth. The goal of GDL is to allow the development of AI agents capable of general game playing. It is part of the General Game Playing Project at Stanford University.
GDL is a tool for expressing the intricacies of game rules and dynamics in a form comprehensible to AI systems through a combination of logic-based constructs and declarative principles.
In practice, GDL is used in General Game Playing competitions and research to specify the rules of the games that AI agents are expected to play. Developers and researchers use GDL to build algorithms that can comprehend and play games given only their rule descriptions, which makes it possible to develop adaptable agents capable of competing across many different games.
By combining a logic-based formalism with the description of games, GDL gives a general game player a single, uniform way to understand diverse game environments and strategies.
Quoted in an article in New Scientist, Genesereth pointed out that although Deep Blue can play chess at a grandmaster level, it is incapable of playing checkers at all because it is a specialized game player.[1] Both chess and checkers can be described in GDL. This enables general game players to be built that can play both of these games and any other game that can be described using GDL.
GDL is a variant of Datalog, and the syntax is largely the same. It is usually given in prefix notation. Variables begin with "?".[2]
The following is the list of keywords in GDL, along with brief descriptions of their functions:
A game description in GDL provides complete rules for each of the following elements of a game.
Facts that define the roles in a game. The following example is from a GDL description of the two-player game Tic-tac-toe:
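(A representative snippet; the standard Tic-tac-toe description declares the two players roughly as follows, though the exact example from the article is not reproduced here.)

(role xplayer)
(role oplayer)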
Rules that entail all facts about the initial game state. An example is:
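(A sketch of typical initial facts for Tic-tac-toe: a blank board cell and the player to move.)

(init (cell 1 1 b))
(init (control xplayer))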
Rules that describe each move by the conditions on the current position under which it can be taken by a player. An example is:
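(A sketch of a typical legality rule: a player may mark a cell if it is blank and that player has control.)

(<= (legal ?player (mark ?x ?y))
    (true (cell ?x ?y b))
    (true (control ?player)))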
Rules that describe all facts about the next state relative to the current state and the moves taken by the players. An example is:
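(A sketch of a typical update rule: a cell marked by xplayer holds an x in the next state.)

(<= (next (cell ?x ?y x))
    (does xplayer (mark ?x ?y)))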
Rules that describe the conditions under which the current state is a terminal one. An example is:
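(A sketch of a typical termination rule, assuming an auxiliary predicate line, defined elsewhere in the description, that holds when a player has three marks in a row.)

(<= terminal (line x))
(<= terminal (line o))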
The goal values for each player in a terminal state. An example is:
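(A sketch of a typical goal rule, again assuming the auxiliary line predicate: the winning player receives the maximum goal value of 100.)

(<= (goal xplayer 100)
    (line x))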
With GDL, one can describe finite games with an arbitrary number of players. However, GDL cannot describe games that contain an element of chance (for example, rolling dice) or games where players have incomplete information about the current state of the game (for example, in many card games the opponents' cards are not visible). GDL-II, the Game Description Language for Incomplete Information Games, extends GDL by two keywords that allow for the description of elements of chance and incomplete information:[3]
The following is an example from a GDL-II description of the card game Texas hold 'em:
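(The article's original Texas hold 'em fragment is not reproduced here; the following illustrative GDL-II lines merely show the two added keywords, sees for percepts and random for the chance player, in a card-dealing context.)

(<= (sees ?player (your_card ?card))
    (does random (deal ?player ?card)))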
Michael Thielscher also created a further extension, GDL-III, a general game description language with imperfect information and introspection, that supports the specification of epistemic games — ones characterised by rules that depend on the knowledge of players.[4]
In classical game theory, games can be formalised in extensive and normal forms. For cooperative game theory, games are represented using characteristic functions. Some subclasses of games allow special representations in smaller sizes, also known as succinct games.
Newer formalisms and languages for the representation of some subclasses of games, or representations adjusted to the needs of interdisciplinary research, are summarized in the following table.[5] Some of these alternative representations also encode time-related aspects:
A 2016 paper "describes a multilevel algorithm compiling a general game description in GDL into an optimized reasoner in a low level language".[19]
A 2017 paper uses GDL to model the process of mediating a resolution to a dispute between two parties and presented an algorithm that uses available information efficiently to do so.[20]
|
https://en.wikipedia.org/wiki/Game_Description_Language
|
This is a list of topics around Boolean algebra and propositional logic.
|
https://en.wikipedia.org/wiki/List_of_Boolean_algebra_topics
|
In mathematical analysis, a domain or region is a non-empty, connected, and open set in a topological space. In particular, it is any non-empty connected open subset of the real coordinate space R^n or the complex coordinate space C^n. A connected open subset of coordinate space is frequently used for the domain of a function.[1]
The basic idea of a connected subset of a space dates from the 19th century, but precise definitions vary slightly from generation to generation, author to author, and edition to edition, as concepts developed and terms were translated between German, French, and English works. In English, some authors use the term domain,[2] some use the term region,[3] some use both terms interchangeably,[4] and some define the two terms slightly differently;[5] some avoid ambiguity by sticking with a phrase such as non-empty connected open subset.[6]
One common convention is to define a domain as a connected open set but a region as the union of a domain with none, some, or all of its limit points.[7] A closed region or closed domain is the union of a domain and all of its limit points.
Various degrees of smoothness of the boundary of the domain are required for various properties of functions defined on the domain to hold, such as integral theorems (Green's theorem, Stokes theorem), properties of Sobolev spaces, and to define measures on the boundary and spaces of traces (generalized functions defined on the boundary). Commonly considered types of domains are domains with continuous boundary, Lipschitz boundary, C^1 boundary, and so forth.
A bounded domain is a domain that is bounded, i.e., contained in some ball. Bounded region is defined similarly. An exterior domain or external domain is a domain whose complement is bounded; sometimes smoothness conditions are imposed on its boundary.
In complex analysis, a complex domain (or simply domain) is any connected open subset of the complex plane C. For example, the entire complex plane is a domain, as is the open unit disk, the open upper half-plane, and so forth. Often, a complex domain serves as the domain of definition for a holomorphic function. In the study of several complex variables, the definition of a domain is extended to include any connected open subset of C^n.
In Euclidean spaces, one-, two-, and three-dimensional regions are curves, surfaces, and solids, whose extent are called, respectively, length, area, and volume.
Definition. An open set is connected if it cannot be expressed as the sum of two open sets. An open connected set is called a domain.
German: Eine offene Punktmenge heißt zusammenhängend, wenn man sie nicht als Summe von zwei offenen Punktmengen darstellen kann. Eine offene zusammenhängende Punktmenge heißt ein Gebiet.
According to Hans Hahn,[8] the concept of a domain as an open connected set was introduced by Constantin Carathéodory in his famous book (Carathéodory 1918).
In this definition, Carathéodory considers obviously non-empty disjoint sets.
Hahn also remarks that the word "Gebiet" ("Domain") was occasionally previously used as a synonym of open set.[9] The rough concept is older. In the 19th and early 20th century, the terms domain and region were often used informally (sometimes interchangeably) without explicit definition.[10]
However, the term "domain" was occasionally used to identify closely related but slightly different concepts. For example, in his influential monographs on elliptic partial differential equations, Carlo Miranda uses the term "region" to identify an open connected set,[11][12] and reserves the term "domain" to identify an internally connected,[13] perfect set, each point of which is an accumulation point of interior points,[11] following his former master Mauro Picone:[14] according to this convention, if a set A is a region then its closure A̅ is a domain.[11]
|
https://en.wikipedia.org/wiki/Domain_(mathematical_analysis)
|
Online transaction processing (OLTP) is a type of database system used in transaction-oriented applications, such as many operational systems. "Online" refers to the fact that such systems are expected to respond to user requests and process them in real time (process transactions). The term is contrasted with online analytical processing (OLAP), which instead focuses on data analysis (for example planning and management systems).
The term "transaction" can have two different meanings, both of which might apply: in the realm of computers or database transactions it denotes an atomic change of state, whereas in the realm of business or finance, the term typically denotes an exchange of economic entities (as used by, e.g., the Transaction Processing Performance Council, or in commercial transactions).[1]: 50 OLTP may use transactions of the first type to record transactions of the second type.
OLTP is typically contrasted to online analytical processing (OLAP), which is generally characterized by much more complex queries, in a smaller volume, for the purpose of business intelligence or reporting rather than to process transactions. Whereas OLTP systems process all kinds of queries (read, insert, update and delete), OLAP is generally optimized for read only and might not even support other kinds of queries. OLTP also operates differently from batch processing and grid computing.[1]: 15
In addition, OLTP is often contrasted to online event processing (OLEP), which is based on distributed event logs to offer strong consistency in large-scale heterogeneous systems.[2] Whereas OLTP is associated with short atomic transactions, OLEP allows for more flexible distribution patterns and higher scalability, but with increased latency and without a guaranteed upper bound on the processing time.
OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automated teller machine (ATM) for a bank is an example of a commercial transaction processing application.[3] Online transaction processing applications have high throughput and are insert- or update-intensive in database management. These applications are used concurrently by hundreds of users. The key goals of OLTP applications are availability, speed, concurrency and recoverability (durability).[4] Reduced paper trails and faster, more accurate forecasts for revenues and expenses are both examples of how OLTP makes things simpler for businesses. However, like many modern online information technology solutions, some systems require offline maintenance, which further affects the cost-benefit analysis of an online transaction processing system.
OLTP systems are common data processing systems in today's enterprises. Some examples of OLTP systems include order entry, retail sales, and financial transaction systems.[5] Online transaction processing systems increasingly require support for transactions that span a network and may include more than one company. For this reason, modern online transaction processing software uses client or server processing and brokering software that allows transactions to run on different computer platforms in a network.
In large applications, efficient OLTP may depend on sophisticated transaction management software (such as IBM CICS) and/or database optimization tactics to facilitate the processing of large numbers of concurrent updates to an OLTP-oriented database.
For even more demanding decentralized database systems, OLTP brokering programs can distribute transaction processing among multiple computers on a network. OLTP is often integrated into service-oriented architecture (SOA) and Web services.
Online transaction processing (OLTP) involves gathering input information, processing the data and updating existing data to reflect the collected and processed information. Today, most organizations use a database management system to support OLTP. OLTP is typically carried out in a client-server system.
Online transaction processing is concerned with concurrency and atomicity. Concurrency controls guarantee that two users accessing the same data in the database system cannot change that data simultaneously; one user has to wait until the other has finished processing before changing that piece of data. Atomicity controls guarantee that all the steps in a transaction are completed successfully as a group. That is, if any step in the transaction fails, all other steps must fail (be rolled back) as well.[6]
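As a minimal illustration of atomicity (a sketch using Python's built-in sqlite3 module; the accounts table and the transfer are made up for this example), the two updates below either both commit or both roll back:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on any error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
except sqlite3.Error:
    pass  # the transfer failed as a group; neither update persists

print(conn.execute("SELECT * FROM accounts").fetchall())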
To build an OLTP system, a designer must ensure that a large number of concurrent users does not interfere with the system's performance. To increase the performance of an OLTP system, a designer must avoid excessive use of indexes and clusters.
The following elements are crucial for the performance of OLTP systems:[4]
|
https://en.wikipedia.org/wiki/Online_transaction_processing
|
The cute cat theory of digital activism is a theory concerning Internet activism, Web censorship, and "cute cats" (a term used for any low-value but popular online activity) developed by Ethan Zuckerman in 2008.[1][2] It posits that most people are not interested in activism; instead, they want to use the web for mundane activities, including surfing for pornography and lolcats ("cute cats").[3] The tools that they develop for that (such as Facebook, Flickr, Blogger, Twitter, and similar platforms) are very useful to social movement activists, because they may lack resources to develop dedicated tools themselves.[3] This, in turn, makes the activists more immune to reprisals by governments than if they were using a dedicated activism platform, because shutting down a popular public platform provokes a larger public outcry than shutting down an obscure one.[3]
Zuckerman states that "Web 1.0 was invented to allow physicists to share research papers. Web 2.0 was created to allow people to share pictures of cute cats."[3] Zuckerman says that if a tool has "cute cat" purposes, and is widely used for low-value purposes, it can be and likely is used for online activism, too.[3]
If the government chooses to shut down such generic tools, it will hurt people's ability to "look at cute cats online", spreading dissent and encouraging the activists' cause.[2][3]
According to Zuckerman, internet censorship in the People's Republic of China, which relies on its own, self-censored Web 2.0 sites, is able to circumvent the cute-cat problem because the government is able to provide people with access to cute-cat content on domestic, self-censored sites while blocking access to Western sites, which are less popular in China than in many other places worldwide.[3][4]
"Sufficiently usable read/write platforms will attract porn and activists. If there's no porn, the tool doesn't work. If there are no activists, it doesn't work well," Zuckerman has stated.[3]
|
https://en.wikipedia.org/wiki/Cute_cat_theory_of_digital_activism
|
The following outline is provided as an overview of and topical guide to cryptography:
Cryptography (or cryptology) – practice and study of hiding information. Modern cryptography intersects the disciplines of mathematics, computer science, and engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.
List of cryptographers
|
https://en.wikipedia.org/wiki/Topics_in_cryptography
|
Media intelligence uses data mining and data science to analyze public, social and editorial media content. It refers to marketing systems that synthesize billions of online conversations into relevant information. This allows organizations to measure and manage content performance, understand trends, and drive communications and business strategy.
Media intelligence can include software as a service using big data terminology.[1] This includes questions about messaging efficiency, share of voice, audience geographical distribution, message amplification, influencer strategy, journalist outreach, creative resonance, and competitor performance in all these areas.
Media intelligence differs from business intelligence in that it uses and analyzes data outside company firewalls. Examples of that data are user-generated content on social media sites, blogs, comment fields, and wikis. It may also include other public data sources like press releases, news, blogs, legal filings, reviews and job postings.
Media intelligence may also include competitive intelligence, wherein information gathered from publicly available sources such as social media, press releases, and news announcements is used to better understand the strategies and tactics being deployed by competing businesses.[2]
Media intelligence is enhanced by means of emerging technologies like ambient intelligence, machine learning, semantic tagging, natural language processing, sentiment analysis and machine translation.
Different media intelligence platforms use different technologies for monitoring, curating content, engaging with content, data analysis and measurement of communications and marketing campaign success. These technology providers may obtain content by scraping it directly from websites or by connecting to the APIs provided by social media or other content platforms for third-party developers to build their own applications and services that access data. Technology companies may also get data from a data reseller.
Some social media monitoring and analytics companies use calls to data providers each time an end user develops a query. Others archive and index social media posts to provide end users with on-demand access to historical data and enable methodologies and technologies leveraging network and relational data. Additional monitoring companies use crawlers and spidering technology to find keyword references, known as semantic analysis or natural language processing. Basic implementation involves curating data from social media on a large scale and analyzing the results to make sense out of it.[3]
|
https://en.wikipedia.org/wiki/Media_intelligence
|
A polymorphic engine (sometimes called a mutation engine or mutating engine) is a software component that uses polymorphic code to alter the payload while preserving the same functionality.
Polymorphic engines are used almost exclusively in malware, with the purpose of being harder for antivirus software to detect. They do so either by encrypting or obfuscating the malware payload.
One common deployment is a file binder that weaves malware into normal files, such as office documents. Since this type of malware is usually polymorphic, it is also known as a polymorphic packer.
The engine of the Virut botnet is an example of a polymorphic engine.[1]
|
https://en.wikipedia.org/wiki/Polymorphic_engine
|
In computer science, instruction scheduling is a compiler optimization used to improve instruction-level parallelism, which improves performance on machines with instruction pipelines. Put more simply, it tries to do the following without changing the meaning of the code:
The pipeline stalls can be caused by structural hazards (processor resource limit), data hazards (output of one instruction needed by another instruction) and control hazards (branching).
Instruction scheduling is typically done on a single basic block. In order to determine whether rearranging the block's instructions in a certain way preserves the behavior of that block, we need the concept of a data dependency. There are three types of dependencies, which also happen to be the three data hazards:
Technically, there is a fourth type, Read after Read (RAR or "Input"): Both instructions read the same location. Input dependence does not constrain the execution order of two statements, but it is useful in scalar replacement of array elements.
To make sure we respect the three types of dependencies, we construct a dependency graph, which is a directed graph where each vertex is an instruction and there is an edge from I1 to I2 if I1 must come before I2 due to a dependency. If loop-carried dependencies are left out, the dependency graph is a directed acyclic graph. Then, any topological sort of this graph is a valid instruction schedule. The edges of the graph are usually labelled with the latency of the dependence. This is the number of clock cycles that needs to elapse before the pipeline can proceed with the target instruction without stalling.
The simplest algorithm to find a topological sort is frequently used and is known as list scheduling. Conceptually, it repeatedly selects a source of the dependency graph, appends it to the current instruction schedule and removes it from the graph. This may cause other vertices to become sources, which will then also be considered for scheduling. The algorithm terminates when the graph is empty.
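A minimal sketch of list scheduling in Python (the instruction numbering, dependence edges, and the FIFO ready-list policy are illustrative assumptions; a production scheduler would pick from the ready list using the heuristics described below):

from collections import defaultdict

def list_schedule(num_instrs, deps):
    """Greedy list scheduling: deps is a list of (i, j) edges meaning
    instruction i must be issued before instruction j."""
    succs = defaultdict(list)
    indegree = [0] * num_instrs
    for i, j in deps:
        succs[i].append(j)
        indegree[j] += 1
    ready = [i for i in range(num_instrs) if indegree[i] == 0]  # sources of the DAG
    schedule = []
    while ready:
        instr = ready.pop(0)          # a real scheduler would choose by heuristic here
        schedule.append(instr)
        for s in succs[instr]:
            indegree[s] -= 1
            if indegree[s] == 0:      # all predecessors scheduled: s becomes a source
                ready.append(s)
    return schedule

# e.g. edges 0->1, 0->2, 1->3, 2->3 yield a valid order such as [0, 1, 2, 3]
print(list_schedule(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))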
To arrive at a good schedule, stalls should be prevented. This is determined by the choice of the next instruction to be scheduled. A number of heuristics are in common use:
Instruction scheduling may be done either before or after register allocation or both before and after it. The advantage of doing it before register allocation is that this results in maximum parallelism. The disadvantage of doing it before register allocation is that this can result in the register allocator needing to use a number of registers exceeding those available. This will cause spill/fill code to be introduced, which will reduce the performance of the section of code in question.
If the architecture being scheduled has instruction sequences that have potentially illegal combinations (due to a lack of instruction interlocks), the instructions must be scheduled after register allocation. This second scheduling pass will also improve the placement of the spill/fill code.
If scheduling is only done after register allocation, then there will be false dependencies introduced by the register allocation that will limit the amount of instruction motion possible by the scheduler.
There are several types of instruction scheduling:
The GNU Compiler Collection is one compiler known to perform instruction scheduling, using the -march (both instruction set and scheduling) or -mtune (only scheduling) flags. It uses descriptions of instruction latencies and what instructions can be run in parallel (or equivalently, which "port" each uses) for each microarchitecture to perform the task. This feature is available to almost all architectures that GCC supports.[2]
Until version 12.0.0, the instruction scheduling in LLVM/Clang could only accept a -march (called target-cpu in LLVM parlance) switch for both instruction set and scheduling. Version 12 adds support for -mtune (tune-cpu) for x86 only.[3]
Sources of information on latency and port usage include:
LLVM's llvm-exegesis should be usable on all machines, especially to gather information on non-x86 ones.[6]
|
https://en.wikipedia.org/wiki/Superblock_scheduling
|
Pruning is a data compression technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that are non-critical and redundant to classify instances. Pruning reduces the complexity of the final classifier, and hence improves predictive accuracy by the reduction of overfitting.
One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and poorly generalizing to new samples. A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop, because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances, then use pruning to remove nodes that do not provide additional information.[1]
Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance.
Pruning processes can be divided into two types (pre- and post-pruning).
Pre-pruning procedures prevent a complete induction of the training set by replacing a stop() criterion in the induction algorithm (e.g., maximum tree depth or information gain(Attr) > minGain). Pre-pruning methods are considered to be more efficient because they do not induce an entire set; rather, trees remain small from the start. Pre-pruning methods share a common problem, the horizon effect: the undesired premature termination of the induction by the stop() criterion.
Post-pruning (or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to reduce complexity. Pruning can not only significantly reduce the size of a tree but also improve the classification accuracy of unseen objects. It may be the case that the accuracy of the assignment on the training set deteriorates, but the overall classification accuracy of the tree increases.
The procedures are differentiated on the basis of their approach in the tree (top-down or bottom-up).
These procedures start at the last node in the tree (the lowest point). Following recursively upwards, they determine the relevance of each individual node. If the relevance for the classification is not given, the node is dropped or replaced by a leaf. The advantage is that no relevant sub-trees can be lost with this method.
These methods include Reduced Error Pruning (REP), Minimum Cost Complexity Pruning (MCCP), or Minimum Error Pruning (MEP).
In contrast to the bottom-up method, this method starts at the root of the tree. Following the structure below, a relevance check is carried out which decides whether a node is relevant for the classification of all n items or not. By pruning the tree at an inner node, it can happen that an entire sub-tree (regardless of its relevance) is dropped. One of these representatives is pessimistic error pruning (PEP), which brings quite good results with unseen items.
One of the simplest forms of pruning is reduced error pruning. Starting at the leaves, each node is replaced with its most popular class. If the prediction accuracy is not affected, then the change is kept. While somewhat naive, reduced error pruning has the advantage of simplicity and speed.
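A minimal sketch of reduced error pruning in Python (the Node structure, predict helper, and validation set are assumptions made for illustration; real implementations differ in detail):

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, majority=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right
        self.majority = majority          # most popular class among training rows at this node

    def is_leaf(self):
        return self.left is None and self.right is None

def predict(node, x):
    while not node.is_leaf():
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.majority

def accuracy(root, X, y):
    return sum(predict(root, xi) == yi for xi, yi in zip(X, y)) / len(y)

def reduced_error_prune(root, node, X_val, y_val):
    """Bottom-up: try replacing each internal node with a leaf predicting its
    majority class; keep the change if validation accuracy does not drop."""
    if node.is_leaf():
        return
    reduced_error_prune(root, node.left, X_val, y_val)
    reduced_error_prune(root, node.right, X_val, y_val)
    before = accuracy(root, X_val, y_val)
    left, right = node.left, node.right
    node.left = node.right = None        # temporarily turn the node into a leaf
    if accuracy(root, X_val, y_val) >= before:
        return                           # keep the pruned version
    node.left, node.right = left, right  # otherwise restore the subtree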
Cost complexity pruning generates a series of trees T_0, ..., T_m where T_0 is the initial tree and T_m is the root alone. At step i, the tree is created by removing a subtree from tree i−1 and replacing it with a leaf node whose value is chosen as in the tree-building algorithm. The subtree that is removed is chosen as follows:
1. Define the error rate of tree T over data set S as err(T, S).
2. The subtree that minimizes
{\displaystyle {\frac {\operatorname {err} (\operatorname {prune} (T,t),S)-\operatorname {err} (T,S)}{\left\vert \operatorname {leaves} (T)\right\vert -\left\vert \operatorname {leaves} (\operatorname {prune} (T,t))\right\vert }}}
is chosen for removal.
The function prune(T, t) defines the tree obtained by pruning the subtree t from the tree T. Once the series of trees has been created, the best tree is chosen by generalized accuracy as measured by a training set or cross-validation.
Pruning could be applied in a compression scheme of a learning algorithm to remove the redundant details without compromising the model's performance. In neural networks, pruning removes entire neurons or layers of neurons.
|
https://en.wikipedia.org/wiki/Pruning_(algorithm)
|
Accept often refers to:
Accept can also refer to:
|
https://en.wikipedia.org/wiki/Accept_(disambiguation)
|
Defense in depth is a concept used in information security in which multiple layers of security controls (defense) are placed throughout an information technology (IT) system. Its intent is to provide redundancy in the event a security control fails or a vulnerability is exploited, and it can cover aspects of personnel, procedural, technical and physical security for the duration of the system's life cycle.
The idea behind the defense in depth approach is to defend a system against any particular attack using several independent methods.[1] It is a layering tactic, conceived[2] by the National Security Agency (NSA) as a comprehensive approach to information and electronic security.[3][4]
An insight into defense in depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people the next outer layer of the onion, and network security, host-based security, and application security forming the outermost layers of the onion.[5] Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense in depth strategy.[6]
Defense in depth can be divided into three areas: Physical, Technical, and Administrative.[7]
Physical controls[3] are anything that physically limits or prevents access to IT systems. Examples of physical defensive security are fences, guards, dogs, and CCTV systems.
Technical controls are hardware or software whose purpose is to protect systems and resources. Examples of technical controls would be disk encryption, file integrity software, and authentication. Hardware technical controls differ from physical controls in that they prevent access to the contents of a system, but not the physical systems themselves.
Administrative controls are the organization's policies and procedures. Their purpose is to ensure that there is proper guidance available in regard to security and that regulations are met. They include things such as hiring practices, data handling procedures, and security requirements.
Using more than one of the following layers constitutes an example of defense in depth.
|
https://en.wikipedia.org/wiki/Defense_in_depth_(computing)
|
Dynamic program analysis is the act of analyzing software that involves executing a program – as opposed to static program analysis, which does not execute it.
Analysis can focus on different aspects of the software, including but not limited to: behavior, test coverage, performance and security.
To be effective, the target program must be executed with sufficient test inputs[1] to address the ranges of possible inputs and outputs. Software testing measures, such as code coverage, and tools such as mutation testing, are used to identify where testing is inadequate.
Functional testing includes relatively common programming techniques such as unit testing, integration testing and system testing.[2]
Computing the code coverage of a test identifies code that is not tested, i.e. not covered by a test.
Although this analysis identifies code that is not tested, it does not determine whether tested code is adequately tested. Code can be executed even if the tests do not actually verify correct behavior.
Dynamic testing involves executing a program on a set of test cases.
Fuzzing is a testing technique that involves executing a program on a wide variety of inputs; often these inputs are randomly generated (at least in part). Gray-box fuzzers use code coverage to guide input generation.
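A minimal random-fuzzing sketch in Python (parse_record is a hypothetical target made up for this example; real fuzzers add coverage feedback, corpus management and crash triage):

import random
import string

def parse_record(data: str) -> int:
    # hypothetical target: raises on some malformed inputs
    key, _, value = data.partition("=")
    return len(key) + int(value)

def fuzz(target, trials=1000):
    alphabet = string.ascii_letters + string.digits + "=;"
    failures = []
    for _ in range(trials):
        data = "".join(random.choice(alphabet) for _ in range(random.randint(0, 20)))
        try:
            target(data)
        except Exception as exc:      # record inputs that crash the target
            failures.append((data, exc))
    return failures

print(len(fuzz(parse_record)), "crashing inputs found")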
Dynamic symbolic execution (also known as DSE or concolic execution) involves executing a test program on a concrete input, collecting the path constraints associated with the execution, and using a constraint solver (generally, an SMT solver) to generate new inputs that would cause the program to take a different control-flow path, thus increasing the code coverage of the test suite.[3] DSE can be considered a type of fuzzing ("white-box" fuzzing).
Dynamic data-flow analysis tracks the flow of information from sources to sinks. Forms of dynamic data-flow analysis include dynamic taint analysis and even dynamic symbolic execution.[4][5]
Daikon is an implementation of dynamic invariant detection. Daikon runs a program, observes the values that the program computes, and then reports properties that were true over the observed executions, and thus likely true over all executions.
Dynamic analysis can be used to detect security problems.
For a given subset of a program’s behavior, program slicing consists of reducing the program to the minimum form that still produces the selected behavior. The reduced program is called a “slice” and is a faithful representation of the original program within the domain of the specified behavior subset.
Generally, finding a slice is an unsolvable problem, but by specifying the target behavior subset by the values of a set of variables, it is possible to obtain approximate slices using a data-flow algorithm. These slices are usually used by developers during debugging to locate the source of errors.
Most performance analysis tools use dynamic program analysis techniques.[citation needed]
Most dynamic analysis involves instrumentation or transformation.
Since instrumentation can affect runtime performance, interpretation of test results must account for this to avoid misidentifying a performance problem.
DynInst is a runtime code-patching library that is useful in developing dynamic program analysis probes and applying them to compiled executables. Dyninst does not require source code or recompilation in general; however, non-stripped executables and executables with debugging symbols are easier to instrument.
Iroh.js is a runtime code analysis library for JavaScript. It keeps track of the code execution path, provides runtime listeners to listen for specific executed code patterns, and allows the interception and manipulation of the program's execution behavior.
|
https://en.wikipedia.org/wiki/Dynamic_program_analysis
|
Crash-only software is a computer program that handles failures by simply restarting, without attempting any sophisticated recovery.[1] Correctly written components of crash-only software can microreboot to a known-good state without the help of a user. Since failure handling and normal startup use the same methods, this can increase the chance that bugs in failure-handling code will be noticed,[clarification needed] except when there are leftover artifacts, such as data corruption from a severe failure, that don't occur during normal startup.[citation needed]
Crash-only software also has benefits for end users. All too often, applications do not save their data and settings while running, only at the end of their use. For example, word processors usually save settings when they are closed. A crash-only application is designed to save all changed user settings soon after they are changed, so that the persistent state matches that of the running machine. No matter how an application terminates (be it a clean close or the sudden failure of a laptop battery), the state will persist.
|
https://en.wikipedia.org/wiki/Crash-only_software
|
TrackR was a commercial key finder that assisted in the tracking of lost belongings and devices.[1] TrackR was produced by the company Phone Halo[2] and was inspired by the founders' losing their keys on a beach during a surfing trip.[3]
The founders of Phone Halo began working on TrackR in 2009. In 2010, they founded the company and launched the product.[4] In winter 2018, TrackR rebranded itself to Adero as part of changing its focus to other uses for its tracking technology, taking TrackR beyond the Bluetooth fobs that had been the core of its service.[5] TrackR shut down its services and removed its apps in August 2021.[6]
The device contains a lithium battery that needs to be changed about once a year by the user. It communicates its current location via Bluetooth 4.0 to an Android 4.4+ or iOS 8.0+ mobile device on which the TrackR app is installed and running. This feature is referred to as "Crowd Locate", since each device will report its location to all other TrackR devices in range, including those that are neither owned nor registered by the user. This feature is useful because the app must be installed and running on a nearby Bluetooth-enabled device for any device's location to be relayed.
As of August 2017, over 5 million TrackR devices had been sold.[3]
As of August 2021, the official website stated that the manufacturer has discontinued App support for both Apple and Android devices.
For TrackR Bravo, the producer published the following data as of August 2017:[7]
|
https://en.wikipedia.org/wiki/TrackR
|
In computer programming, explicit parallelism is the representation of concurrent computations using primitives in the form of operators, function calls or special-purpose directives.[1] Most parallel primitives are related to process synchronization, communication and process partitioning.[2] As they seldom contribute to actually carrying out the intended computation of the program, but rather structure it, their computational cost is often considered overhead.
The advantage of explicit parallel programming is increased programmer control over the computation. A skilled parallel programmer may take advantage of explicit parallelism to produce efficient code for a given target computation environment. However, programming with explicit parallelism is often difficult, especially for non-computing specialists, because of the extra work and skill involved in developing it.
In some instances, explicit parallelism may be avoided with the use of an optimizing compiler or runtime that automatically deduces the parallelism inherent to computations, known as implicit parallelism.
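A minimal sketch of explicit parallelism in Python (the data, the four-way partitioning, and the partial_sum worker are illustrative assumptions): the programmer explicitly partitions the work, creates the worker processes, and collects the results.

from multiprocessing import Pool

def partial_sum(chunk):
    # the work each worker carries out on its partition of the data
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]          # explicit partitioning into 4 tasks
    with Pool(processes=4) as pool:                  # explicit creation of worker processes
        print(sum(pool.map(partial_sum, chunks)))    # explicit communication of results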
Some of the programming languages that support explicit parallelism are:
|
https://en.wikipedia.org/wiki/Explicit_parallelism
|
Journalology (also known as publication science) is the scholarly study of all aspects of the academic publishing process.[1][2] The field seeks to improve the quality of scholarly research by implementing evidence-based practices in academic publishing.[3] The term "journalology" was coined by Stephen Lock, the former editor-in-chief of The BMJ. The first Peer Review Congress, held in 1989 in Chicago, Illinois, is considered a pivotal moment in the founding of journalology as a distinct field.[3] The field of journalology has been influential in pushing for study pre-registration in science, particularly in clinical trials. Clinical trial registration is now expected in most countries.[3] Journalology researchers also work to reform the peer review process.
The earliest scientific journals were founded in the seventeenth century. While most early journals used peer review, peer review did not become common practice in medical journals until after World War II.[4] The scholarly publishing process (including peer review) did not arise by scientific means and still suffers from problems with reliability (consistency and dependability),[5] such as a lack of uniform standards, and with validity (well-foundedness, efficacy).[6][7] Attempts to reform academic publishing practice began to gain traction in the late twentieth century.[8] The field of journalology was formally established in 1989.[3]
|
https://en.wikipedia.org/wiki/Journalology
|
Magnetic logic is digital logic made using the non-linear properties of wound ferrite cores.[1] Magnetic logic represents 0 and 1 by magnetising cores clockwise or anticlockwise.[2]
Examples of magnetic logic include core memory. Also, AND, OR, NOT and clocked shift logic gates can be constructed using appropriate windings, and the use of diodes.
A complete computer called the ALWAC 800 was constructed using magnetic logic, but it was not commercially successful.
The Elliott 803 computer used a combination of magnetic cores (for logic function) and germanium transistors (as pulse amplifiers) for its CPU. It was a commercial success.
William F. Steagall of the Sperry-Rand corporation developed the technology in an effort to improve the reliability of computers. In his patent application,[3] filed in 1954, he stated:
"Where, as here, reliability of operation is a factor of prime importance, vacuum tubes, even though acceptable for most present-day electronic applications, are faced with accuracy requirements of an entirely different order of magnitude. For example, if two devices each having 99.5% reliability response are both utilized in a combined relationship in a given device, that device will have an accuracy or reliability factor of .995 × .995 = 99%. If ten such devices are combined, the factor drops to 95.1%. If, however, 500 such units are combined, the reliability factor of the device drops to 8.1%, and for a thousand, to 0.67%. It will thus be seen that even though the reliability of operation of individual vacuum tubes may be very much above 99.95%, where many thousands of units are combined, as in the large computers, the reliability factor of each unit must be extremely high to combine to produce an error free device. In practice of course such an ideal can only be approached. Magnetic amplifiers of the type here described meet the necessary requirements of reliability of performance for the combinations discussed."
Magnetic logic was able to achieve switching speeds of about 1 MHz, but it was overtaken by semiconductor-based electronics, which was able to switch much faster.
Solid-state semiconductors were able to increase their density according to Moore's Law, and thus proved more effective as IC technology developed.
Magnetic logic has the advantage that it is non-volatile: it may be powered down without losing its state.[1]
|
https://en.wikipedia.org/wiki/Magnetic_logic
|
Rigetti Computing, Inc. is a Berkeley, California-based developer of superconducting quantum integrated circuits used for quantum computers. Rigetti also develops a cloud platform called Forest that enables programmers to write quantum algorithms.[2]
Rigetti Computing was founded in 2013 by Chad Rigetti, a physicist with a background in quantum computers from IBM who studied under Michel Devoret.[2][3] The company emerged from startup incubator Y Combinator in 2014 as a so-called "spaceshot" company.[4][5] Later that year, Rigetti also participated in The Alchemist Accelerator, a venture capital programme.[5]
By February 2016, Rigetti created its first quantum processor, a three-qubit chip made using aluminum circuits on a silicon wafer.[6] That same year, Rigetti raised Series A funding of US$24 million in a round led by Andreessen Horowitz. In November, the company secured Series B funding of $40 million in a round led by investment firm Vy Capital, along with additional funding from Andreessen Horowitz and other investors. Y Combinator also participated in both rounds.[5]
By spring of 2017, Rigetti had advanced to testing eight-qubit quantum computers.[3] In June, the company announced the release of Forest 1.0, a quantum computing platform designed to enable developers to create quantum algorithms.[2]
In October 2021, Rigetti announced plans to go public via a SPAC merger, with an estimated valuation of around US$1.5 billion.[7][8] The deal was expected to raise an additional US$458 million, bringing the total funding to US$658 million.[7] The funds were to be used to accelerate the company's growth, including scaling its quantum processors from 80 qubits to 1,000 qubits by 2024, and to 4,000 by 2026.[9] The SPAC deal closed on 2 March 2022, and Rigetti began trading on the NASDAQ under the ticker symbol RGTI.[10]
In December 2022, Subodh Kulkarni became president and CEO of the company.[11]
In July 2023, Rigetti launched a single-chip 84-qubit quantum processor that can scale to even larger systems.[12]
Rigetti Computing is a full-stack quantum computing company, a term that indicates that the company designs and fabricates quantum chips, integrates them with a controlling architecture, and develops software for programmers to use to build algorithms for the chips.[13]
The company hosts a cloud computing platform called Forest, which gives developers access to quantum processors so they can write quantum algorithms for testing purposes. The computing platform is based on a custom instruction language the company developed called Quil, which stands for Quantum Instruction Language. Quil facilitates hybrid quantum/classical computing, and programs can be built and executed using open-source Python tools.[13][14] As of June 2017, the platform allows coders to write quantum algorithms for a simulation of a quantum chip with 36 qubits.[2]
The company operates a rapid prototyping fabrication ("fab") lab called Fab-1, designed to quickly create integrated circuits. Lab engineers design and generate experimental designs for 3D-integrated quantum circuits for qubit-based quantum hardware.[13]
The company was recognized in 2016 by X-Prize founder Peter Diamandis as being one of the three leaders in the quantum computing space, along with IBM and Google.[15] MIT Technology Review named the company one of the 50 smartest companies of 2017.[16]
Rigetti Computing is headquartered in Berkeley, California, where it hosts developmental systems and cooling equipment.[15] The company also operates its Fab-1 manufacturing facility in nearby Fremont.[2]
|
https://en.wikipedia.org/wiki/Rigetti_Computing
|
In combinatorial mathematics, a large set of positive integers
{\displaystyle S=\{s_{1},s_{2},s_{3},\dots \}}
is one such that the infinite sum of the reciprocals
{\displaystyle {\frac {1}{s_{1}}}+{\frac {1}{s_{2}}}+{\frac {1}{s_{3}}}+\cdots }
diverges. A small set is any subset of the positive integers that is not large; that is, one whose sum of reciprocals converges.
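Two standard illustrations of the definition (well-known facts, not drawn from the article text above): the set of perfect squares is small, while the set of prime numbers is large, since
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{6}}<\infty ,\qquad \sum _{p{\text{ prime}}}{\frac {1}{p}}=\infty .}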
Large sets appear in the Müntz–Szász theorem and in the Erdős conjecture on arithmetic progressions.
Paul Erdős conjectured that all large sets contain arbitrarily long arithmetic progressions. He offered a prize of $3000 for a proof, more than for any of his other conjectures, and joked that this prize offer violated the minimum wage law.[1] The question is still open.
It is not known how to identify whether a given set is large or small in general. As a result, there are many sets which are not known to be either large or small.
|
https://en.wikipedia.org/wiki/Large_set_(combinatorics)
|
Mathematical puzzles make up an integral part of recreational mathematics. They have specific rules, but they do not usually involve competition between two or more players. Instead, to solve such a puzzle, the solver must find a solution that satisfies the given conditions. Mathematical puzzles require mathematics to solve them. Logic puzzles are a common type of mathematical puzzle.
Conway's Game of Life and fractals, as two examples, may also be considered mathematical puzzles even though the solver interacts with them only at the beginning by providing a set of initial conditions. After these conditions are set, the rules of the puzzle determine all subsequent changes and moves. Many of the puzzles are well known because they were discussed by Martin Gardner in his "Mathematical Games" column in Scientific American. Mathematical puzzles are sometimes used to motivate students in teaching elementary school math problem-solving techniques.[1] Creative thinking – or "thinking outside the box" – often helps to find the solution.
The fields of knot theory and topology, especially their non-intuitive conclusions, are often seen as a part of recreational mathematics.
|
https://en.wikipedia.org/wiki/Mathematical_puzzle
|
In the Eastern Orthodox Church, the Catholic Church,[1] and in the teachings of the Church Fathers which undergird the theology of those communions, economy or oeconomy (Greek: οἰκονομία, oikonomia) has several meanings.[2] The basic meaning of the word is "handling" or "disposition" or "management" of a thing, or more literally "housekeeping", usually assuming or implying good or prudent handling (as opposed to poor handling) of the matter at hand. In short, economia is a discretionary deviation from the letter of the law in order to adhere to the spirit of the law and charity. This is in contrast to legalism, or akribia (Greek: ακριβεια), which is strict adherence to the letter of the law of the church.
The divine economy, in Eastern Orthodoxy, not only refers to God's actions to bring about the world's salvation and redemption, but to all of God's dealings with, and interactions with, the world, including the Creation.[3][verification needed]
According to Lossky, theology (literally, "words about God" or "teaching about God") was concerned with all that pertains to God alone, in himself, i.e. the teaching on the Trinity, the divine attributes, and so on; but it was not concerned with anything pertaining to the creation or the redemption. Lossky writes: "The distinction between οικονομια [economy] and θεολογια [theology] [...] remains common to most of the Greek Fathers and to all of the Byzantine tradition. θεολογια [...] means, in the fourth century, everything which can be said of God considered in Himself, outside of His creative and redemptive economy. To reach this 'theology' properly so-called, one therefore must go beyond [...] God as Creator of the universe, in order to be able to extricate the notion of the Trinity from the cosmological implications proper to the 'economy.' "[3]
The Ecumenical Patriarchate considers that through "extreme oikonomia [economy]", those who are baptized in the Oriental Orthodox, Roman Catholic, Lutheran, Old Catholic, Moravian, Anglican, Methodist, Reformed, Presbyterian, Church of the Brethren, Assemblies of God, or Baptist traditions can be received into the Eastern Orthodox Church through the sacrament of Chrismation and not through re-baptism.[4]
In the canon law of the Eastern Orthodox Church, the notions of akriveia and economia (economy) also exist. Akriveia, which is harshness, "is the strict application (sometimes even extension) of the penance given to an unrepentant and habitual offender." Economia, which is sweetness, "is a judicious relaxation of the penance when the sinner shows remorse and repentance."[5]
According to the Catechism of the Catholic Church:[6]
The Fathers of the Church distinguish between theology (theologia) and economy (oikonomia). "Theology" refers to the mystery of God's inmost life within the Blessed Trinity and "economy" to all the works by which God reveals himself and communicates his life. Through the oikonomia the theologia is revealed to us; but conversely, the theologia illuminates the whole oikonomia. God's works reveal who he is in himself; the mystery of his inmost being enlightens our understanding of all his works. So it is, analogously, among human persons. A person discloses himself in his actions, and the better we know a person, the better we understand his actions.
|
https://en.wikipedia.org/wiki/Economy_(religion)
|
In programming language theory, semantics is the rigorous mathematical study of the meaning of programming languages.[1] Semantics assigns computational meaning to valid strings in a programming language syntax. It is closely related to, and often crosses over with, the semantics of mathematical proofs.
Semantics describes the processes a computer follows when executing a program in that specific language. This can be done by describing the relationship between the input and output of a program, or by giving an explanation of how the program will be executed on a certain platform, thereby creating a model of computation.
In 1967, Robert W. Floyd published the paper Assigning meanings to programs; his chief aim was "a rigorous standard for proofs about computer programs, including proofs of correctness, equivalence, and termination".[2][3] Floyd further wrote:[2]
A semantic definition of a programming language, in our approach, is founded on a syntactic definition. It must specify which of the phrases in a syntactically correct program represent commands, and what conditions must be imposed on an interpretation in the neighborhood of each command.
In 1969, Tony Hoare published a paper on Hoare logic seeded by Floyd's ideas, now sometimes collectively called axiomatic semantics.[4][5]
In the 1970s, the terms operational semantics and denotational semantics emerged.[5]
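As a concrete illustration of what an operational semantics assigns to syntax, the following sketch gives a big-step evaluation relation for a tiny expression language. The constructor names and the evaluate function are hypothetical and chosen for this example; they are not taken from the cited papers.

```python
# A minimal sketch of big-step operational semantics for a toy language of
# integer literals, variables, addition, and let-bindings. The AST shapes
# and names here are illustrative only; they show how a semantics maps
# syntactically valid phrases to computational meaning.

from dataclasses import dataclass
from typing import Union

@dataclass
class Lit:            # integer literal
    value: int

@dataclass
class Var:            # variable reference
    name: str

@dataclass
class Add:            # e1 + e2
    left: "Expr"
    right: "Expr"

@dataclass
class Let:            # let name = bound in body
    name: str
    bound: "Expr"
    body: "Expr"

Expr = Union[Lit, Var, Add, Let]

def evaluate(expr: Expr, env: dict) -> int:
    """Big-step evaluation relation: (expr, env) => value."""
    if isinstance(expr, Lit):
        return expr.value
    if isinstance(expr, Var):
        return env[expr.name]
    if isinstance(expr, Add):
        return evaluate(expr.left, env) + evaluate(expr.right, env)
    if isinstance(expr, Let):
        extended = {**env, expr.name: evaluate(expr.bound, env)}
        return evaluate(expr.body, extended)
    raise TypeError(f"unknown expression: {expr!r}")

# let x = 2 in x + 3  evaluates to 5
print(evaluate(Let("x", Lit(2), Add(Var("x"), Lit(3))), {}))
```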
The field of formal semantics encompasses all of the following:
It has close links with other areas of computer science such as programming language design, type theory, compilers and interpreters, program verification and model checking.
There are many approaches to formal semantics; these belong to three major classes:
Apart from the choice between denotational, operational, or axiomatic approaches, most variations in formal semantic systems arise from the choice of supporting mathematical formalism.[citation needed]
Some variations of formal semantics include the following:
For a variety of reasons, one might wish to describe the relationships between different formal semantics. For example:
It is also possible to relate multiple semantics through abstractions via the theory of abstract interpretation.[citation needed]
|
https://en.wikipedia.org/wiki/Formal_semantics_of_programming_languages
|
An illegal opcode, also called an unimplemented operation,[1] unintended opcode[2] or undocumented instruction, is an instruction to a CPU that is not mentioned in any official documentation released by the CPU's designer or manufacturer, which nevertheless has an effect. Illegal opcodes were common on older CPUs designed during the 1970s, such as the MOS Technology 6502, Intel 8086, and the Zilog Z80. Unlike modern processors, those older processors have a very limited transistor budget, and thus to save space their designers often omitted circuitry to detect invalid opcodes and generate a trap to an error handler. The operation of many of these opcodes happens as a side effect of the wiring of transistors in the CPU, and usually combines functions of the CPU that were not intended to be combined. On old and modern processors, there are also instructions intentionally included in the processor by the manufacturer, but that are not documented in any official specification.
While most accidental illegal instructions have useless or even highly undesirable effects (such as crashing the computer), some can have useful functions in certain situations. Such instructions were sometimes exploited in computer games of the 1970s and 1980s to speed up certain time-critical sections. Another common use was in the ongoing battle between copy protection implementations and cracking. Here, they were a form of security through obscurity, and their secrecy usually did not last very long.
A danger associated with the use of illegal instructions was that, given the fact that the manufacturer does not guarantee their existence and function, they might disappear or behave differently with any change of the CPU internals or any new revision of the CPU, rendering programs that use them incompatible with the newer revisions. For example, a number of older Apple II games did not work correctly on the newer Apple IIc, because the latter used a newer CPU revision – 65C02 – that did away with illegal opcodes.
Later CPUs, such as the 80186, 80286, 68000 and its descendants, do not have illegal opcodes that are widely known/used. Ideally, the CPU will behave in a well-defined way when it finds an unknown opcode in the instruction stream, such as triggering a certain exception or fault condition. The operating system's exception or fault handler will then usually terminate the application that caused the fault, unless the program had previously established its own exception/fault handler, in which case that handler would receive control. Another, less common way of handling illegal instructions is by defining them to do nothing except taking up time and space (equivalent to the CPU's official NOP instruction); this method is used by the TMS9900 and 65C02 processors, among others. Alternatively, unknown instructions can be emulated in software (e.g. LOADALL), or even "new" pseudo-instructions can be implemented. Some BIOSes, memory managers, and operating systems take advantage of this, for example, to let V86 tasks communicate with the underlying system, i.e. BOP (from "BIOS Operation") utilized by the Windows NTVDM.[3]
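The three handling policies just described can be made concrete with a small sketch. The opcode map, register names, and helper functions below are purely hypothetical and do not describe any real CPU or emulator.

```python
# Illustrative sketch (not any real CPU's opcode map): three policies an
# emulator can apply when it encounters an opcode that is not documented.

class IllegalOpcodeFault(Exception):
    """Raised under the 'trap' policy, mirroring a hardware fault."""

def nop(cpu):            # documented no-operation
    pass

def inc_a(cpu):          # hypothetical documented instruction: increment register A
    cpu["a"] += 1

DOCUMENTED = {0x00: nop, 0x01: inc_a}

def emulate_in_software(cpu, opcode):
    # Placeholder: a real system (e.g. a BIOS or OS handler) would decode
    # the request and service it here.
    cpu.setdefault("emulated", []).append(opcode)

def step(cpu, opcode, policy="trap"):
    """Execute one instruction, handling undocumented opcodes per policy."""
    handler = DOCUMENTED.get(opcode)
    if handler is not None:
        handler(cpu)
    elif policy == "trap":       # modern behaviour: raise a well-defined fault
        raise IllegalOpcodeFault(hex(opcode))
    elif policy == "nop":        # TMS9900/65C02-style: consume time, do nothing
        pass
    elif policy == "emulate":    # hand the instruction to software emulation
        emulate_in_software(cpu, opcode)

cpu_state = {"a": 0}
step(cpu_state, 0x01)                  # documented instruction
step(cpu_state, 0xFF, policy="nop")    # undocumented opcode, silently ignored
print(cpu_state)
```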
In spite of Intel's guarantee against such instructions, research using techniques such as fuzzing uncovered a vast number of undocumented instructions in x86 processors as late as 2018.[4] Some of these instructions are shared across processor manufacturers, indicating that Intel and AMD are both aware of the instruction and its purpose, despite it not appearing in any official specification. Other instructions are specific to manufacturers or specific product lines. The purpose of the majority of x86 undocumented instructions is unknown.
Today, the details of these instructions are mainly of interest for exact emulation of older systems.
|
https://en.wikipedia.org/wiki/Unintended_instructions
|
In digital electronics, a NAND (NOT AND) gate is a logic gate which produces an output which is false only if all its inputs are true; thus its output is the complement of that of an AND gate. A LOW (0) output results only if all the inputs to the gate are HIGH (1); if any input is LOW (0), a HIGH (1) output results. A NAND gate is made using transistors and junction diodes. By De Morgan's laws, a two-input NAND gate's logic may be expressed as A¯∨B¯=A⋅B¯{\displaystyle {\overline {A}}\lor {\overline {B}}={\overline {A\cdot B}}}, making a NAND gate equivalent to inverters followed by an OR gate.
The NAND gate is significant because any Boolean function can be implemented by using a combination of NAND gates. This property is called "functional completeness". It shares this property with the NOR gate. Digital systems employing certain logic circuits take advantage of NAND's functional completeness.
NAND gates with two or more inputs are available as integrated circuits in transistor–transistor logic, CMOS, and other logic families.
There are three symbols for NAND gates: the MIL/ANSI symbol, the IEC symbol and the deprecated DIN symbol sometimes found on old schematics. The ANSI symbol for the NAND gate is a standard AND gate with an inversion bubble connected.
The function NAND(a1, a2, ..., an) is logically equivalent to NOT(a1 AND a2 AND ... AND an).
One way of expressing A NAND B is A∧B¯{\displaystyle {\overline {A\land B}}}, where the symbol ∧{\displaystyle {\land }} signifies AND and the bar signifies the negation of the expression under it: in essence, simply ¬(A∧B){\displaystyle {\displaystyle \lnot (A\land B)}}.
The basic implementations can be understood from the image on the left below: If either of the switches S1 or S2 is open, the pull-up resistor R will set the output signal Q to 1 (high). If S1 and S2 are both closed, the pull-up resistor will be overridden by the switches, and the output will be 0 (low).
In the depletion-load NMOS logic realization in the middle below, the switches are the transistors T2 and T3, and the transistor T1 fulfills the function of the pull-up resistor.
In the CMOS realization on the right below, the switches are the n-type transistors T3 and T4, and the pull-up resistor is made up of the p-type transistors T1 and T2, which form the complement of transistors T3 and T4.
In CMOS, NAND gates are more efficient than NOR gates. This is due to the faster charge mobility in n-MOSFETs compared to p-MOSFETs, so that the parallel connection of two p-MOSFETs (T1 and T2) realised in the NAND gate is more favourable than their series connection in the NOR gate. For this reason, NAND gates are generally preferred over NOR gates in CMOS circuits.[1]
NAND gates are basic logic gates, and as such they are recognised in TTL and CMOS ICs.
The standard 4000 series CMOS IC is the 4011, which includes four independent, two-input NAND gates. These devices are available from many semiconductor manufacturers. These are usually available in both through-hole DIL and SOIC formats. Datasheets are readily available in most datasheet databases.
The standard two-, three-, four- and eight-input NAND gates are available:
The NAND gate has the property of functional completeness, which it shares with the NOR gate. That is, any other logic function (AND, OR, etc.) can be implemented using only NAND gates.[2] An entire processor can be created using NAND gates alone. In TTL ICs using multiple-emitter transistors, it also requires fewer transistors than a NOR gate.
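Functional completeness can be checked directly by construction. The following sketch builds NOT, AND and OR out of a two-input NAND and verifies the constructions against the expected truth tables; the function names are illustrative only.

```python
# Sketch of NAND's functional completeness: NOT, AND, and OR expressed
# using only a two-input NAND. Truth tables verify the constructions.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:            # NOT a  ==  a NAND a
    return nand(a, a)

def and_(a: int, b: int) -> int:    # a AND b  ==  NOT(a NAND b)
    return nand(nand(a, b), nand(a, b))

def or_(a: int, b: int) -> int:     # a OR b  ==  (NOT a) NAND (NOT b), by De Morgan
    return nand(nand(a, a), nand(b, b))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == int(a and b)
        assert or_(a, b) == int(a or b)
    assert not_(a) == int(not a)
print("NOT, AND and OR all realised from NAND alone")
```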
As NOR gates are also functionally complete, if no specific NAND gates are available, one can be made from NOR gates using NOR logic.[2]
|
https://en.wikipedia.org/wiki/NAND_gate
|
In natural language processing, a sentence embedding is a representation of a sentence as a vector of numbers which encodes meaningful semantic information.[1][2][3][4][5][6][7]
State of the art embeddings are based on the learned hidden layer representation of dedicated sentence transformer models. BERT pioneered an approach involving the use of a dedicated [CLS] token prepended to the beginning of each sentence inputted into the model; the final hidden state vector of this token encodes information about the sentence and can be fine-tuned for use in sentence classification tasks. In practice, however, BERT's sentence embedding with the [CLS] token achieves poor performance, often worse than simply averaging non-contextual word embeddings. SBERT later achieved superior sentence embedding performance[8] by fine-tuning BERT's [CLS] token embeddings through the usage of a siamese neural network architecture on the SNLI dataset.
Other approaches are loosely based on the idea of distributional semantics applied to sentences. Skip-Thought trains an encoder-decoder structure for the task of neighboring sentence prediction; this has been shown to achieve worse performance than approaches such as InferSent or SBERT.
An alternative direction is to aggregate word embeddings, such as those returned by Word2vec, into sentence embeddings. The most straightforward approach is to simply compute the average of word vectors, known as continuous bag-of-words (CBOW).[9] However, more elaborate solutions based on word vector quantization have also been proposed. One such approach is the vector of locally aggregated word embeddings (VLAWE),[10] which demonstrated performance improvements in downstream text classification tasks.
In recent years, sentence embedding has seen a growing level of interest due to its applications in natural language queryable knowledge bases through the usage of vector indexing for semantic search. LangChain, for instance, utilizes sentence transformers for purposes of indexing documents. In particular, an index is generated by computing embeddings for chunks of documents and storing (document chunk, embedding) tuples. Then, given a query in natural language, the embedding for the query can be generated. A top-k similarity search algorithm is then used between the query embedding and the document chunk embeddings to retrieve the most relevant document chunks as context information for question answering tasks. This approach is also known formally as retrieval-augmented generation.[11]
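The indexing and top-k retrieval step can be sketched in a few lines. In the sketch below, embed() is a deliberately crude stand-in (a bag-of-words count vector) for a real sentence-embedding model; only the cosine-similarity ranking logic is the point of the example.

```python
# Minimal sketch of the retrieval step described above: document chunks are
# embedded, stored, and the top-k chunks closest to a query embedding (by
# cosine similarity) are returned as context. The embed() stub stands in
# for a trained sentence-transformer model.

import math
from collections import Counter

def embed(text: str) -> dict:
    # Stand-in embedding: a sparse bag-of-words vector, not a real model.
    return dict(Counter(text.lower().split()))

def cosine(u: dict, v: dict) -> float:
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def top_k(query: str, chunks: list, k: int = 2) -> list:
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "The inode stores pointers to data blocks.",
    "NAND gates are functionally complete.",
    "Sentence embeddings map text to vectors.",
]
print(top_k("how are sentences represented as vectors?", chunks, k=1))
```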
Though not as predominant as BERTScore, sentence embeddings are commonly used for sentence similarity evaluation. Optimizing a large language model's generation parameters, for example, is often performed by comparing candidate sentences against reference sentences. By using the cosine similarity of the sentence embeddings of candidate and reference sentences as the evaluation function, a grid-search algorithm can be utilized to automate hyperparameter optimization.[citation needed]
A way of testing sentence encodings is to apply them to the Sentences Involving Compositional Knowledge (SICK) corpus[12] for both entailment (SICK-E) and relatedness (SICK-R).
In [13] the best results are obtained using a BiLSTM network trained on the Stanford Natural Language Inference (SNLI) Corpus. The Pearson correlation coefficient for SICK-R is 0.885 and the result for SICK-E is 86.3. A slight improvement over previous scores is presented in:[14] SICK-R: 0.888 and SICK-E: 87.8, using a concatenation of bidirectional gated recurrent units.
|
https://en.wikipedia.org/wiki/Sentence_embedding
|
This is a list of notable applications (apps) that run on the Android platform which meet guidelines for free software and open-source software.
There are a number of third-party maintained lists of open-source Android applications, including:
|
https://en.wikipedia.org/wiki/List_of_free_and_open-source_Android_applications
|
Advanced planning and scheduling (APS, also known as advanced manufacturing) refers to a manufacturing management process by which raw materials and production capacity are optimally allocated to meet demand.[1] APS is especially well-suited to environments where simpler planning methods cannot adequately address complex trade-offs between competing priorities. Production scheduling is intrinsically very difficult due to the (approximately) factorial dependence of the size of the solution space on the number of items/products to be manufactured.
Traditional production planning and scheduling systems (such as manufacturing resource planning) use a stepwise procedure to allocate material and production capacity. This approach is simple but cumbersome, and does not readily adapt to changes in demand, resource capacity or material availability. Materials and capacity are planned separately, and many systems do not consider material or capacity constraints, leading to infeasible plans. However, attempts to change to newer systems have not always been successful, which has called for combining management philosophy with manufacturing practice.
Unlike previous systems, APS simultaneously plans and schedules production based on available materials, labor and plant capacity.
APS has commonly been applied where one or more of the following conditions are present:
Advanced planning & scheduling software enables manufacturing scheduling and advanced scheduling optimization within these environments.
|
https://en.wikipedia.org/wiki/Advanced_planning_and_scheduling
|
Many countries around the world maintainmarinesandnaval infantrymilitary units. Even if only a few nations have the capabilities to launch major amphibious assault operations, most marines and naval infantry forces are able to carry out limitedamphibious landings, riverine andcoastal warfaretasks. The list includes also army units specifically trained to operate as marines or naval infantry forces, and navy units with specialized naval security and boarding tasks.
TheMarine Fusiliers Regimentsare the marine infantry regiments of theAlgerian Navyand they are specialised inamphibious warfare.[1]
The RFM have about 7000 soldiers in their ranks.
Within the Algerian navy there are 8 regiments of marine fusiliers:
Future marine fusiliers andmarine commandosare trained in:
Army
Navy
Army
Navy
The IDF's 35th Parachute Brigade "Flying Serpent" is a paratrooper brigade that also exercises sea landing capabilities.
The Italian Army's Cavalry Brigade "Pozzuolo del Friuli" forms, together with the Italian Navy's 3rd Naval Division and San Marco Marine Brigade, the Italian military's National Sea Projection Capability (Forza di proiezione dal mare).
Additionally the 17th Anti-aircraft Artillery Regiment "Sforzesca" provides air-defense assets:
|
https://en.wikipedia.org/wiki/List_of_marines_and_similar_forces
|
Derivative-free optimization (sometimes referred to as blackbox optimization) is a discipline in mathematical optimization that does not use derivative information in the classical sense to find optimal solutions: Sometimes information about the derivative of the objective function f is unavailable, unreliable or impractical to obtain. For example, f might be non-smooth, or time-consuming to evaluate, or in some way noisy, so that methods that rely on derivatives or approximate them via finite differences are of little use. The problem of finding optimal points in such situations is referred to as derivative-free optimization, and algorithms that do not use derivatives or finite differences are called derivative-free algorithms.[1]
The problem to be solved is to numerically optimize an objective function f:A→R{\displaystyle f\colon A\to \mathbb {R} } for some set A{\displaystyle A} (usually A⊂Rn{\displaystyle A\subset \mathbb {R} ^{n}}), i.e. find x0∈A{\displaystyle x_{0}\in A} such that without loss of generality f(x0)≤f(x){\displaystyle f(x_{0})\leq f(x)} for all x∈A{\displaystyle x\in A}.
When applicable, a common approach is to iteratively improve a parameter guess by local hill-climbing in the objective function landscape. Derivative-based algorithms use derivative information of f{\displaystyle f} to find a good search direction, since for example the gradient gives the direction of steepest ascent. Derivative-based optimization is efficient at finding local optima for continuous-domain smooth single-modal problems. However, derivative-based methods can have problems when, for example, A{\displaystyle A} is disconnected, or (mixed-)integer, or when f{\displaystyle f} is expensive to evaluate, or is non-smooth, or noisy, so that (numeric approximations of) derivatives do not provide useful information. A slightly different problem is when f{\displaystyle f} is multi-modal, in which case local derivative-based methods only give local optima, but might miss the global one.
In derivative-free optimization, various methods are employed to address these challenges using only function values off{\displaystyle f}, but no derivatives. Some of these methods can be proved to discover optima, but some are rather metaheuristic since the problems are in general more difficult to solve compared toconvex optimization. For these, the ambition is rather to efficiently find "good" parameter values which can be near-optimal given enough resources, but optimality guarantees can typically not be given. One should keep in mind that the challenges are diverse, so that one can usually not use one algorithm for all kinds of problems.
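One of the simplest derivative-free ideas is a direct search that probes the coordinate directions and shrinks its step when no probe improves the objective. The sketch below is an illustrative compass search, not a production optimizer, and the function and parameter names are chosen only for this example.

```python
# A minimal derivative-free "compass search" sketch: probe each coordinate
# direction with a step, move if the objective improves, otherwise shrink
# the step. Only function values f(x) are used; no derivatives or finite
# differences.

def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=20_000):
    x, fx, evals = list(x0), f(x0), 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for direction in (+1.0, -1.0):
                trial = list(x)
                trial[i] += direction * step
                f_trial = f(trial)
                evals += 1
                if f_trial < fx:          # accept the first improving move
                    x, fx, improved = trial, f_trial, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                    # no improvement: refine the mesh
    return x, fx

# Example: minimise a smooth bowl-shaped test function. The same loop can be
# applied to noisy or non-smooth objectives, though without guarantees.
rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(compass_search(rosenbrock, [-1.2, 1.0]))
```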
Notable derivative-free optimization algorithms include:
There exist benchmarks for blackbox optimization algorithms, see e.g. the bbob-biobj tests.[2]
|
https://en.wikipedia.org/wiki/Derivative-free_optimization
|
Dependency hell is a colloquial term for the frustration of some software users who have installed software packages which have dependencies on specific versions of other software packages.[1]
The dependency issue arises when several packages have dependencies on the same shared packages or libraries, but they depend on different and incompatible versions of the shared packages. If the shared package or library can only be installed in a single version, the user may need to address the problem by obtaining newer or older versions of the dependent packages. This, in turn, may break other dependencies and push the problem to another set of packages.
Dependency hell takes several forms:
On specificcomputing platforms, "dependency hell" often goes by a local specific name, generally the name of components.
|
https://en.wikipedia.org/wiki/Dependency_hell
|
Stratis is a user-space configuration daemon that configures and monitors Linux's existing underlying storage components, logical volume management (LVM) and the XFS filesystem, via D-Bus.
Stratis is not a user-level filesystem like the Filesystem in Userspace (FUSE) system. The Stratis configuration daemon was originally developed by Red Hat to have feature parity with ZFS and Btrfs. The hope was that, because the Stratis configuration daemon runs in userland, it would reach maturity more quickly than the years of kernel-level development behind the ZFS and Btrfs file systems.[2][3] It is built upon the enterprise-tested components LVM and XFS, with over a decade of enterprise deployments, and the lessons learned from System Storage Manager in Red Hat Enterprise Linux 7.[4]
Stratis provides ZFS/Btrfs-style features by integrating layers of existing technology: Linux's device mapper subsystem, and the XFS filesystem. The stratisd daemon manages collections of block devices, and provides a D-Bus API. The stratis-cli DNF package provides a command-line tool stratis, which itself uses the D-Bus API to communicate with stratisd.
|
https://en.wikipedia.org/wiki/Stratis_(configuration_daemon)
|
Simics is a full-system simulator or virtual platform used to run unchanged production binaries of the target hardware. Simics was originally developed by the Swedish Institute of Computer Science (SICS), and then spun off to Virtutech for commercial development in 1998. Virtutech was acquired by Intel in 2010. Currently, Simics is provided by Intel in a public release[1] and sold commercially by Wind River Systems, which was in the past a subsidiary of Intel.
Simics contains both instruction set simulators and hardware models, and is or has been used to simulate systems such as Alpha, ARM (32- and 64-bit), IA-64, MIPS (32- and 64-bit), MSP430, PowerPC (32- and 64-bit), RISC-V (32- and 64-bit), SPARC-V8 and V9, and x86 and x86-64 CPUs.
Many different operating systems have been run on various simulated virtual platforms, including Linux, MS-DOS, Windows, VxWorks, OSE, Solaris, FreeBSD, QNX, RTEMS, UEFI, and Zephyr.
The NetBSD AMD64 port was initially developed using Simics before the public release of the chip.[2] The purpose of simulation in Simics is often to develop software for a particular type of hardware without requiring access to that precise hardware, using Simics as a virtual platform. This can be applied both to pre-release and pre-silicon software development for future hardware, as well as to existing hardware. Intel uses Simics to provide its ecosystem with access to future platforms months or years ahead of the hardware launch.[3]
The current version of Simics is 6, which was released publicly in 2019.[4][5] Simics runs on 64-bit x86-64 machines running Microsoft Windows and Linux (32-bit support was dropped with the release of Simics 5, since 64-bit provides significant performance advantages and is universally available on current hardware). The previous version, Simics 5, was released in 2015.[6]
Simics has the ability to execute a system in the forward and reverse direction.[7] Reverse debugging can illuminate how an exceptional condition or bug occurred. When executing an OS such as Linux in reverse using Simics, previously deleted files reappear when the deletion point is passed in reverse, and scrolling and other graphical display and console updates occur backwards as well.
Simics is built for high-performance execution of full-system models, and uses both binary translation and hardware-assisted virtualization to increase simulation speed. It is natively multithreaded and can simulate multiple target (or guest) processors and boards using multiple host threads. It has been used to run simulations containing hundreds of target processors.
Thisemulation-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Simics
|
FreeOTP is a free and open-source authenticator by Red Hat. It implements multi-factor authentication using HOTP and TOTP. Tokens can be added by scanning a QR code or by manually entering the token configuration. It is licensed under the Apache 2.0 license, and supports Android and iOS.[4][5][6]
This mobile software article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/FreeOTP
|
In statistics, Grubbs's test or the Grubbs test (named after Frank E. Grubbs, who published the test in 1950[1]), also known as the maximum normalized residual test or extreme studentized deviate test, is a test used to detect outliers in a univariate data set assumed to come from a normally distributed population.
Grubbs's test is based on the assumption ofnormality. That is, one should first verify that the data can be reasonably approximated by a normal distribution before applying the Grubbs test.[2]
Grubbs's test detects one outlier at a time. This outlier is expunged from the dataset and the test is iterated until no outliers are detected. However, multiple iterations change the probabilities of detection, and the test should not be used for sample sizes of six or fewer since it frequently tags most of the points as outliers.[3]
Grubbs's test is defined for the followinghypotheses:
The Grubbstest statisticis defined as
with Y¯{\displaystyle {\overline {Y}}} and s{\displaystyle s} denoting the sample mean and standard deviation, respectively. The Grubbs test statistic is the largest absolute deviation from the sample mean in units of the sample standard deviation.
This is the two-sided test, for which the hypothesis of no outliers is rejected at significance level α if
with tα/(2N),N−2 denoting the upper critical value of the t-distribution with N − 2 degrees of freedom and a significance level of α/(2N).
Grubbs's test can also be defined as a one-sided test, replacing α/(2N) with α/N. To test whether the minimum value is an outlier, the test statistic is
with Ymin denoting the minimum value. To test whether the maximum value is an outlier, the test statistic is
with Ymax denoting the maximum value.
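The two-sided form of the test can be sketched numerically as follows. The sketch assumes the usual statement of the test (Grubbs statistic equal to the largest absolute deviation from the mean in units of the sample standard deviation, and a critical value built from the upper α/(2N) point of the t-distribution with N − 2 degrees of freedom); the function name and sample data are illustrative only.

```python
# Sketch of the two-sided Grubbs test as it is usually stated: compute G as
# the largest absolute deviation from the mean in units of the sample
# standard deviation, and compare against the critical value derived from
# the t-distribution with N-2 degrees of freedom at level alpha/(2N).

import math
import statistics
from scipy import stats

def grubbs_two_sided(data, alpha=0.05):
    n = len(data)
    mean = statistics.mean(data)
    s = statistics.stdev(data)                    # sample standard deviation
    g = max(abs(x - mean) for x in data) / s      # Grubbs statistic

    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / math.sqrt(n)) * math.sqrt(t_crit ** 2 / (n - 2 + t_crit ** 2))
    return g, g_crit, g > g_crit                  # True: reject the no-outlier hypothesis

sample = [12.1, 12.3, 11.9, 12.0, 12.2, 12.4, 18.7]   # illustrative data only
print(grubbs_two_sided(sample))
```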
Severalgraphical techniquescan be used to detect outliers. A simplerun sequence plot, abox plot, or ahistogramshould show any obviously outlying points. Anormal probability plotmay also be useful.
This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
|
https://en.wikipedia.org/wiki/Grubbs%27s_test
|
A software design description (a.k.a. software design document or SDD; just design document; also software design specification) is a representation of a software design that is to be used for recording design information, addressing various design concerns, and communicating that information to the design's stakeholders.[1] An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. Practically, the description is required to coordinate a large team under a single vision; it needs to be a stable reference and to outline all parts of the software and how they will work.
The SDD usually contains the following information:
These design media enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work.
IEEE 1016-2009, titledIEEE Standard for Information Technology—Systems Design—Software Design Descriptions,[2]is anIEEEstandard that specifies "the required information content and organization" for an SDD.[3]IEEE 1016 does not specify the medium of an SDD; it is "applicable to automated databases and design description languages but can be used for paper documents and other means of descriptions."[4]
The 2009 edition was a major revision to IEEE 1016-1998, elevating it from recommended practice to full standard. This revision was modeled afterIEEE Std 1471-2000,Recommended Practice for Architectural Description of Software-intensive Systems, extending the concepts ofview, viewpoint, stakeholder, and concernfrom architecture description to support documentation of high-level and detailed design and construction of software. [IEEE 1016,Introduction]
Following the IEEE 1016 conceptual model, an SDD is organized into one or more design views. Each design view follows the conventions of its design viewpoint. IEEE 1016 defines the following design viewpoints for use:[5]
In addition, users of the standard are not limited to these viewpoints but may define their own.[6]
IEEE 1016-2009 is currently listed as 'Inactive - Reserved'.[7]
|
https://en.wikipedia.org/wiki/Design_document
|
TheInternational Conference on Automated Planning and Scheduling(ICAPS) is a leading internationalacademic conferenceinautomated planning and schedulingheld annually for researchers and practitioners in planning and scheduling.[2][3][4]ICAPS is supported by theNational Science Foundation, the journalArtificial Intelligence, and other supporters.[5]
ICAPS conducts the International Planning Competition (IPC), a competition scheduled every few years that empirically evaluates state-of-the-art planning systems on a collection of benchmark problems.[6]ThePlanning Domain Definition Language(PDDL) was developed mainly to make the 1998/2000 International Planning Competition possible, and then evolved with each competition. PDDL is an attempt to standardize Artificial Intelligence (AI) planning languages.[7][8]PDDL was first developed byDrew McDermottand his colleagues in 1998, inspired bySTRIPS,ADL, and other sources.
The ICAPS conferences began in 2003 as a merge of two bi-annual conferences, the International Conference on Artificial Intelligence Planning and Scheduling (AIPS) and the European Conference on Planning (ECP).[1]
Thisartificial intelligence-related article is astub. You can help Wikipedia byexpanding it.
This article about a computer conference is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/International_Conference_on_Automated_Planning_and_Scheduling
|
Least slack time (LST) scheduling is an algorithm for dynamic priority scheduling. It assigns priorities to processes based on their slack time. Slack time is the amount of time that would be left before a job's deadline, after the job completes, if the job were started now. This algorithm is also known as least laxity first. Its most common use is in embedded systems, especially those with multiple processors. It imposes the simple constraint that each process on each available processor possesses the same run time, and that individual processes do not have an affinity to a certain processor. This is what lends it a suitability to embedded systems.
This scheduling algorithm first selects those processes that have the smallest "slack time". Slack time is defined as the temporal difference between the deadline, the ready time and the run time.
More formally, theslack times{\displaystyle s}for a process is defined as:
s=(d−t)−c′{\displaystyle s=(d-t)-c'}
whered{\displaystyle d}is the process deadline,t{\displaystyle t}is the real time since the cycle start, andc′{\displaystyle c'}is the remaining computation time.
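The selection rule implied by this formula is simply "dispatch the ready job with the smallest slack". The sketch below illustrates that rule; the Job fields and values are hypothetical.

```python
# Sketch of least-slack-first selection: at time t, compute the slack
# s = (d - t) - c' for every ready job and dispatch the one with the
# smallest slack. Job definitions are illustrative only.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deadline: float        # d, relative to the cycle start
    remaining: float       # c', remaining computation time

def slack(job: Job, now: float) -> float:
    return (job.deadline - now) - job.remaining

def pick_next(ready, now: float) -> Job:
    return min(ready, key=lambda j: slack(j, now))

jobs = [Job("A", deadline=10.0, remaining=4.0),
        Job("B", deadline=7.0,  remaining=2.0),
        Job("C", deadline=12.0, remaining=9.0)]
print(pick_next(jobs, now=1.0).name)   # C: slack 12 - 1 - 9 = 2, the smallest
```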
In realtime scheduling algorithms for periodic jobs, an acceptance test is needed before accepting a sporadic job with a hard deadline. One of the simplest acceptance tests for a sporadic job is calculating the amount of slack time between the release time and deadline of the job.
LST scheduling is most useful in systems comprising mainly aperiodic tasks, because no prior assumptions are made on the events' rate of occurrence. The main weakness of LST is that it does not look ahead, and works only on the current system state. Thus, during a brief overload of system resources, LST can be suboptimal. It will also be suboptimal when used with uninterruptible processes. However, like theearliest deadline first, and unlikerate monotonic scheduling, this algorithm can be used for processor utilization up to 100%.
|
https://en.wikipedia.org/wiki/Least_slack_time_scheduling
|
In intelligent networks (IN) and cellular networks, the service layer is a conceptual layer within a network service provider architecture. It aims at providing middleware that serves third-party value-added services and applications at a higher application layer. The service layer provides capability servers owned by a telecommunication network service provider, accessed through open and secure Application Programming Interfaces (APIs) by application layer servers owned by third-party content providers. The service layer also provides an interface to core networks at a lower resource layer.[1] The lower layers may also be named control layer and transport layer (the transport layer is also referred to as the access layer in some architectures).[citation needed]
The concept of a service layer is used in contexts such as intelligent networks (IN), WAP, 3G and the IP Multimedia Subsystem (IMS). It is defined in the 3GPP Open Services Architecture (OSA) model, which reused the idea of the Parlay API for third-party servers.
In software design, for exampleService-oriented architecture, the concept of service layer has a different meaning.
The service layer of anIMSarchitecture provides multimedia services to the overall IMS network. This layer contains network elements which connect to the Serving-CSCF (Call Session Control Function) using the IP multimedia Subsystem Service Control Interface (ISC).[2]The ISC interface uses theSIPsignalling protocol.
The network elements contained within the service layer are generically referred to as 'service platforms' however the 3GPP specification (3GPP TS 23.228 V8.7.0) defines several types of service platforms:
The SIP Application Server (AS) performs the same function as aTelephony Application Serverin a pre-IMS network, however it is specifically tailored to support the SIP signalling protocol for use in an IMS network.
An OSA Service Capability Server acts as a secure gateway between the IMS network and an application which runs upon theOpen Services Architecture(this is typically aSIPtoParlaygateway)
The IM-SSF (IP Multimedia Service Switching Function) acts as a gateway between the IMS network and application servers using other telecommunication signalling standards such asINAPandCAMEL.
Inservice-oriented architecture(SOA), the service layer is the third layer in a five-abstraction-layer model. The model consists of Object layer, Component layer, Service layer, Process layer and Enterprise layer.[3]The service layer can be considered as a bridge between the higher and lower layers, and is characterized by a number of services that are carrying out individual business functions.
|
https://en.wikipedia.org/wiki/Service_layer
|
In finance, a growth stock is a stock of a company that generates substantial and sustainable positive cash flow and whose revenues and earnings are expected to increase at a faster rate than the average company within the same industry.[1] A growth company typically has some sort of competitive advantage (a new product, a breakthrough patent, overseas expansion) that allows it to fend off competitors. Growth stocks usually pay smaller dividends, as the companies typically reinvest most retained earnings in capital-intensive projects.
Analysts compute return on equity (ROE) by dividing a company's net income by its average common equity. To be classified as a growth stock, analysts generally expect companies to achieve a 15 percent or higher return on equity.[2] CAN SLIM is a method which identifies growth stocks; it was created by William O'Neil, a stockbroker and publisher of Investor's Business Daily.[3] In academic finance, the Fama–French three-factor model relies on book-to-market ratios (B/M ratios) to identify growth vs. value stocks.[4] Some advisors suggest investing half the portfolio using the value approach and the other half using the growth approach.[5]
The definition of a "growth stock" differs among some well-known investors. For example, Warren Buffett does not differentiate between value and growth investing. In his 1992 letter to shareholders, he stated that many analysts consider growth and value investing to be opposites, a view he characterized as "fuzzy thinking."[6] Furthermore, Buffett cautions investors against overpaying for growth stocks, noting that growth projections are often overly optimistic. Instead, he prioritizes companies with a durable competitive advantage and a high return on capital, rather than focusing solely on revenue or earnings growth.[7]
Peter Lynch classifies stocks into four categories: "Slow Growers," "Stalwarts," "Fast Growers," and "Turnarounds."[8] He is known for focusing on what he calls "Fast Growers", referring to companies that grow at rates of 20% or higher. However, like Buffett, Lynch also believes in not overpaying for stocks, emphasizing that investors should use their "edge" to find companies with high earnings potential that are not yet overvalued.[9] He recommends investing in companies with P/E ratios equal to or lower than their growth rates and suggests holding these investments for three to five years.[8] He is often credited with popularizing the PEG ratio to analyze growth stocks.[10]
This finance-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Growth_stock
|
The inode pointer structure is a structure adopted by the inode of a file in the Version 6 Unix file system, Version 7 Unix file system, and Unix File System (UFS) to list the addresses of a file's data blocks. It is also adopted by many related file systems, including the ext3 file system, popular with Linux users.
In the file system used inVersion 6 Unix, an inode contains eight pointers:[1]
In the file system used inVersion 7 Unix, an inode contains thirteen pointers:[2]
In theUnix file system, an inode contains fifteen pointers:[3]
The levels of indirection indicate the number of pointers that must be followed before reaching actual file data.
The structure is partially illustrated in the diagram accompanying this article. The structure allows for inodes to describe very large files in file systems with a fixed logical block size. Central to the mechanism is that blocks of addresses (also calledindirect blocks) are only allocated as needed. For example, in theUnix file system, a 12-block file would be described using just the inode because its blocks fit into the number of direct pointers available. However, a 13-block file needs an indirect block to contain the thirteenth address.
The inode pointer structure not only allows for files to easily be allocated to non-contiguous blocks, it also allows the data at a particular location inside a file to be easily located. This is possible because the logical block size is fixed. For example, if each block is 8 kB, file data at 112 kB to 120 kB would be pointed to by the third pointer of the first indirect block (assuming twelve direct pointers in the inode pointer structure).
Unlike inodes, which are fixed in number and allocated in a special part of the file system, the indirect blocks may be of any number and are allocated in the same part of the file system as data blocks. The number of pointers in the indirect blocks is dependent on the block size and size of block pointers. Example: with a 512-byte block size, and 4-byte block pointers, each indirect block can consist of 128 (512 / 4) pointers.
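The offset calculation described above can be sketched directly. The following example assumes a UFS-style layout with 12 direct pointers followed by single-, double- and triple-indirect blocks; the function name and return format are illustrative only.

```python
# Sketch: map a byte offset within a file to the pointer that covers it,
# assuming 12 direct pointers and then single-, double-, and triple-indirect
# blocks. Block size and pointer size are parameters; the function reports
# the level of indirection and the index path through the indirect blocks.

def locate(offset: int, block_size: int = 8192, ptr_size: int = 4,
           n_direct: int = 12):
    block = offset // block_size                 # logical block number
    per_indirect = block_size // ptr_size        # pointers per indirect block

    if block < n_direct:
        return ("direct", [block])
    block -= n_direct
    if block < per_indirect:
        return ("single indirect", [block])
    block -= per_indirect
    if block < per_indirect ** 2:
        return ("double indirect", [block // per_indirect, block % per_indirect])
    block -= per_indirect ** 2
    return ("triple indirect", [block // per_indirect ** 2,
                                (block // per_indirect) % per_indirect,
                                block % per_indirect])

# The worked example from the text: with 8 kB blocks, data at 112 kB falls
# on the third entry (index 2) of the first single-indirect block.
print(locate(112 * 1024))      # ('single indirect', [2])
```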
|
https://en.wikipedia.org/wiki/Inode_pointer_structure
|
NNI (Neural Network Intelligence) is a free and open-source AutoML toolkit developed by Microsoft.[3][4] It is used to automate feature engineering, model compression, neural architecture search, and hyper-parameter tuning.[5][6]
The source code is licensed under the MIT License and available on GitHub.[7]
Thisartificial intelligence-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Neural_Network_Intelligence
|
Waikato Environment for Knowledge Analysis (Weka) is a collection of machine learning and data analysis free software licensed under the GNU General Public License. It was developed at the University of Waikato, New Zealand, and is the companion software to the book "Data Mining: Practical Machine Learning Tools and Techniques".[1]
Weka contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to these functions.[1] The original non-Java version of Weka was a Tcl/Tk front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains,[2][3] but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Advantages of Weka include:
Weka supports several standard data mining tasks, more specifically: data preprocessing, clustering, classification, regression, visualization, and feature selection. Input to Weka is expected to be formatted according to the Attribute-Relational File Format, with the filename bearing the .arff extension. All of Weka's techniques are predicated on the assumption that the data is available as one flat file or relation, where each data point is described by a fixed number of attributes (normally numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. Weka provides access to deep learning with Deeplearning4j.[4] It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for processing using Weka.[5] Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling.
In version 3.7.2, a package manager was added to allow the easier installation of extension packages.[6]Some functionality that used to be included with Weka prior to this version has since been moved into such extension packages, but this change also makes it easier for others to contribute extensions to Weka and to maintain the software, as this modular architecture allows independent updates of the Weka core and individual extensions.
|
https://en.wikipedia.org/wiki/Weka_(machine_learning)
|
ISO 11784 and ISO 11785 are international standards that regulate the radio-frequency identification (RFID) of animals, which is usually accomplished by implanting, introducing or attaching a transponder containing a microchip to an animal.
RF identification of animals requires that the bits transmitted by atransponderare interpretable by atransceiver. Usually, thebit streamcontains data bits, defining the identification code and a number of bits to ensure correct reception of the data bits. ISO 11784 specifies the structure of the identification code. ISO 11785 specifies how a transponder is activated and how the stored information is transferred to a transceiver (the characteristics of the transmission protocols between transponder and transceiver)
These standards are updated and expanded inISO 14223which regulates "advanced"transponders for animals, andISO 24631which regulates testing procedures for conformance with ISO 11784 & 11785 as well as performance.
The technical concept of animal identification described is based on the principle ofradio-frequency identification(RFID). ISO 11785 is applicable in connection with ISO 11784 which describes the structure and the information content of the codes stored in the transponder.
The International Organization for Standardization (ISO) draws attention to the fact that compliance with clause 6 and Annex A of this International Standard may involve the use of patents concerning methods of transmission.
Thecarrier frequencyfor animal identification is 134.2 kHz.
There are two ISO approved protocols in use to communicate between tag and reader:
FDX-A, which uses a 125 kHz frequency and a 10-bit code, is not ISO compliant.
In DBP, a 1 is encoded as 00 or 11 and a 0 is encoded as 01 or 10, such that there is at least one transition per bit (so 11 is encoded as 0011 and not as 0000 or 1111).
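That rule can be made concrete with a short sketch: the line level always flips at each bit boundary, a data 1 then holds the level for the whole bit, and a data 0 flips again mid-bit. The starting line level chosen below is an assumption of this sketch.

```python
# Sketch of the differential bi-phase (DBP) rule described above: each data
# bit occupies two half-bit symbols. The level flips at every bit boundary;
# a data '1' keeps that level for the whole bit ("00"/"11"), while a data
# '0' flips again mid-bit ("01"/"10"), so every bit contains at least one
# transition. The initial line level is an assumption of this example.

def dbp_encode(bits: str, initial_level: int = 1) -> str:
    out = []
    level = initial_level
    for b in bits:
        level ^= 1                      # mandatory transition at the bit boundary
        first = level
        second = first if b == "1" else first ^ 1   # mid-bit transition only for '0'
        out.extend((str(first), str(second)))
        level = second
    return "".join(out)

print(dbp_encode("11"))   # '0011' when the line starts high, matching the example
print(dbp_encode("10"))
```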
ISO 11784:1996 Radio-frequency identification of animals - Code structure
The first three digits of the ID are the manufacturer code.
With half-duplex, the tag must store sufficient energy when the receiver's activating field is turned on to allow it to transmit when the activating field is switched off. This makes the receiver simpler, as it is not necessary to pick up the weak signal from the tag among the strong activating field. The disadvantage is that the HDX tag can not transmit when the activating field is turned on.
Telegram layout:
With full duplex, the tag can transmit immediately when in the presence of the receiver's activating field. The advantage is that the FDX tag can then transmit continuously and can therefore be read more quickly and more often.
Telegram layout:
In FDX (at least), after the 11 startbits, a framing bit ('1') is sent after every 8 data bits.
Compliance with the standards may require use of techniques which are covered by (or claimed to be covered by) certain patents.
ISO takes no position concerning the evidence, validity and scope of these patent rights.
Some patent holders have assured ISO that they will not exert their patent rights concerning FDX B technology.[citation needed] Other patent holders have assured ISO that they are willing to negotiate licenses under reasonable and non-discriminatory terms and conditions with applicants throughout the world. In this respect, the statements of the holders of these patent rights are registered with ISO.
Attention is moreover drawn to the possibility that some of the elements of this International Standard may be the subject of patent rights other than those identified above. ISO shall not be held responsible for identifying any or all such patent rights. In that connection, additional correspondence was received from two other companies not willing to forward a pertinent declaration in accordance with the current ISO Directives.
|
https://en.wikipedia.org/wiki/ISO_11784_and_ISO_11785
|
In computer science, bounds-checking elimination is a compiler optimization useful in programming languages or runtime systems that enforce bounds checking, the practice of checking every index into an array to verify that the index is within the defined valid range of indexes.[1] Its goal is to detect which of these indexing operations do not need to be validated at runtime, and to eliminate those checks.
One common example is accessing an array element, modifying it, and storing the modified value in the same array at the same location. Normally, this example would result in a bounds check when the element is read from the array and a second bounds check when the modified element is stored using the same array index. Bounds-checking elimination could eliminate the second check if the compiler or runtime can determine that neither the array size nor the index could change between the two array operations. Another example occurs when a programmer loops over the elements of the array, and the loop condition guarantees that the index is within the bounds of the array. It may be difficult to detect that the programmer's manual check renders the automatic check redundant. However, it may still be possible for the compiler or runtime to perform proper bounds-checking elimination in this case.
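The redundancy argument in the read-modify-write case can be illustrated with a toy wrapper that remembers which indexes have already been proven in bounds. This is only a conceptual sketch of the reasoning; real compilers and JITs establish the same fact statically rather than with a runtime cache.

```python
# Toy illustration of the argument above: a checked array wrapper remembers
# which indexes have already been validated and skips the check on a second
# access when neither the index nor the array length has changed. For
# simplicity the sketch never resizes the array, so proven indexes stay valid.

class CheckedArray:
    def __init__(self, data):
        self._data = list(data)
        self._proven = set()          # indexes already known to be in bounds
        self.checks_performed = 0

    def _check(self, index):
        if index in self._proven:
            return                    # bounds check eliminated
        self.checks_performed += 1
        if not 0 <= index < len(self._data):
            raise IndexError(index)
        self._proven.add(index)

    def read(self, index):
        self._check(index)
        return self._data[index]

    def write(self, index, value):
        self._check(index)
        self._data[index] = value

a = CheckedArray([10, 20, 30])
a.write(1, a.read(1) + 5)             # read-modify-write of the same element
print(a.checks_performed)             # 1: the second check was eliminated
```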
One technique for bounds-checking elimination is to use a typedstatic single assignment formrepresentation and for each array to create a new type representing a safe index for that particular array. The first use of a value as an array index results in a runtime type cast (and appropriate check), but subsequently the safe index value can be used without a type cast, without sacrificing correctness or safety.
Just-in-time compiled languages such as Java and C# often check indexes at runtime before accessing arrays. Some just-in-time compilers such as HotSpot are able to eliminate some of these checks if they discover that the index is always within the correct range, or if an earlier check would have already thrown an exception.[2][3]
|
https://en.wikipedia.org/wiki/Bounds-checking_elimination
|
Compound-term processing, in information retrieval, is search result matching on the basis of compound terms. Compound terms are built by combining two or more simple terms; for example, "triple" is a single-word term, but "triple heart bypass" is a compound term.
Compound-term processing is a new approach to an old problem: how can one improve the relevance of search results while maintaining ease of use? Using this technique, a search for survival rates following a triple heart bypass in elderly people will locate documents about this topic even if this precise phrase is not contained in any document. This can be performed by a concept search, which itself uses compound-term processing. This will extract the key concepts automatically (in this case "survival rates", "triple heart bypass" and "elderly people") and use these concepts to select the most relevant documents.
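One simple statistical way to surface candidate compound terms is to score adjacent word pairs by how much more often they co-occur than chance would predict. The sketch below uses a crude pointwise-mutual-information style score purely as an illustration of the statistical idea; it does not represent any particular vendor's method.

```python
# Crude sketch of statistical compound-term discovery: score adjacent word
# pairs by a PMI-style co-occurrence measure and keep the highest-scoring
# pairs as candidate compound terms.

import math
from collections import Counter

def candidate_compounds(docs, top_n=3):
    words, pairs, total = Counter(), Counter(), 0
    for doc in docs:
        tokens = doc.lower().split()
        words.update(tokens)
        pairs.update(zip(tokens, tokens[1:]))
        total += len(tokens)

    def pmi(pair):
        w1, w2 = pair
        p_pair = pairs[pair] / max(total - 1, 1)
        return math.log(p_pair / ((words[w1] / total) * (words[w2] / total)))

    scored = sorted(pairs, key=lambda p: (pmi(p), pairs[p]), reverse=True)
    return [" ".join(p) for p in scored[:top_n]]

docs = [
    "survival rates following a triple heart bypass",
    "triple heart bypass outcomes in elderly people",
    "heart bypass survival rates in elderly people",
]
print(candidate_compounds(docs))
```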
In August 2003,Concept Searching Limitedintroduced the idea of using statistical compound-term processing.[1]
CLAMOUR is a European collaborative project which aims to find a better way to classify when collecting and disseminating industrial information and statistics. CLAMOUR appears to use a linguistic approach, rather than one based onstatistical modelling.[2]
Techniques for probabilistic weighting of single word terms date back to at least 1976 in the landmark publication byStephen E. RobertsonandKaren Spärck Jones.[3]Robertson stated that the assumption of word independence is not justified and exists as a matter of mathematical convenience. His objection to the term independence is not a new idea, dating back to at least 1964 when H. H. Williams stated that "[t]he assumption of independence of words in a document is usually made as a matter of mathematical convenience".[4]
In 2004, Anna Lynn Patterson filed patents on "phrase-based searching in an information retrieval system"[5]to whichGooglesubsequently acquired the rights.[6]
Statistical compound-term processing is more adaptable than the process described by Patterson. Her process is targeted at searching theWorld Wide Webwhere an extensive statistical knowledge of common searches can be used to identify candidate phrases. Statistical compound term processing is more suited toenterprise searchapplications where sucha prioriknowledge is not available.
Statistical compound-term processing is also more adaptable than the linguistic approach taken by the CLAMOUR project, which must consider the syntactic properties of the terms (i.e. part of speech, gender, number, etc.) and their combinations. CLAMOUR is highly language-dependent, whereas the statistical approach is language-independent.
Compound-term processing allows information-retrieval applications, such assearch engines, to perform their matching on the basis of multi-word concepts, rather than on single words in isolation which can be highly ambiguous.
Early search engines looked for documents containing the words entered by the user into the search box. These are known as keyword search engines. Boolean search engines add a degree of sophistication by allowing the user to specify additional requirements. For example, "Tiger NEAR Woods AND (golf OR golfing) NOT Volkswagen" uses the operators "NEAR", "AND", "OR" and "NOT" to specify that these words must follow certain requirements. A phrase search is simpler to use, but requires that the exact phrase specified appear in the results.
|
https://en.wikipedia.org/wiki/Compound-term_processing
|
Wabun code (和文モールス符号, wabun mōrusu fugō, Morse code for Japanese text) is a form of Morse code used to send Japanese language in kana characters.[1] Unlike International Morse Code, which represents letters of the Latin script, in Wabun each symbol represents a Japanese kana.[2] For this reason, Wabun code is also sometimes called Kana code.[3]
When Wabun code is intermixed with International Morse code, theprosignDO(▄▄▄ ▄ ▄ ▄▄▄ ▄▄▄ ▄▄▄) is used to announce the beginning of Wabun, and the prosignSN(▄ ▄ ▄ ▄▄▄ ▄) is used to announce the return to International Code.
Kana inIrohaorder.
|
https://en.wikipedia.org/wiki/Wabun_code
|
The City Repair Project is a 501(c)(3) non-profit organization based in Portland, Oregon. Its focus is education and activism for community building. The organizational motto is "The City Repair Project is a group of citizen activists creating public gathering places and helping others to creatively transform the places where they live."[2]
City Repair is an organization primarily run by volunteers. A board of directors oversees the project's long-term vision, and a council maintains its daily operations. Both the board of directors and council meet monthly. City Repair's work focuses on localization andplacemaking. The City Repair Project maintains an office in Portland.
The City Repair Project was founded in 1996 by a small group of neighbors interested in sustainability and neighborhood activism.[3] The first City Repair action was an intersection repair at Share-It Square, at SE 9th Ave and SE Sherrett Street. An intersection repair is a place where two streets cross that is painted by the members of that neighborhood; the street is closed down during the painting. Other intersection repairs include Sunnyside Piazza.[4][5]
City Repair hosts two events annually, Portland'sEarth Daycelebration and the Village Building Convergence.[6]
Past projects include the T-Horse, a small pick-up truck converted into a mobile tea house. The T-Horse was driven to neighborhood sites and events around Portland and served freechaiand pie.[citation needed]
The organization has inspired groups around the United States to start their own City Repair Projects. Unaffiliated City Repairs exist in California, Washington, Minnesota, and other places.
TheVillage Building Convergence(VBC) is an annual 10-day event held every May inPortland, Oregon, United States. The event is coordinated by the City Repair Project and consists of a series of workshops incorporatingnatural buildingandpermaculturedesign at multiple sites around the city. Many of the workshops center on "intersection repairs" which aim to transform street intersections into public gathering spaces.
In 1996, neighbors in theSellwood neighborhoodof Portland at the intersection of 8th and Sherrett created a tea stand, children's playhouse and community library on the corner and renamed it "Share-It Square".[7]Community organizers founded the City Repair Project that same year, seeking to share their vision with the community. In January 2000, thePortland City Councilpassed ordinance #172207, an "Intersection Repair" ordinance, allowing neighborhoods to develop public gathering places in certain street intersections.[8]
The first Village Building Convergence took place in May 2002, then called the Natural Building Convergence.
During its history, the VBC has coordinated the creation of over 72 natural building and permaculture sites in Portland, including information kiosks, painted intersections,cobbenches, and astrawbale houseatDignity Village. The sites are primarily located in the southeast quadrant of Portland. Natural builders from around the world have coordinated the activities at many of the construction sites at the Village Building Convergence. Most of the labor taking place at the sites is done by volunteers.
The VBC hosts a series of workshops, many of which are free to the public. Topics of the workshops are usually related tosustainabilityandnatural building. Past workshops have includedaikidolessons, outdoor mushroom cultivation,bioswalecreation, andnonviolent communication.[9]
The VBC also hosts speakers and entertainment during the evenings of its convergences. Presentations for the 2007 convergence were made atDisjectabyStarhawk,Michael Lerner, andPaul Stamets.[10]Prior years' presentations have been given byMalik Rahim,Toby Hemenway, and Judy Bluehorse.
|
https://en.wikipedia.org/wiki/City_repair_project
|
In optics, the optical sine theorem states that the products of the index, height, and sine of the slope angle of a ray in object space and its corresponding ray in image space are equal. That is, with n, y and u denoting the refractive index, object height and ray slope angle, and primed symbols denoting the corresponding image-space quantities: ny sin u = n′y′ sin u′{\displaystyle ny\sin u=n'y'\sin u'}.
Thisoptics-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Optical_sine_theorem
|
In propositional logic and Boolean algebra, there is a duality between conjunction and disjunction,[1][2][3] also called the duality principle.[4][5][6] It is the most widely known example of duality in logic.[1] The duality consists in these metalogical theorems:
The connectives may be defined in terms of each other as follows:
Since theDisjunctive Normal Form Theoremshows that the set of connectives{∧,∨,¬}{\displaystyle \{\land ,\vee ,\neg \}}isfunctionally complete, these results show that the sets of connectives{∧,¬}{\displaystyle \{\land ,\neg \}}and{∨,¬}{\displaystyle \{\vee ,\neg \}}are themselves functionally complete as well.
De Morgan's lawsalso follow from the definitions of these connectives in terms of each other, whichever direction is taken to do it.[1]
Thedualof a sentence is what you get by swapping all occurrences of∨{\textstyle \vee }and∧{\textstyle \land }, while also negating all propositional constants. For example, the dual of(A∧B∨C){\textstyle (A\land B\vee C)}would be(¬A∨¬B∧¬C){\textstyle (\neg A\vee \neg B\land \neg C)}. The dual of a formulaφ{\textstyle \varphi }is notated asφ∗{\textstyle \varphi ^{*}}. TheDuality Principlestates that in classical propositional logic, any sentence is equivalent to the negation of its dual.[4][7]
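The Duality Principle can be checked mechanically on small formulas. The sketch below computes the dual of a formula represented as nested tuples (swapping ∧ with ∨ and negating atoms) and confirms by brute-force truth tables that the formula is equivalent to the negation of its dual; the tuple representation is an assumption of this example.

```python
# Sketch of the duality principle: compute the dual of a formula and verify
# by truth table that the formula is equivalent to the negation of its dual.

from itertools import product

def dual(f):
    if isinstance(f, str):                 # atom: negate it
        return ("not", f)
    if f[0] == "not":
        return ("not", dual(f[1]))
    swapped = {"and": "or", "or": "and"}[f[0]]
    return (swapped, dual(f[1]), dual(f[2]))

def evaluate(f, valuation):
    if isinstance(f, str):
        return valuation[f]
    if f[0] == "not":
        return not evaluate(f[1], valuation)
    if f[0] == "and":
        return evaluate(f[1], valuation) and evaluate(f[2], valuation)
    return evaluate(f[1], valuation) or evaluate(f[2], valuation)

def equivalent(f, g, atoms):
    return all(evaluate(f, dict(zip(atoms, vs))) == evaluate(g, dict(zip(atoms, vs)))
               for vs in product([False, True], repeat=len(atoms)))

# (A and B) or C   is equivalent to   not((not A or not B) and not C)
phi = ("or", ("and", "A", "B"), "C")
print(equivalent(phi, ("not", dual(phi)), ["A", "B", "C"]))   # True
```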
Assumeφ⊨ψ{\displaystyle \varphi \models \psi }. Thenφ¯⊨ψ¯{\displaystyle {\overline {\varphi }}\models {\overline {\psi }}}by uniform substitution of¬Pi{\displaystyle \neg P_{i}}forPi{\displaystyle P_{i}}. Hence,¬ψ⊨¬φ{\displaystyle \neg \psi \models \neg \varphi },by contraposition; so finally,ψD⊨φD{\displaystyle \psi ^{D}\models \varphi ^{D}}, by the property thatφD{\displaystyle \varphi ^{D}}⟚¬φ¯{\displaystyle \neg {\overline {\varphi }}}, which was just proved above.[7]And sinceφDD=φ{\displaystyle \varphi ^{DD}=\varphi }, it is also true thatφ⊨ψ{\displaystyle \varphi \models \psi }if, and only if,ψD⊨φD{\displaystyle \psi ^{D}\models \varphi ^{D}}.[7]And it follows, as a corollary, that ifφ⊨¬ψ{\displaystyle \varphi \models \neg \psi }, thenφD⊨¬ψD{\displaystyle \varphi ^{D}\models \neg \psi ^{D}}.[7]
For a formulaφ{\displaystyle \varphi }indisjunctive normal form, the formulaφ¯D{\displaystyle {\overline {\varphi }}^{D}}will be inconjunctive normal form, and given the result that§ Negation is semantically equivalent to dual, it will be semantically equivalent to¬φ{\displaystyle \neg \varphi }.[8][9]This provides a procedure for converting between conjunctive normal form and disjunctive normal form.[10]Since theDisjunctive Normal Form Theoremshows that every formula of propositional logic is expressible in disjunctive normal form, every formula is also expressible in conjunctive normal form by means of effecting the conversion to its dual.[9]
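A small brute-force check makes the duality principle concrete. The sketch below (illustrative Python, not from the source; the formula used is just an example) compares a formula with the negation of its dual over all valuations:

```python
from itertools import product

# Represent formulas as nested tuples: ("and", f, g), ("or", f, g),
# ("not", f), or a variable name such as "A".

def evaluate(formula, valuation):
    """Evaluate a formula under a truth-value assignment."""
    if isinstance(formula, str):
        return valuation[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], valuation)
    if op == "and":
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    if op == "or":
        return evaluate(formula[1], valuation) or evaluate(formula[2], valuation)
    raise ValueError(op)

def dual(formula):
    """Swap 'and' with 'or' and negate every propositional variable."""
    if isinstance(formula, str):
        return ("not", formula)
    op = formula[0]
    if op == "not":
        return ("not", dual(formula[1]))
    swapped = "or" if op == "and" else "and"
    return (swapped, dual(formula[1]), dual(formula[2]))

# Example: phi = (A and B) or C; its dual is (not A or not B) and not C.
phi = ("or", ("and", "A", "B"), "C")
variables = ["A", "B", "C"]

for values in product([False, True], repeat=len(variables)):
    v = dict(zip(variables, values))
    # Duality principle: a sentence is equivalent to the negation of its dual.
    assert evaluate(phi, v) == (not evaluate(dual(phi), v))
print("phi is equivalent to the negation of its dual on every valuation")
```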
|
https://en.wikipedia.org/wiki/Conjunction/disjunction_duality
|
Manycore processorsare special kinds ofmulti-core processorsdesigned for a high degree ofparallel processing, containing numerous simpler, independentprocessor cores(from a few tens of cores to thousands or more). Manycore processors are used extensively inembedded computersandhigh-performance computing.
Manycore processors are distinct frommulti-core processorsin being optimized from the outset for a higher degree ofexplicit parallelism, and for higher throughput (or lower power consumption) at the expense of latency and lowersingle-thread performance.
The broader category ofmulti-core processors, by contrast, are usually designed to efficiently runbothparallelandserial code, and therefore place more emphasis on high single-thread performance (e.g. devoting more silicon toout-of-order execution, deeperpipelines, moresuperscalarexecution units, and larger, more general caches), andshared memory. These techniques devote runtime resources toward figuring out implicit parallelism in a single thread. They are used in systems where they have evolved continuously (with backward compatibility) from single core processors. They usually have a 'few' cores (e.g. 2, 4, 8) and may be complemented by a manycoreaccelerator(such as aGPU) in aheterogeneous system.
Cache coherencyis an issue limiting the scaling of multicore processors. Manycore processors may bypass this with methods such asmessage passing,[1]scratchpad memory,DMA,[2]partitioned global address space,[3]or read-only/non-coherent caches. A manycore processor using anetwork on a chipand local memories gives software the opportunity to explicitly optimise the spatial layout of tasks (e.g. as seen in tooling developed forTrueNorth).[4]
Manycore processors may have more in common (conceptually) with technologies originating inhigh-performance computingsuch asclustersandvector processors.[5]
GPUs may be considered a form of manycore processor having multipleshader processing units, and only being suitable for highly parallel code (high throughput, but extremely poor single thread performance).
A number of computers built from multicore processors have one million or more individual CPU cores. Examples include:
Quite a few supercomputers have over 5 million CPU cores. Coprocessor cores, such as those of attached GPUs, are usually not included in the reported core count; if they were, quite a few more machines would reach these totals.
|
https://en.wikipedia.org/wiki/Manycore_processor
|
Instatistics, thecorrelation ratiois a measure of the curvilinear relationship between thestatistical dispersionwithin individual categories and the dispersion across the whole population or sample. The measure is defined as theratioof twostandard deviationsrepresenting these types of variation. The context here is the same as that of theintraclass correlation coefficient, whose value is the square of the correlation ratio.
Suppose each observation isyxiwherexindicates the category that observation is in andiis the label of the particular observation. Letnxbe the number of observations in categoryxand
wherey¯x{\displaystyle {\overline {y}}_{x}}is the mean of the categoryxandy¯{\displaystyle {\overline {y}}}is the mean of the whole population. The correlation ratio η (eta) is defined as to satisfy
which can be written as
i.e. the weighted variance of the category means divided by the variance of all samples.
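In standard notation (reconstructed here; x indexes categories and i the observations within a category), the category means, the overall mean, and the correlation ratio are

\[ \overline{y}_x = \frac{1}{n_x}\sum_i y_{xi}, \qquad \overline{y} = \frac{\sum_x n_x\,\overline{y}_x}{\sum_x n_x}, \qquad \eta^2 = \frac{\sum_x n_x\,(\overline{y}_x - \overline{y})^2}{\sum_{x,i} (y_{xi} - \overline{y})^2}. \]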
If the relationship between values ofx{\displaystyle x}and values ofy¯x{\displaystyle {\overline {y}}_{x}}is linear (which is certainly true when there are only two possibilities forx) this will give the same result as the square of Pearson'scorrelation coefficient; otherwise the correlation ratio will be larger in magnitude. It can therefore be used for judging non-linear relationships.
The correlation ratioη{\displaystyle \eta }takes values between 0 and 1. The limitη=0{\displaystyle \eta =0}represents the special case of no dispersion among the means of the different categories, whileη=1{\displaystyle \eta =1}refers to no dispersion within the respective categories.η{\displaystyle \eta }is undefined when all data points of the complete population take the same value.
Suppose there is a distribution of test scores in three topics (categories):
Then the subject averages are 36, 33 and 78, with an overall average of 52.
The sums of squares of the differences from the subject averages are 1952 for Algebra, 308 for Geometry and 600 for Statistics, adding to 2860. The overall sum of squares of the differences from the overall average is 9640. The difference of 6780 between these is also the weighted sum of the squares of the differences between the subject averages and the overall average:
This gives
suggesting that most of the overall dispersion is a result of differences between topics, rather than within topics. Taking the square root gives
Forη=1{\displaystyle \eta =1}the overall sample dispersion is purely due to dispersion among the categories and not at all due to dispersion within the individual categories. For quick comprehension simply imagine all Algebra, Geometry, and Statistics scores being the same respectively, e.g. 5 times 36, 4 times 33, 6 times 78.
The limitη=0{\displaystyle \eta =0}refers to the case without dispersion among the categories contributing to the overall dispersion. The trivial requirement for this extreme is that all category means are the same.
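The calculation is easy to reproduce in code. The sketch below (illustrative Python; the individual student scores are not listed above, so the function works on any grouping of observations) implements the ratio of the between-category sum of squares to the total sum of squares, and the quoted totals 6780 and 9640 give the example's value directly:

```python
def correlation_ratio_squared(groups):
    """groups: dict mapping category -> list of observations.
    Returns eta squared = between-group sum of squares / total sum of squares."""
    all_values = [y for values in groups.values() for y in values]
    grand_mean = sum(all_values) / len(all_values)
    ss_between = sum(len(v) * (sum(v) / len(v) - grand_mean) ** 2
                     for v in groups.values())
    ss_total = sum((y - grand_mean) ** 2 for y in all_values)
    return ss_between / ss_total

# With the sums of squares quoted above, eta squared is 6780 / 9640:
eta_squared = 6780 / 9640
print(round(eta_squared, 4))             # ~0.7033
print(round(eta_squared ** 0.5, 4))      # eta, the square root, ~0.8386
```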
The correlation ratio was introduced byKarl Pearsonas part ofanalysis of variance.Ronald Fishercommented:
"As a descriptive statistic the utility of the correlation ratio is extremely limited. It will be noticed that the number ofdegrees of freedomin the numerator ofη2{\displaystyle \eta ^{2}}depends on the number of the arrays"[1]
to whichEgon Pearson(Karl's son) responded by saying
"Again, a long-established method such as the use of the correlation ratio [§45 The "Correlation Ratio" η] is passed over in a few words without adequate description, which is perhaps hardly fair to the student who is given no opportunity of judging its scope for himself."[2]
|
https://en.wikipedia.org/wiki/Correlation_ratio
|
Aletter bankis a relative of theanagramwhere all the letters of one word (the "bank") can be used as many times as desired (minimum of once each) to make a new word or phrase. For example, IMPS is a bank of MISSISSIPPI and SPROUT is a bank ofSUPPORT OUR TROOPS. As a convention, the bank should have no repeat letters within itself.
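The definition translates directly into a short check (illustrative Python, not part of the source): the phrase must use exactly the bank's letters, each at least once, and the bank itself must contain no repeated letters.

```python
def is_letter_bank(bank, phrase):
    """Return True if `phrase` uses every letter of `bank` at least once and
    no letters outside it, and `bank` itself has no repeated letters."""
    bank_letters = [c for c in bank.lower() if c.isalpha()]
    phrase_letters = {c for c in phrase.lower() if c.isalpha()}
    if len(bank_letters) != len(set(bank_letters)):
        return False          # by convention, the bank has no repeated letters
    return set(bank_letters) == phrase_letters

print(is_letter_bank("IMPS", "MISSISSIPPI"))           # True
print(is_letter_bank("SPROUT", "SUPPORT OUR TROOPS"))  # True
```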
The term was coined byWill Shortz, whose first letter bank (BLUME -> BUMBLEBEE) appeared in his 1979 book, "Brain Games". In 1980, Shortz introduced letter banks to theNational Puzzlers' League(of which he is the historian), in the form of a contest puzzle. In 1981, the letter bank was announced an official puzzle type in the NPL’s magazine "The Enigma".[1]
Letter banks are the basis for the word gameAlpha Blitz.[citation needed]
|
https://en.wikipedia.org/wiki/Letter_bank
|
Inlinear algebra, the order-rKrylov subspacegenerated by ann-by-nmatrixAand a vectorbof dimensionnis thelinear subspacespannedby theimagesofbunder the firstrpowers ofA(starting fromA0=I{\displaystyle A^{0}=I}), that is,[1][2]
The concept is named after Russian applied mathematician and naval engineerAlexei Krylov, who published a paper about the concept in 1931.[3]
Krylov subspaces are used in algorithms for finding approximate solutions to high-dimensionallinear algebra problems.[2]Manylinear dynamical systemtests incontrol theory, especially those related tocontrollabilityandobservability, involve checking the rank of the Krylov subspace. These tests are equivalent to finding the span of theGramiansassociated with the system/output maps so the uncontrollable and unobservable subspaces are simply the orthogonal complement to the Krylov subspace.[4]
Moderniterative methodssuch asArnoldi iterationcan be used for finding one (or a few) eigenvalues of largesparse matricesor solving large systems of linear equations. They try to avoid matrix-matrix operations, but rather multiply vectors by the matrix and work with the resulting vectors. Starting with a vectorb{\displaystyle b}, one computesAb{\displaystyle Ab}, then one multiplies that vector byA{\displaystyle A}to findA2b{\displaystyle A^{2}b}and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra. These methods can be used in situations where there is an algorithm to compute the matrix-vector multiplication without there being an explicit representation ofA{\displaystyle A}, giving rise toMatrix-free methods.
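The way these methods touch A only through matrix–vector products can be sketched in a few lines of NumPy (illustrative only; a practical solver would orthogonalize the vectors, as discussed below):

```python
import numpy as np

def krylov_basis(matvec, b, r):
    """Return the columns b, Ab, ..., A^(r-1) b, using only matrix-vector products."""
    vectors = [b]
    for _ in range(r - 1):
        vectors.append(matvec(vectors[-1]))
    return np.column_stack(vectors)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)

K = krylov_basis(lambda v: A @ v, b, r=4)   # 6 x 4 Krylov matrix
print(np.linalg.matrix_rank(K))             # dimension of the order-4 Krylov subspace
```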
Because the vectors usually soon become almostlinearly dependentdue to the properties ofpower iteration, methods relying on Krylov subspace frequently involve someorthogonalizationscheme, such asLanczos iterationforHermitian matricesorArnoldi iterationfor more general matrices.
The best known Krylov subspace methods are theConjugate gradient,IDR(s)(Induced dimension reduction),GMRES(generalized minimum residual),BiCGSTAB(biconjugate gradient stabilized),QMR(quasi minimal residual),TFQMR(transpose-free QMR) andMINRES(minimal residual method).
|
https://en.wikipedia.org/wiki/Krylov_subspace
|
A limited dependent variable is a variable whose range of possible values is "restricted in some important way."[1] In econometrics, the term is often used when estimation of the relationship between the limited dependent variable of interest and other variables requires methods that take this restriction into account. For example, this may arise when the variable of interest is constrained to lie between zero and one, as in the case of a probability, or is constrained to be positive, as in the case of wages or hours worked.
Limited dependent variablemodels include:[2]
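One such model, the probit for a binary (0/1) outcome, can be sketched as follows (illustrative Python using simulated data and the statsmodels library; not from the source):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
# Latent index plus noise; the observed outcome is restricted to {0, 1}.
y = (0.5 + 1.2 * x + rng.standard_normal(n) > 0).astype(int)

X = sm.add_constant(x)
model = sm.Probit(y, X).fit(disp=False)
print(model.params)   # estimates of the constant and the slope
```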
|
https://en.wikipedia.org/wiki/Limited_dependent_variable
|
Apersonal information manager(often referred to as aPIM toolor, more simply, aPIM) is a type of application software that functions as a personal organizer. The acronymPIMis now, more commonly, used in reference to personal information management as a field of study.[1]As an information management tool, a PIM tool's purpose is to facilitate the recording, tracking, and management of certain types of "personal information".
Personal information can include any of the following:[2]
Some PIM/PDM software products can synchronize data over a computer network, including mobile ad hoc networks (MANETs). This feature typically stores the personal data on cloud drives, allowing continuous concurrent data updates and access on the user's computers, including desktop computers, laptop computers, and mobile devices such as personal digital assistants or smartphones.[3]
Prior to the introduction of the term "Personal digital assistant" ("PDA") by Apple in 1992, handheld personal organizers such as thePsion Organiserand theSharp Wizardwere also referred to as "PIMs".[4][5]
The time management and communications functions of PIMs largely migrated from PDAs to smartphones, with Apple, RIM (Research In Motion, nowBlackBerry), and others all manufacturing smartphones that offer most of the functions of earlier PDAs.
|
https://en.wikipedia.org/wiki/Personal_information_manager
|
Indeep learning,pruningis the practice of removingparametersfrom an existingartificial neural network.[1]The goal of this process is to reduce the size (parameter count) of the neural network (and therefore thecomputational resourcesrequired to run it) whilst maintaining accuracy. This can be compared to the biological process ofsynaptic pruningwhich takes place inmammalianbrains during development.[2]
A basic algorithm for pruning is as follows:[3][4]
Most work on neural network pruning focuses on removing weights, namely, setting their values to zero. Early work also suggested changing the values of the weights that are not pruned.[5]
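A minimal sketch of the common magnitude-based variant of this idea (illustrative NumPy, not the specific algorithm cited above): the weights with the smallest absolute values are set to zero.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the `fraction` of entries with the smallest absolute value."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(weights) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W_pruned = magnitude_prune(W, fraction=0.5)
print((W_pruned == 0).mean())   # roughly the requested sparsity
```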
|
https://en.wikipedia.org/wiki/Pruning_(artificial_neural_network)
|
LinOTPis Linux-based software to manage authentication devices fortwo-factor authenticationwithone time passwords.
It is implemented as a web service based on the Python framework Pylons, and therefore requires a web server to run in.
LinOTP is mainly developed by the German company KeyIdentity GmbH. Its core components are licensed under theAffero General Public License.
It is an open source authentication server certified[2]by theOATH initiative for open authenticationfor its 2.4 version.
As a web service, LinOTP provides aREST-like web API.[3]All functions can be accessed via Pylons controllers. Responses are returned as aJSONobject.
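As an illustration of this API style, a token-validation request might look like the following (a hypothetical sketch using Python's requests library; the /validate/check endpoint, parameter names, and server URL are assumptions based on typical LinOTP deployments, not the text above):

```python
import requests

# Hypothetical server URL and credentials; adjust to a real deployment.
response = requests.get(
    "https://linotp.example.com/validate/check",
    params={"user": "alice", "pass": "1234567890"},  # PIN followed by the one-time password
    verify=True,
)
result = response.json()          # responses come back as JSON objects
print(result.get("result", {}))
```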
LinOTP is designed in a modular way, enabling user store modules and token modules. Thus, it is capable of supporting a wide range of different tokens.[4]
|
https://en.wikipedia.org/wiki/LinOTP
|
Inprobability theoryandstatistics, theZipf–Mandelbrot lawis adiscrete probability distribution. Also known as thePareto–Zipf law, it is apower-lawdistribution onranked data, named after thelinguistGeorge Kingsley Zipf, who suggested a simpler distribution calledZipf's law, and the mathematicianBenoit Mandelbrot, who subsequently generalized it.
Theprobability mass functionis given by
whereHN,q,s{\displaystyle H_{N,q,s}}is given by
which may be thought of as a generalization of aharmonic number. In the formula,k{\displaystyle k}is the rank of the data, andq{\displaystyle q}ands{\displaystyle s}are parameters of the distribution. In the limit asN{\displaystyle N}approaches infinity, this becomes theHurwitz zeta functionζ(s,q){\displaystyle \zeta (s,q)}. For finiteN{\displaystyle N}andq=0{\displaystyle q=0}the Zipf–Mandelbrot law becomesZipf's law. For infiniteN{\displaystyle N}andq=0{\displaystyle q=0}it becomes azeta distribution.
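The probability mass function described above (with the displays understood as f(k) proportional to 1/(k + q)^s, normalized by H_{N,q,s} = sum over i = 1..N of 1/(i + q)^s) can be computed directly; the sketch below is illustrative Python:

```python
def zipf_mandelbrot_pmf(N, q, s):
    """Probability of each rank k = 1..N under the Zipf-Mandelbrot law."""
    H = sum(1.0 / (i + q) ** s for i in range(1, N + 1))   # generalized harmonic number
    return [1.0 / ((k + q) ** s * H) for k in range(1, N + 1)]

pmf = zipf_mandelbrot_pmf(N=10, q=2.7, s=1.1)
print(sum(pmf))        # 1.0 (up to floating-point error)
print(pmf[:3])         # probabilities of the three highest ranks
```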
The distribution of words ranked by theirfrequencyin a randomtext corpusis approximated by apower-lawdistribution, known asZipf's law.
If one plots the frequency rank of words contained in a moderately sized corpus of text data versus the number of occurrences or actual frequencies, one obtains apower-lawdistribution, withexponentclose to one (but see Powers, 1998 and Gelbukh & Sidorov, 2001). Zipf's law implicitly assumes a fixed vocabulary size, but theHarmonic serieswiths= 1 does not converge, while the Zipf–Mandelbrot generalization withs> 1 does. Furthermore, there is evidence that the closed class of functional words that define a language obeys a Zipf–Mandelbrot distribution with different parameters from the open classes of contentive words that vary by topic, field and register.[1]
In ecological field studies, therelative abundance distribution(i.e. the graph of the number of species observed as a function of their abundance) is often found to conform to a Zipf–Mandelbrot law.[2]
Within music, many metrics of measuring "pleasing" music conform to Zipf–Mandelbrot distributions.[3]
|
https://en.wikipedia.org/wiki/Zipf%E2%80%93Mandelbrot_law
|
Inmathematics,Sendov's conjecture, sometimes also calledIlieff's conjecture, concerns the relationship between the locations ofrootsandcritical pointsof apolynomial functionof acomplex variable. It is named afterBlagovest Sendov.
Theconjecturestates that for a polynomial
with all rootsr1, ...,rninside theclosed unit disk|z| ≤ 1, each of thenroots is at a distance no more than 1 from at least one critical point.
TheGauss–Lucas theoremsays that all of the critical points lie within theconvex hullof the roots. It follows that the critical points must be within the unit disk, since the roots are.
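The statement is easy to test numerically for particular polynomials. The sketch below (illustrative Python) draws random roots in the closed unit disk and checks that each root lies within distance 1 of some critical point:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7
# Random roots inside the closed unit disk.
roots = np.sqrt(rng.uniform(0, 1, n)) * np.exp(2j * np.pi * rng.uniform(0, 1, n))

coeffs = np.poly(roots)                      # polynomial with these roots
critical_points = np.roots(np.polyder(coeffs))

# Sendov's conjecture: every root is within distance 1 of some critical point.
distances = np.abs(roots[:, None] - critical_points[None, :])
print(distances.min(axis=1).max() <= 1 + 1e-9)   # True for this example
```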
The conjecture has beenprovenforn< 9 by Brown-Xiang and fornsufficiently largebyTao.[1][2]
The conjecture was first proposed byBlagovest Sendovin 1959; he described the conjecture to his colleagueNikola Obreshkov. In 1967 the conjecture was misattributed[3]to Ljubomir Iliev byWalter Hayman.[4]In 1969 Meir and Sharma proved the conjecture for polynomials withn< 6. In 1991 Brown proved the conjecture forn< 7. Borcea extended the proof ton< 8 in 1996. Brown and Xiang[5]proved the conjecture forn< 9 in 1999.Terence Taoproved the conjecture for sufficiently largenin 2020.
|
https://en.wikipedia.org/wiki/Sendov%27s_conjecture
|
Askunked termis a word or phrase that becomes difficult to use because it isevolvingfrom one meaning to another, perhaps inconsistent or evenopposite, usage,[1]or that becomes difficult to use due to other controversy surrounding the term.[2]Puristsmay insist on the old usage, whiledescriptivistsmay be more open to newer usages. Readers may not know which sense is meant especially whenprescriptivistsinsist on a meaning that accords with interests that often conflict.[citation needed]
The term was coined by thelexicographerBryan A. GarnerinGarner's Modern American Usageand has since been adopted by some otherstyle guides.[2]
Garner recommends avoiding such terms if their use may distract readers from the intended meaning of a text.[3]
Some terms, such as "fulsome", may become skunked, and then eventually revert to their original meaning over time.[4]
|
https://en.wikipedia.org/wiki/Skunked_term
|
Inlinear algebra, aToeplitz matrixordiagonal-constant matrix, named afterOtto Toeplitz, is amatrixin which each descending diagonal from left to right is constant. For instance, the following matrix is a Toeplitz matrix:
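Any matrix with constant descending diagonals qualifies; a representative 4 × 4 example is

\[ \begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 1 & 2 & 3 \\ 6 & 5 & 1 & 2 \\ 7 & 6 & 5 & 1 \end{pmatrix}. \]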
Anyn×n{\displaystyle n\times n}matrixA{\displaystyle A}of the form
is aToeplitz matrix. If thei,j{\displaystyle i,j}element ofA{\displaystyle A}is denotedAi,j{\displaystyle A_{i,j}}then we have
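Written out, the constant-diagonal property is

\[ A_{i,j} = A_{i+1,\,j+1} = a_{i-j}. \]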
A Toeplitz matrix is not necessarilysquare.
A matrix equation of the form
is called aToeplitz systemifA{\displaystyle A}is a Toeplitz matrix. IfA{\displaystyle A}is ann×n{\displaystyle n\times n}Toeplitz matrix, then the system has at most only2n−1{\displaystyle 2n-1}unique values, rather thann2{\displaystyle n^{2}}. We might therefore expect that the solution of a Toeplitz system would be easier, and indeed that is the case.
Toeplitz systems can be solved by algorithms such as theSchur algorithmor theLevinson algorithminO(n2){\displaystyle O(n^{2})}time.[1][2]Variants of the latter have been shown to be weakly stable (i.e. they exhibitnumerical stabilityforwell-conditionedlinear systems).[3]The algorithms can also be used to find thedeterminantof a Toeplitz matrix inO(n2){\displaystyle O(n^{2})}time.[4]
A Toeplitz matrix can also be decomposed (i.e. factored) inO(n2){\displaystyle O(n^{2})}time.[5]The Bareiss algorithm for anLU decompositionis stable.[6]An LU decomposition gives a quick method for solving a Toeplitz system, and also for computing the determinant.
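In SciPy, for example, a Toeplitz system can be assembled and solved from just its first column and first row (an illustrative sketch; scipy.linalg.solve_toeplitz uses Levinson recursion, an O(n^2) method):

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

c = np.array([4.0, 1.0, 0.5, 0.2])   # first column
r = np.array([4.0, 2.0, 1.0, 0.5])   # first row
b = np.array([1.0, 2.0, 3.0, 4.0])

x_fast = solve_toeplitz((c, r), b)           # O(n^2) Levinson-style solve
x_ref = np.linalg.solve(toeplitz(c, r), b)   # generic O(n^3) solve for comparison
print(np.allclose(x_fast, x_ref))            # True
```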
Theconvolutionoperation can be constructed as a matrix multiplication, where one of the inputs is converted into a Toeplitz matrix. For example, the convolution ofh{\displaystyle h}andx{\displaystyle x}can be formulated as:
This approach can be extended to computeautocorrelation,cross-correlation,moving averageetc.
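For instance (an illustrative NumPy/SciPy sketch, not from the source), the full convolution of h and x equals a Toeplitz matrix built from h applied to x:

```python
import numpy as np
from scipy.linalg import toeplitz

h = np.array([1.0, 2.0, 3.0])        # filter
x = np.array([4.0, 5.0, 6.0, 7.0])   # signal

# First column is h padded with zeros; first row is h[0] followed by zeros.
col = np.concatenate([h, np.zeros(len(x) - 1)])
row = np.concatenate([[h[0]], np.zeros(len(x) - 1)])
H = toeplitz(col, row)               # shape (len(h)+len(x)-1, len(x))

print(np.allclose(H @ x, np.convolve(h, x)))   # True
```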
A bi-infinite Toeplitz matrix (i.e. entries indexed byZ×Z{\displaystyle \mathbb {Z} \times \mathbb {Z} })A{\displaystyle A}induces alinear operatoronℓ2{\displaystyle \ell ^{2}}.
The induced operator isboundedif and only if the coefficients of the Toeplitz matrixA{\displaystyle A}are the Fourier coefficients of someessentially boundedfunctionf{\displaystyle f}.
In such cases,f{\displaystyle f}is called thesymbolof the Toeplitz matrixA{\displaystyle A}, and the spectral norm of the Toeplitz matrixA{\displaystyle A}coincides with theL∞{\displaystyle L^{\infty }}norm of its symbol. Theproofcan be found as Theorem 1.1 of Böttcher and Grudsky.[8]
|
https://en.wikipedia.org/wiki/Toeplitz_matrix
|
Gray-box testing(International English spelling:grey-box testing) is a combination ofwhite-box testingandblack-box testing. The aim of this testing is to search for the defects, if any, due to improper structure or improper usage of applications.[1][2]
A black-box tester is unaware of the internal structure of the application to be tested, while a white-box tester has access to the internal structure of the application. A gray-box tester partially knows the internal structure, which includes access to the documentation of internal data structures as well as the algorithms used.[3]
Gray-box testers require both high-level and detailed documents describing the application, which they collect in order to define test cases.[4]
Gray-box testing is beneficial because it takes the straightforward technique of black-box testing and combines it with the code-targeted systems in white-box testing.
Gray-box testing is based on requirement test case generation because it presents all the conditions before the program is tested by using the assertion method. A requirementspecification languageis used to make it easy to understand the requirements and verify its correctness.[5]
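As a small illustration (a hypothetical Python example, not from the source), a gray-box test exercises only the public interface but designs its inputs and assertions using documented knowledge of the internal data structure; here, the documented fact that lookups are cached in a dictionary:

```python
class SquareService:
    """Toy component: public API is `square`, internally backed by a dict cache."""
    def __init__(self):
        self._cache = {}            # internal structure known from the design documents
    def square(self, n):
        if n not in self._cache:
            self._cache[n] = n * n
        return self._cache[n]

def test_square_uses_cache():
    service = SquareService()
    # Black-box part: check observable behaviour through the public API.
    assert service.square(3) == 9
    # Gray-box part: the assertion is informed by the documented internals.
    assert 3 in service._cache and service._cache[3] == 9

test_square_uses_cache()
print("gray-box test passed")
```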
Object-oriented software consists primarily of objects, where objects are single indivisible units having executable code and/or data. Some assumptions needed for the application of gray-box testing are stated below.
Cem Kanerdefines "gray-box testing as involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of view of the tester".[9]Gray-box testing techniques are:
The distributed nature of Web services allows gray-box testing to detect defects within a service-oriented architecture (SOA). White-box testing is often unsuitable for Web services because it deals directly with internal structures that may not be accessible. State-of-the-art methods such as message mutation, which automatically generates tests for large arrays to exercise exception-handling states and flows without source code or binaries, can be used instead. Such a strategy helps push gray-box testing nearer to the outcomes of white-box testing.
|
https://en.wikipedia.org/wiki/Gray-box_testing
|
A subsetS{\displaystyle S}of atopological spaceX{\displaystyle X}is called aregular open setif it is equal to theinteriorof itsclosure; expressed symbolically, ifInt(S¯)=S{\displaystyle \operatorname {Int} ({\overline {S}})=S}or, equivalently, if∂(S¯)=∂S,{\displaystyle \partial ({\overline {S}})=\partial S,}whereIntS,{\displaystyle \operatorname {Int} S,}S¯{\displaystyle {\overline {S}}}and∂S{\displaystyle \partial S}denote, respectively, the interior, closure andboundaryofS.{\displaystyle S.}[1]
A subsetS{\displaystyle S}ofX{\displaystyle X}is called aregular closed setif it is equal to the closure of its interior; expressed symbolically, ifIntS¯=S{\displaystyle {\overline {\operatorname {Int} S}}=S}or, equivalently, if∂(IntS)=∂S.{\displaystyle \partial (\operatorname {Int} S)=\partial S.}[1]
IfR{\displaystyle \mathbb {R} }has its usualEuclidean topologythen the open setS=(0,1)∪(1,2){\displaystyle S=(0,1)\cup (1,2)}is not a regular open set, sinceInt(S¯)=(0,2)≠S.{\displaystyle \operatorname {Int} ({\overline {S}})=(0,2)\neq S.}Everyopen intervalinR{\displaystyle \mathbb {R} }is a regular open set and every non-degenerate closed interval (that is, a closed interval containing at least two distinct points) is a regular closed set. A singleton{x}{\displaystyle \{x\}}is a closed subset ofR{\displaystyle \mathbb {R} }but not a regular closed set because its interior is the empty set∅,{\displaystyle \varnothing ,}so thatInt{x}¯=∅¯=∅≠{x}.{\displaystyle {\overline {\operatorname {Int} \{x\}}}={\overline {\varnothing }}=\varnothing \neq \{x\}.}
A subset ofX{\displaystyle X}is a regular open set if and only if its complement inX{\displaystyle X}is a regular closed set.[2]Every regular open set is anopen setand every regular closed set is aclosed set.
Eachclopen subsetofX{\displaystyle X}(which includes∅{\displaystyle \varnothing }andX{\displaystyle X}itself) is simultaneously a regular open subset and regular closed subset.
The interior of a closed subset ofX{\displaystyle X}is a regular open subset ofX{\displaystyle X}and likewise, the closure of an open subset ofX{\displaystyle X}is a regular closed subset ofX.{\displaystyle X.}[2]The intersection (but not necessarily the union) of two regular open sets is a regular open set. Similarly, the union (but not necessarily the intersection) of two regular closed sets is a regular closed set.[2]
The collection of all regular open sets inX{\displaystyle X}forms acomplete Boolean algebra; thejoinoperation is given byU∨V=Int(U∪V¯),{\displaystyle U\vee V=\operatorname {Int} ({\overline {U\cup V}}),}themeetisU∧V=U∩V{\displaystyle U\land V=U\cap V}and the complement is¬U=Int(X∖U).{\displaystyle \neg U=\operatorname {Int} (X\setminus U).}
|
https://en.wikipedia.org/wiki/Regular_closed_set
|
Loaded language[a]isrhetoricused to influence an audience by using words and phrases with strongconnotations. This type of language is very often made vague to more effectivelyinvoke an emotional responseand/or exploitstereotypes.[1][2][3]Loaded words and phrases have significant emotional implications and involve strongly positive or negative reactions beyond theirliteral meaning.
Loaded terms, also known as emotive or ethical words, were clearly described byCharles Stevenson.[4][5][6]He noticed that there are words that do not merely describe a possible state of affairs. "Terrorist" is not used only to refer to a person who commits specific actions with a specific intent. Words such as "torture" or "freedom" carry with them something more than a simple description of a concept or an action.[7]They have a "magnetic" effect, an imperative force, a tendency to influence the interlocutor's decisions.[8]They are strictly bound to moral values leading to value judgements and potentially triggering specific emotions. For this reason, they have an emotive dimension. In the modern psychological terminology, we can say that these terms carry "emotional valence",[9]as they presuppose and generate a value judgement that can lead to an emotion.[10]
The appeal to emotion is in contrast to an appeal tologicandreason. Authors R. Malcolm Murray and Nebojša Kujundžić distinguish "prima faciereasons" from "considered reasons" when discussing this. An emotion, elicited via emotive language, may form aprima faciereason for action, but further work is required before one can obtain aconsideredreason.[2]
Emotive arguments and loaded language are particularly persuasive because they exploit the human weakness for acting immediately based upon an emotional response,withoutsuch further considered judgement. Due to such potential for emotional complication, it is generally advisable to avoid loaded language in argument or speech when fairness and impartiality is one of the goals.Anthony Weston, for example, admonishes students and writers: "In general, avoid language whose only function is to sway the emotions".[1][2]
One aspect of loaded language is that loaded words and phrases occur in pairs, sometimes aspolitical framingtechniques by individuals with opposing agendas. Heller calls these "a Boo! version and a Hooray! version" to differentiate those with negative and positive emotional connotations. Examples includebureaucratversuspublic servant,anti-abortionversuspro-life,regimeversusgovernment, andelitistversusexpert.[11]
Politiciansemploy euphemisms,[12]and study how to use them effectively: which words to use or avoid using to gain political advantage or disparage an opponent. Speechwriter and journalist Richard Heller gives the example that it is common for a politician to advocate "investment in public services," because it has a more favorable connotation than "public spending."[11]
In the 1946 essay "Politics and the English Language",George Orwelldiscussed the use of loaded language in political discourse:
The wordFascismhas now no meaning except in so far as it signifies "something not desirable." The wordsdemocracy,socialism, freedom, patriotic, realistic, justicehave each of them several different meanings which cannot be reconciled with one another. In the case of a word likedemocracy, not only is there no agreed definition, but the attempt to make one is resisted from all sides. It is almost universally felt that when we call a country democratic we are praising it: consequently the defenders of every kind of regime claim that it is a democracy, and fear that they might have to stop using that word if it were tied down to any one meaning.[13]
|
https://en.wikipedia.org/wiki/Loaded_language
|
Audio visual speech recognition(AVSR) is a technique that usesimage processingcapabilities inlip readingto aidspeech recognitionsystems in recognizing undeterministicphonesor giving preponderance among near probability decisions.
Each system of lip reading and speech recognition works separately, and their results are then mixed at the stage of feature fusion. As the name suggests, the approach has two parts: an audio part and a visual part. In the audio part, features such as the log-mel spectrogram and MFCCs are extracted from the raw audio samples and a model is built to produce a feature vector from them. In the visual part, some variant of a convolutional neural network is generally used to compress the image into a feature vector. The two vectors (audio and visual) are then concatenated and used to predict the target.
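A minimal sketch of this feature-fusion step (illustrative NumPy, with made-up dimensionalities): the audio and visual feature vectors are simply concatenated before the final prediction layer.

```python
import numpy as np

rng = np.random.default_rng(0)

audio_features = rng.standard_normal(128)   # e.g. a pooled log-mel/MFCC embedding
visual_features = rng.standard_normal(256)  # e.g. a CNN embedding of the lip region

fused = np.concatenate([audio_features, visual_features])   # feature fusion

# A linear classifier over the fused vector stands in for the prediction model.
num_classes = 10
W = rng.standard_normal((num_classes, fused.size))
logits = W @ fused
print(int(np.argmax(logits)))   # predicted class index
```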
|
https://en.wikipedia.org/wiki/Audio-visual_speech_recognition
|
In the physics ofhydrodynamics, aglobal modeof a system is one in which the system executes coherentoscillationsin time. Suppose a quantityy(x,t){\displaystyle y(x,t)}which depends on spacex{\displaystyle x}and timet{\displaystyle t}is governed by somepartial differential equationwhich does not have an explicit dependence ont{\displaystyle t}. Then a global mode is a solution of this PDE of the formy(x,t)=y^(x)eiωt{\displaystyle y(x,t)={\hat {y}}(x)e^{i\omega t}}, for somefrequencyω{\displaystyle \omega }. Ifω{\displaystyle \omega }is complex, then the imaginary part corresponds to the mode exhibitingexponential growthorexponential decay.
The concept of a global mode can be compared to that of anormal mode; the PDE may be thought of as adynamical systemof infinitely many equations coupled together. Global modes are used in thestability analysisofhydrodynamical systems.Philip Drazinintroduced the concept of a global mode in his 1974 paper, and gave a technique for finding the normal modes of a linear PDE problem in which the coefficients or geometry vary slowly inx{\displaystyle x}. This technique is based on theWKBJ approximation, which is a special case ofmultiple-scale analysis.[1]His method extends theBriggs–Bers technique, which gives a stability analysis for linear PDEs with constant coefficients.[2]
Since Drazin's 1974 paper, other authors have studied more realistic problems in fluid dynamics using a global mode analysis. Such problems are often highlynonlinear, and attempts to analyse them have often relied on laboratory or numerical experiment.[2]Examples of global modes in practice include the oscillatorywakesproduced when fluid flows past an object, such as avortex street.
|
https://en.wikipedia.org/wiki/Global_mode
|
Instatistics, avarimax rotationis used to simplify the expression of a particular sub-space in terms of just a few major items each. The actual coordinate system is unchanged, it is theorthogonalbasis that is being rotated to align with those coordinates. The sub-space found withprincipal component analysisorfactor analysisis expressed as a dense basis with many non-zero weights which makes it hard to interpret. Varimax is so called because it maximizes the sum of thevariancesof the squared loadings (squared correlations between variables and factors). Preserving orthogonality requires that it is a rotation that leaves the sub-space invariant. Intuitively, this is achieved if, (a) any given variable has a high loading on a single factor but near-zero loadings on the remaining factors and if (b) any given factor is constituted by only a few variables with very high loadings on this factor while the remaining variables have near-zero loadings on this factor. If these conditions hold, the factor loading matrix is said to have "simple structure," and varimax rotation brings the loading matrix closer to such simple structure (as much as the data allow). From the perspective of individuals measured on the variables, varimax seeks a basis that most economically represents each individual—that is, each individual can be well described by alinear combinationof only a few basis functions.
One way of expressing the varimax criterion formally is this:
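In one standard formulation (reconstructed here; Λ is the p × k matrix of loadings and R the orthogonal rotation sought), varimax maximizes

\[ R_{\mathrm{varimax}} = \arg\max_{R}\; \frac{1}{p}\sum_{j=1}^{k}\left[\,\sum_{i=1}^{p}(\Lambda R)_{ij}^{4} \;-\; \frac{1}{p}\Bigl(\sum_{i=1}^{p}(\Lambda R)_{ij}^{2}\Bigr)^{2}\right]. \]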
Suggested byHenry Felix Kaiserin 1958,[1]it is a popular scheme for orthogonal rotation (where all factors remain uncorrelated with one another).
A summary of the use of varimax rotation and of other types of factor rotation is presented inthis article on factor analysis.
This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
|
https://en.wikipedia.org/wiki/Varimax_rotation
|
The following is a list ofstatistical software.
|
https://en.wikipedia.org/wiki/List_of_statistical_software
|
Avirtual assistant(typically abbreviated toVA, also called avirtual office assistant)[1]is generally self-employed and providesprofessionaladministrative, technical, or creative (social) assistance to clients remotely from ahome office.[2]Because virtual assistants are independent contractors rather than employees, clients are not responsible for any employee-related taxes, insurance, or benefits, except in the context that those indirect expenses are included in the VA's fees. Clients also avoid the logistical problem of providing extra office space, equipment, or supplies. Clients pay for 100% productive work and can work with virtual assistants, individually, or in multi-VA firms to meet their exact needs. Virtual assistants usually work for othersmall businesses[3]but can also support busy executives. It is estimated that there are as few as 5,000 to 10,000 or as many as 25,000 virtual assistants worldwide. The profession is growing in centralized economies with "fly-in fly-out" staffing practices.[4][5][6]
In terms of pay, according to Glassdoor, the annual salary for virtual assistants in the US is $35,922.[7]However, worldwide, many virtual assistants work as freelancers on an hourly wage. One recent survey involving 400 virtual assistants on the popular freelancer siteUpworkshows a huge discrepancy in hourly pay commanded by virtual assistants in different countries.[8]
Common modes of communication and data delivery include the internet, e-mail and phone-call conferences,[9] online workspaces, and fax machines. Increasingly, virtual assistants use technologies such as Skype, Zoom, Slack, and Google Voice. Professionals in this business work on a contractual basis, and long-lasting cooperation is standard. Typically, office administrative experience is expected in positions such as executive assistant, office manager/supervisor, secretary, legal assistant, paralegal, legal secretary, real estate assistant, and information technology.
In recent years, virtual assistants have also worked their way into many mainstream businesses, and with the advent ofVOIPservices such as Skype and Zoom, it has been possible to have a virtual assistant who can answer phone calls remotely, without the end user's knowledge. This allows businesses to add a personal touch in the form of a receptionist, without the additional cost of hiring someone.[citation needed]
Virtual assistants consist of individuals as well as companies who work remotely as independent professionals, providing a wide range of products and services, both to businesses and consumers. Virtual assistants perform many different roles, including typical secretarial work, website editing, social media marketing, customer service, data entry, accounts (MYOB, Quick books), and many other remote tasks.
Virtual assistants come from a variety of business backgrounds, but most have several years' experience earned in the "real" (non-virtual) business world, or several years' experience working online or remotely.
A dedicated virtual assistant is someone working in the office under the management of a company. The facility and internet connection as well as training are provided by the company, though not in all cases. The home-based virtual assistant works either in anoffice sharingenvironment or from home. General VAs are sometimes called an online administrative assistant, online personal assistant, or online sales assistant. A virtual webmaster assistant, virtual marketing assistant, and virtual content writing assistant are specific professionals that are usually experienced employees from corporate environments who have set up their own virtual offices.
Virtual assistants were an integral part of the 2007 bestselling bookThe 4-Hour WorkweekbyTim Ferriss.[10]Ferriss claimed to have hired virtual assistants to check his email, pay his bills, and run parts of his company.[11]
|
https://en.wikipedia.org/wiki/Virtual_assistant_(occupation)
|
Virtual collective consciousness(VCC) is a term rebooted and promoted by two behavioral scientists, Yousri Marzouki and Olivier Oullier in their 2012Huffington Postarticle titled: "Revolutionizing Revolutions: Virtual Collective Consciousness and theArab Spring",[1]after its first appearance in 1999-2000.[2]VCC is now defined as an internal knowledge catalyzed bysocial mediaplatforms and shared by a plurality of individuals driven by the spontaneity, the homogeneity, and the synchronicity of their online actions.[3]VCC occurs when a large group of persons, brought together by a social media platform think and act with one mind and share collective emotions.[4]Thus, they are able to coordinate their efforts efficiently, and could rapidly spread their word to a worldwide audience.[5]When interviewed about the concept of VCC that appeared in the book -Hyperconnectivity and the Future of Internet Communication- he edited,[6]Professor ofPervasive Computing,Adrian David Cheokmentioned the following: "The idea of a global (collective) virtual consciousness is a bottom-up process and a rather emergent property resulting from a momentum of complex interactions taking place in social networks. This kind of collective behaviour (or intelligence) results from a collision between a physical world and a virtual world and can have a real impact in our life by driving collective action."[7]
In 1999-2000, Richard Glen Boire[2]provided a cursory mention and the only occurrence of the term[citation needed][original research?]"Virtual collective consciousness" in his text as follows:
The trend of technology is to overcome the limitations of the human body. And, the Web has been characterized as a virtual collective consciousness and unconsciousness
The recent definition of VCC evolved from the first empirical study that provided a cyberpsychological insight into the contribution of Facebook to the 2011Tunisian revolution. In this study, the concept was originally called "collective cyberconsciousness".[8]The latter is an extension of the idea of "collective consciousness" coupled with "citizen media" usage. The authors of this study also made a parallel between this original definition of VCC and other comparable concepts such as Durkheim's collective representation,Žižek's "collective mind"[9]or Boguta's "new collective consciousness" that he used to describe the computational history of the Internet shutdown during theEgyptian revolution.[10]Since VCC is the byproduct of the network's successful actions, then these actions must be timely, acute, rapid, domain-specific, and purpose-oriented to successfully achieve their goal. Before reaching a momentum of complexity, each collective behavior starts by a spark that triggers a chain of events leading to a crystallized stance of a tremendous amount of interactions.[11]Thus, VCC is an emergent global pattern from these individual actions.
In 2012, the term virtual collective consciousness resurfaced and was brought to light after extending its applications to the Egyptian case and the whole social networking major impact on the success of the so-calledArab Spring.[1][12]Moreover, the acronym VCC was suggested to identify the theoretical framework covering on-line behaviors leading to a virtual collective consciousness. Hence, online social networks have provided a new and faster way of establishing or modifying "collective consciousness" that was paramount to the 2011 uprisings in the Arab world.[13][14]
Various theoretical references ranging from sociology to computer science were mentioned in order to account for the key features that render the framework for a virtual collective consciousness. The following list is not exhaustive, but the references it contains are often highlighted:
Besides the studied effect of social networking on the Tunisian and Egyptian revolutions, the former via Facebook and the latter via Twitter other applications were studied under the prism of VCC framework:
|
https://en.wikipedia.org/wiki/Virtual_collective_consciousness
|
Master Passwordis a type ofalgorithmfirst implemented byMaarten Billemontfor creating uniquepasswordsin a reproducible manner. It differs from traditionalpassword managersin that the passwords are not stored on disk or in the cloud, but are regenerated every time from information entered by the user: Their name, amaster password, and a unique identifier for the service the password is intended for (usually the URL).[1]
By not storing the passwords anywhere, this approach makes it harder for attackers to steal or intercept them. It also removes the need for synchronization between devices, backups of potential password databases and risks ofdata breach. This is sometimes calledsync-less password management.
Billemont's implementation involves the following parameters:[1]
In Billemont's implementation, the master key is a global 64-byte secret key generated from the user's secretmaster passwordand salted by their full name. The salt is used to avoid attacks based onrainbow tables. Thescryptalgorithm, an intentionally slowkey derivation function, is used for generating the master key to make abrute-force attackinfeasible.
The template seed is a site-specific secret in binary form, generated from the master key, the site name and the counter using theHMAC-SHA256algorithm. It is later converted to a character string using the password templates. The template seed makes every password unique to the website and to the user.
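The stages described here can be sketched with the standard library's scrypt and HMAC primitives. This is an illustrative reconstruction only: the exact byte-level encodings, scrypt cost parameters, and character-template tables of Billemont's algorithm are not given in the text, so the ones below are placeholders.

```python
import hashlib
import hmac

def master_key(full_name: str, master_password: str) -> bytes:
    # Global 64-byte secret derived with the intentionally slow scrypt KDF,
    # salted with the user's full name. Cost parameters here are placeholders.
    return hashlib.scrypt(master_password.encode(), salt=full_name.encode(),
                          n=2**14, r=8, p=1, dklen=64)

def template_seed(key: bytes, site_name: str, counter: int) -> bytes:
    # Site-specific secret from HMAC-SHA256 over the site name and counter.
    message = f"{site_name}:{counter}".encode()      # encoding is an assumption
    return hmac.new(key, message, hashlib.sha256).digest()

def site_password(seed: bytes, template: str = "Aaaaaaaaaaaaaaaaaaan") -> str:
    # Map seed bytes onto a character template; the template and character
    # classes below are illustrative, not the algorithm's real tables.
    classes = {"A": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
               "a": "abcdefghijklmnopqrstuvwxyz",
               "n": "0123456789"}
    return "".join(classes[c][seed[i] % len(classes[c])]
                   for i, c in enumerate(template))

key = master_key("Jane Doe", "correct horse battery staple")
seed = template_seed(key, "example.com", 1)
print(site_password(seed))   # regenerated identically on every device
```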
The binary template seed is then converted to one of six available password types. The default type is theMaximum Security Password, others can be selected if the service's password policy does not allow passwords of that format:
Billemont also created multiple free software implementations of the Master Password algorithm, licensed under the GPLv3:[2]
|
https://en.wikipedia.org/wiki/Master_Password_(algorithm)
|
This is aglossary of graph theory.Graph theoryis the study ofgraphs, systems of nodes orverticesconnected in pairs by lines oredges.
|
https://en.wikipedia.org/wiki/Interval_(graph_theory)
|
Causality: Models, Reasoning, and Inference(2000;[1]updated 2009[2]) is a book byJudea Pearl.[3]It is an exposition and analysis ofcausality.[4][5]It is considered to have been instrumental in laying the foundations of the modern debate oncausal inferencein several fields includingstatistics,computer scienceandepidemiology.[6]In this book, Pearl espouses the Structural Causal Model (SCM) that usesstructural equation modeling.[7]This model is a competing viewpoint to theRubin causal model. Some of the material from the book was reintroduced in the more general-audience targetingThe Book of Why.
Pearl succeeds in bringing together in a general nonparametric framework the counterfactual tradition of causal analysis with the variants of structural equation modeling worth keeping. The graph theory that he uses to accomplish this fusion is often elegant. Thus, Causality is a major statement, which all who claim to know what causality is must read.
The book earned Pearl the 2001 Lakatos Award in Philosophy of Science.[9]
|
https://en.wikipedia.org/wiki/Causality_(book)
|
Varioussoftware package metricsare used inmodular programming. They have been mentioned byRobert Cecil Martinin his 2002 bookAgile software development: principles, patterns, and practices.
The termsoftware packagehere refers to a group of relatedclassesinobject-oriented programming.
|
https://en.wikipedia.org/wiki/Software_package_metrics
|
Plackett–Burman designsareexperimental designspresented in 1946 byRobin L. PlackettandJ. P. Burmanwhile working in the BritishMinistry of Supply.[1]Their goal was to find experimental designs for investigating the dependence of some measured quantity on a number ofindependent variables(factors), each takingLlevels, in such a way as to minimize thevarianceof the estimates of these dependencies using a limited number of experiments. Interactions between the factors were considered negligible. The solution to this problem is to find an experimental design whereeach combinationof levels for anypairof factors appears thesame number of times, throughout all the experimental runs (refer to table). A completefactorial designwould satisfy this criterion, but the idea was to find smaller designs.
For the case of two levels (L= 2), Plackett and Burman used themethodfound in 1933 byRaymond Paleyfor generatingorthogonal matriceswhose elements are all either 1 or −1 (Hadamard matrices). Paley's method could be used to find such matrices of sizeNfor mostNequal to a multiple of 4. In particular, it worked for all suchNup to 100 exceptN= 92. IfNis a power of 2, however, the resulting design is identical to afractional factorial design, so Plackett–Burman designs are mostly used whenNis a multiple of 4 but not a power of 2 (i.e.N= 12, 20, 24, 28, 36 …).[3]If one is trying to estimate less thanNparameters (including the overall average), then one simply uses a subset of the columns of the matrix.
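For two levels, such a design can be generated by cyclically shifting a generator row and appending a row of −1s. The sketch below (illustrative Python) uses the generator commonly tabulated for the 12-run design; it should be checked against a published table before use, and the orthogonality test at the end verifies the balance property.

```python
import numpy as np

def plackett_burman_from_generator(generator):
    """Build an N x (N-1) two-level design from cyclic shifts of the generator
    plus a final row of -1s (N = len(generator) + 1)."""
    g = np.array(generator)
    rows = [np.roll(g, shift) for shift in range(len(g))]
    rows.append(-np.ones_like(g))
    return np.vstack(rows)

# Generator commonly listed for the N = 12 run, 11 factor design.
gen12 = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
design = plackett_burman_from_generator(gen12)

# Balance check: every pair of columns is orthogonal, i.e. each combination
# of levels for any pair of factors appears equally often.
print(design.shape)                                        # (12, 11)
print(np.allclose(design.T @ design, 12 * np.eye(11)))     # True if orthogonal
```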
For the case of more than two levels, Plackett and Burman rediscovered designs that had previously been given byRaj Chandra BoseandK. Kishenat theIndian Statistical Institute.[4]Plackett and Burman give specifics for designs having a number of experiments equal to the number of levelsLto some integer power, forL= 3, 4, 5, or 7.
When interactions between factors are not negligible, they are often confounded in Plackett–Burman designs with the main effects, meaning that the designs do not permit one to distinguish between certain main effects and certain interactions. This is calledconfounding.
In 1993, Dennis Lin described a construction method via half-fractions of Plackett–Burman designs, using one column to take half of the rest of the columns.[5]The resulting matrix, minus that column, is a "supersaturated design"[6]for finding significant first order effects, under the assumption that few exist.
Box–Behnkendesigns can be made smaller, or very large ones constructed, by replacing thefractional factorialsandincomplete blockstraditionally used for plan and seed matrices, respectively, with Plackett–Burmans. For example, a quadratic design for 30 variables requires a 30 column PB plan matrix of zeroes and ones, replacing the ones in each line using PB seed matrices of −1s and +1s (for 15 or 16 variables) wherever a one appears in the plan matrix, creating a 557 runs design with values, −1, 0, +1, to estimate the 496 parameters of a full quadratic model. Addingaxial pointsallows estimating univariate cubic and quartic effects.
By equivocating certain columns with parameters to be estimated, Plackett–Burmans can also be used to construct mixed categorical and numerical designs, with interactions or high order effects, requiring no more than 4 runs more than the number of model parameters to be estimated. Sort by the a − 1 columns assigned to categorical variable A and following columns, where A = 1 + int(a·i/(max(i) + 0.00001)), i = row number and a = A's number of values. Next sort on columns assigned to any other categorical variables and following columns, repeating as needed. Such designs, if large, may otherwise be incomputable by standard search techniques like D-optimality. For example, 13 variables averaging 3 values each could have well over a million combinations to search. To estimate the 105 parameters in a quadratic model of 13 variables, one must formally exclude from consideration or compute |X′X| for well over C(10^6, 102), i.e. C(3^13, 105), or roughly 10^484 matrices.
This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
|
https://en.wikipedia.org/wiki/Plackett%E2%80%93Burman_design
|
Distributional semantics[1]is a research area that develops and studies theories and methods for quantifying and categorizing semantic similarities between linguistic items based on their distributional properties in large samples of language data. The basic idea of distributional semantics can be summed up in the so-calleddistributionalhypothesis:linguistic items with similar distributions have similar meanings.
Thedistributional hypothesisinlinguisticsis derived from thesemantic theoryof language usage, i.e. words that are used and occur in the samecontextstend to purport similar meanings.[2]
The underlying idea that "a word is characterized by the company it keeps" was popularized byFirthin the 1950s.[3]
The distributional hypothesis is the basis forstatistical semantics. Although the Distributional Hypothesis originated in linguistics,[4][5]it is now receiving attention incognitive scienceespecially regarding the context of word use.[6]
In recent years, the distributional hypothesis has provided the basis for the theory ofsimilarity-based generalizationin language learning: the idea that children can figure out how to use words they've rarely encountered before by generalizing about their use from distributions of similar words.[7][8]
The distributional hypothesis suggests that the more semantically similar two words are, the more distributionally similar they will be in turn, and thus the more that they will tend to occur in similar linguistic contexts.
Whether or not this suggestion holds remains unclear, and it has significant implications both for the data-sparsity problem in computational modeling[9] and for the question of how children are able to learn language so rapidly given relatively impoverished input (also known as the problem of the poverty of the stimulus).
Distributional semantics favor the use of linear algebra as a computational tool and representational framework. The basic approach is to collect distributional information in high-dimensional vectors, and to define distributional/semantic similarity in terms of vector similarity.[10]Different kinds of similarities can be extracted depending on which type of distributional information is used to collect the vectors:topicalsimilarities can be extracted by populating the vectors with information on which text regions the linguistic items occur in;paradigmaticsimilarities can be extracted by populating the vectors with information on which other linguistic items the items co-occur with. Note that the latter type of vectors can also be used to extractsyntagmaticsimilarities by looking at the individual vector components.
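A toy end-to-end illustration of this vector-space approach (illustrative Python with a tiny made-up corpus, not from the source): build word-by-context co-occurrence vectors and compare them with cosine similarity.

```python
import numpy as np

corpus = [
    "tigers love rabbits",
    "lions love rabbits",
    "tigers chase rabbits",
    "lions chase deer",
]
tokenized = [sentence.split() for sentence in corpus]
vocab = sorted({w for sent in tokenized for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts within a sentence serve as the distributional context.
M = np.zeros((len(vocab), len(vocab)))
for sent in tokenized:
    for w in sent:
        for c in sent:
            if w != c:
                M[index[w], index[c]] += 1

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words with similar distributions ("tigers", "lions") end up with similar vectors.
print(cosine(M[index["tigers"]], M[index["lions"]]))    # high
print(cosine(M[index["tigers"]], M[index["rabbits"]]))  # lower
```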
The basic idea of a correlation between distributional and semantic similarity can be operationalized in many different ways. There is a rich variety of computational models implementing distributional semantics, includinglatent semantic analysis(LSA),[11][12]Hyperspace Analogue to Language(HAL), syntax- or dependency-based models,[13]random indexing,semantic folding[14]and various variants of thetopic model.[15]
Distributional semantic models differ primarily with respect to the following parameters:
Distributional semantic models that use linguistic items as context have also been referred to asword space, or vector space models.[17][18]
While distributional semantics typically has been applied to lexical items—words and multi-word terms—with considerable success, not least due to its applicability as an input layer for neurally inspired deep learning models,lexical semantics, i.e. the meaning of words, will only carry part of the semantics of an entire utterance. The meaning of a clause, e.g."Tigers love rabbits.", can only partially be understood from examining the meaning of the three lexical items it consists of. Distributional semantics can straightforwardly be extended to cover larger linguistic item such as constructions, with and without non-instantiated items, but some of the base assumptions of the model need to be adjusted somewhat.Construction grammarand its formulation of the lexical-syntactic continuum offers one approach for including more elaborate constructions in a distributional semantic model and some experiments have been implemented using the Random Indexing approach.[19]
Compositional distributional semanticmodels extend distributional semantic models by explicit semantic functions that use syntactically based rules to combine the semantics of participating lexical units into acompositional modelto characterize the semantics of entire phrases or sentences. This work was originally proposed by Stephen Clark,Bob Coecke, andMehrnoosh SadrzadehofOxford Universityin their 2008 paper, "A Compositional Distributional Model of Meaning".[20]Different approaches to composition have been explored—including neural models—and are under discussion at established workshops such asSemEval.[21]
Distributional semantic models have been applied successfully to the following tasks:
|
https://en.wikipedia.org/wiki/Distributional_semantics
|
Text normalizationis the process of transformingtextinto a singlecanonical formthat it might not have had before. Normalizing text before storing or processing it allows forseparation of concerns, since input is guaranteed to be consistent before operations are performed on it. Text normalization requires being aware of what type of text is to be normalized and how it is to be processed afterwards; there is no all-purpose normalization procedure.[1]
Text normalization is frequently used when convertingtext to speech.Numbers,dates,acronyms, andabbreviationsare non-standard "words" that need to be pronounced differently depending on context.[2]For example:
Text can also be normalized for storing and searching in a database. For instance, if a search for "resume" is to match the word "résumé," then the text would be normalized by removingdiacritical marks; and if "john" is to match "John", the text would be converted to a singlecase. To prepare text for searching, it might also bestemmed(e.g. converting "flew" and "flying" both into "fly"),canonicalized(e.g. consistently usingAmerican or British English spelling), or havestop wordsremoved.
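This kind of search-oriented normalization might be sketched as follows (illustrative Python; the exact steps depend on the application):

```python
import re
import unicodedata

def normalize_for_search(text: str) -> str:
    # Decompose accented characters and drop the combining marks ("résumé" -> "resume").
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    # Fold case ("John" -> "john") and collapse runs of whitespace.
    text = text.casefold()
    text = re.sub(r"\s+", " ", text).strip()
    return text

print(normalize_for_search("  Résumé   for JOHN "))   # "resume for john"
```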
For simple, context-independent normalization, such as removing non-alphanumeric characters or diacritical marks, regular expressions would suffice. For example, the sed script sed -E 's/\s+/ /g' inputfile would normalize runs of whitespace characters into a single space (using GNU sed's extended regular expressions). More complex normalization requires correspondingly complicated algorithms, including domain knowledge of the language and vocabulary being normalized. Among other approaches, text normalization has been modeled as a problem of tokenizing and tagging streams of text[5] and as a special case of machine translation.[6][7]
In the field of textual scholarship and the editing of historic texts, the term "normalization" implies a degree of modernization and standardization – for example in the extension of scribal abbreviations and the transliteration of the archaic glyphs typically found in manuscript and early printed sources. A normalized edition is therefore distinguished from a diplomatic edition (or semi-diplomatic edition), in which some attempt is made to preserve these features. The aim is to strike an appropriate balance between, on the one hand, rigorous fidelity to the source text (including, for example, the preservation of enigmatic and ambiguous elements); and, on the other, producing a new text that will be comprehensible and accessible to the modern reader. The extent of normalization is therefore at the discretion of the editor, and will vary. Some editors, for example, choose to modernize archaic spellings and punctuation, but others do not.[8]
|
https://en.wikipedia.org/wiki/Text_normalization
|
The study of interdependent networks is a subfield of network science dealing with phenomena caused by the interactions between complex networks. Though there may be a wide variety of interactions between networks, dependency focuses on the scenario in which the nodes in one network require support from nodes in another network.[1]
In nature, networks rarely appear in isolation. They are typically elements in larger systems and can have non-trivial effects on one another. For example, infrastructure networks exhibit interdependency to a large degree. The power stations which form the nodes of the power grid require fuel delivered via a network of roads or pipes and are also controlled via the nodes of a communications network. Though the transportation network does not depend on the power network to function, the communications network does. Thus the deactivation of a critical number of nodes in either the power network or the communications network can lead to a series of cascading failures across the system with potentially catastrophic repercussions. If the two networks were treated in isolation, this important feedback effect would not be seen and the robustness of the networks would be greatly overestimated.
Links in a standard network represent connectivity, providing information about how one node can be reached from another. Dependency links represent a need for support from one node to another. This relationship is often, though not necessarily, mutual and thus the links can be directed or undirected. Crucially, a node loses its ability to function as soon as the node it is dependent on ceases to function, while it may not be so severely affected by losing a node it is connected to.
In statistical physics, phase transitions can only appear in many-particle systems. Though phase transitions are well known in network science, in single networks they are second order only. With the introduction of internetwork dependency, first-order transitions emerge. This is a new phenomenon and one with profound implications for systems engineering. Whereas system dissolution takes place after steady (if steep) degradation for second-order transitions, the existence of a first-order transition implies that the system can go from a relatively healthy state to complete collapse with no advance warning.
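A rough simulation sketch of this cascading-failure mechanism follows, in the spirit of mutual-percolation models of interdependent networks; the network sizes, mean degree, and one-to-one dependency mapping are illustrative assumptions rather than parameters of any specific published study:

```python
# Minimal sketch: cascading failures in two interdependent random networks.
# Node i in network A depends on node i in network B and vice versa; a node
# functions only if it lies in the giant component of its own network and
# its dependency partner still functions.
import random
import networkx as nx

def giant(G, alive):
    """Members of `alive` in the largest connected component of G restricted to `alive`."""
    H = G.subgraph(alive)
    if H.number_of_nodes() == 0:
        return set()
    return set(max(nx.connected_components(H), key=len))

def surviving_fraction(n=2000, mean_degree=4.0, p=0.7, seed=0):
    rng = random.Random(seed)
    A = nx.gnp_random_graph(n, mean_degree / n, seed=seed)
    B = nx.gnp_random_graph(n, mean_degree / n, seed=seed + 1)
    alive_A = set(rng.sample(range(n), int(p * n)))   # random initial failure in A
    alive_B = set(range(n))
    while True:
        new_A = giant(A, alive_A) & alive_B           # need own giant component and a live partner
        new_B = giant(B, alive_B) & new_A
        if new_A == alive_A and new_B == alive_B:
            return len(new_A) / n
        alive_A, alive_B = new_A, new_B

# Sweeping p shows an abrupt jump in the surviving fraction, in contrast to
# the gradual (second-order) behaviour of a single network.
for p in (0.55, 0.60, 0.65, 0.70):
    print(p, surviving_fraction(p=p))
```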
|
https://en.wikipedia.org/wiki/Interdependent_networks
|
In statistics and, in particular, in the fitting of linear or logistic regression models, the elastic net is a regularized regression method that linearly combines the L1 and L2 penalties of the lasso and ridge methods.
Nevertheless, elastic net regularization is typically more accurate than both methods with regard to reconstruction.[1]
The elastic net method overcomes the limitations of the LASSO (least absolute shrinkage and selection operator) method, which uses a penalty function based on the L1 norm of the coefficients, ‖β‖1=∑j|βj|{\displaystyle \|\beta \|_{1}=\sum _{j}|\beta _{j}|}.
Use of this penalty function has several limitations.[2] For example, in the "large p, small n" case (high-dimensional data with few examples), the LASSO selects at most n variables before it saturates. Also, if there is a group of highly correlated variables, the LASSO tends to select one variable from the group and ignore the others. To overcome these limitations, the elastic net adds a quadratic part (‖β‖2{\displaystyle \|\beta \|^{2}}) to the penalty, which when used alone is ridge regression (also known as Tikhonov regularization).
The estimates from the elastic net method are defined by β^=argminβ(‖y−Xβ‖2+λ2‖β‖2+λ1‖β‖1){\displaystyle {\hat {\beta }}={\underset {\beta }{\operatorname {argmin} }}\left(\|y-X\beta \|^{2}+\lambda _{2}\|\beta \|^{2}+\lambda _{1}\|\beta \|_{1}\right)}.
The quadratic penalty term makes the loss function strongly convex, and it therefore has a unique minimum. The elastic net method includes the LASSO and ridge regression: in other words, each of them is a special case where λ1=λ,λ2=0{\displaystyle \lambda _{1}=\lambda ,\lambda _{2}=0} or λ1=0,λ2=λ{\displaystyle \lambda _{1}=0,\lambda _{2}=\lambda }. Meanwhile, the naive version of the elastic net method finds an estimator in a two-stage procedure: first, for each fixed λ2{\displaystyle \lambda _{2}}, it finds the ridge regression coefficients, and then does a LASSO-type shrinkage. This kind of estimation incurs a double amount of shrinkage, which leads to increased bias and poor predictions. To improve the prediction performance, the coefficients of the naive version of the elastic net are sometimes rescaled by multiplying the estimated coefficients by (1+λ2){\displaystyle (1+\lambda _{2})}.[2]
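For concreteness, a small sketch using scikit-learn follows. Note that scikit-learn parametrizes the combined penalty with alpha (overall strength) and l1_ratio (the L1/L2 mix) rather than with λ1 and λ2 directly, and the synthetic data are purely illustrative:

```python
# Minimal sketch: fitting an elastic net on synthetic "large p, small n" data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=50, n_features=200, n_informative=10,
                       noise=1.0, random_state=0)

model = ElasticNet(alpha=0.5, l1_ratio=0.5)  # equal mix of L1 and L2 penalties
model.fit(X, y)

print("non-zero coefficients:", int(np.sum(model.coef_ != 0)))
```

With a pure LASSO (l1_ratio=1.0) the number of selected variables cannot exceed the number of samples, whereas the quadratic part of the elastic net penalty relaxes that limit and tends to keep groups of correlated predictors together.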
The elastic net method has been applied in a variety of domains; one notable connection, described below, is to support vector machines.
It was proven in 2014 that the elastic net can be reduced to the linear support vector machine.[7] A similar reduction was previously proven for the LASSO in 2014.[8] The authors showed that for every instance of the elastic net, an artificial binary classification problem can be constructed such that the hyperplane solution of a linear support vector machine (SVM) is identical to the solution β{\displaystyle \beta } (after re-scaling). The reduction immediately enables the use of highly optimized SVM solvers for elastic net problems. It also enables the use of GPU acceleration, which is often already used for large-scale SVM solvers.[9] The reduction is a simple transformation of the original data and regularization constants
into new artificial data instances and a regularization constant that specify a binary classification problem and the SVM regularization constant.
Here, y2{\displaystyle y_{2}} consists of binary labels −1,1{\displaystyle {-1,1}}. When 2p>n{\displaystyle 2p>n}, it is typically faster to solve the linear SVM in the primal, whereas otherwise the dual formulation is faster.
Some authors have referred to the transformation as Support Vector Elastic Net (SVEN) and have provided MATLAB pseudo-code implementing it.
|
https://en.wikipedia.org/wiki/Elastic_net_regularization
|
Justification (also called epistemic justification) is a property of beliefs that fulfill certain norms about what a person should believe.[1][2] Epistemologists often identify justification as a component of knowledge distinguishing it from mere true opinion.[3] They study the reasons why someone holds a belief.[4] Epistemologists are concerned with various features of belief, which include the ideas of warrant (a proper justification for holding a belief), knowledge, rationality, and probability, among others.
Debates surrounding epistemic justification often involve the structure of justification, including whether there are foundational justified beliefs or whether mere coherence is sufficient for a system of beliefs to qualify as justified. Another major subject of debate is the sources of justification, which might include perceptual experience (the evidence of the senses), reason, and authoritative testimony, among others.
"Justification" involves the reasons why someone holds abeliefthat oneshouldhold based on one's current evidence.[4]Justification is a property of beliefs insofar as they are held blamelessly. In other words, a justified belief is a belief that a person is entitled to hold.
Many philosophers from Plato onward have treated "justified true belief" (JTB) as constituting knowledge. It is particularly associated with a theory discussed in his dialogues Meno and Theaetetus. While in fact Plato seems to disavow justified true belief as constituting knowledge at the end of Theaetetus, the claim that Plato unquestioningly accepted this view of knowledge stuck until the proposal of the Gettier problem.[4]
The subject of justification has played a major role in the value of knowledge as "justified true belief".[citation needed] Some contemporary epistemologists, such as Jonathan Kvanvig, assert that justification isn't necessary in getting to the truth and avoiding errors. Kvanvig attempts to show that knowledge is no more valuable than true belief, and in the process dismisses the necessity of justification on the grounds that justification is not connected to the truth.[citation needed]
William P. Alston identifies two conceptions of justification.[5]: 15–16 One conception is "deontological" justification, which holds that justification evaluates the obligation and responsibility of a person having only true beliefs. This conception implies, for instance, that a person who has made his best effort but is incapable of concluding the correct belief from his evidence is still justified. The deontological conception of justification corresponds to epistemic internalism. Another conception is "truth-conducive" justification, which holds that justification is based on having sufficient evidence or reasons that entails that the belief is at least likely to be true. The truth-conducive conception of justification corresponds to epistemic externalism.
There are several different views as to what entails justification, mostly focusing on the question "How are beliefs justified?" Different theories of justification require different conditions before a belief can be considered justified. Theories of justification generally include other aspects of epistemology, such as defining knowledge.
Several notable theories of justification have been proposed.
Robert Fogelin claims to detect a suspicious resemblance between the theories of justification and Agrippa's five modes leading to the suspension of belief. He concludes that the modern proponents have made no significant progress in responding to the ancient modes of Pyrrhonian skepticism.[6]
William P. Alston criticizes the very idea of a theory of justification. He claims: "There isn't any unique, epistemically crucial property of beliefs picked out by 'justified'. Epistemologists who suppose the contrary have been chasing a will-o'-the-wisp. What has really been happening is this. Different epistemologists have been emphasizing, concentrating on, "pushing" different epistemic desiderata, different features of belief that are positively valuable from the standpoint of the aims of cognition."[5]: 22
|
https://en.wikipedia.org/wiki/Theory_of_justification
|
An apostolic nunciature is a top-level diplomatic mission of the Holy See that is equivalent to an embassy. However, it neither issues visas nor has consulates.
The head of the apostolic nunciature is called a nuncio, an ecclesiastical diplomatic title. A papal nuncio (officially known as an apostolic nuncio) is a permanent diplomatic representative (head of diplomatic mission) of the Holy See to a state or to one of two international intergovernmental organizations, the European Union or ASEAN, having the rank of an ambassador extraordinary and plenipotentiary, and the ecclesiastical rank of titular archbishop. Papal representatives to other intergovernmental organizations are known as "permanent observers" or "delegates".
In several countries that have diplomatic relations with the Holy See, the apostolic nuncio is ipso facto the dean of the diplomatic corps. The nuncio is, in such a country, first in the order of precedence among all the diplomats accredited to the country, and he speaks for the diplomatic corps in matters of diplomatic privilege and protocol. Most countries that concede priority to the nuncio are officially Catholic, but some are not.
In addition, the nuncio serves as the liaison between the Holy See and the Church in that particular nation. The nuncio has an important role in the selection of bishops.
The pope accredits diplomats to the following states and other subjects of international law (list as of January 2010):[2]
Algeria, Angola, Benin, Burkina Faso, Burundi, Botswana, Cameroon, Cape Verde, Central African Republic, Chad, Congo (Republic of), Congo (Democratic Republic of), Côte d'Ivoire, Djibouti, Egypt, Equatorial Guinea, Eritrea, Ethiopia, Gabon, Gambia, Ghana, Guinea, Guinea-Bissau, Kenya, Lesotho, Liberia, Libya, Madagascar, Malawi, Mali, Mauritius, Morocco, Mozambique, Namibia, Niger, Nigeria, Rwanda, São Tomé and Príncipe, Sénégal, Seychelles, Sierra Leone, South Africa, Sudan, Swaziland, Tanzania, Togo, Tunisia, Uganda, Zambia, Zimbabwe
Antigua and Barbuda, Argentina, Bahamas, Barbados, Belize, Bolivia, Brazil, Canada, Chile, Colombia, Costa Rica, Cuba, Dominica, Dominican Republic, Ecuador, El Salvador, Grenada, Guatemala, Guyana, Haiti, Honduras, Jamaica, México, Nicaragua, Panama, Paraguay, Peru, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and Grenadines, Suriname, Trinidad and Tobago, United States of America, Uruguay, Venezuela
Bahrain, Bangladesh, Cambodia, Republic of China (Taiwan), East Timor, India, Indonesia, Iran, Iraq, Israel, Japan, Jordan, Kazakhstan, Korea[which?], Kuwait, Kyrgyzstan, Lebanon, Malaysia, Mongolia, Nepal, Pakistan, Philippines, Qatar, Singapore, Sri Lanka, Syria, Tajikistan, Thailand, Turkmenistan, United Arab Emirates, Uzbekistan, Vietnam (Resident), Yemen.
Albania, Andorra, Armenia, Austria, Azerbaijan, Belarus, Belgium, Bosnia-Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Estonia, European Union, France, Georgia, Germany, Great Britain, Greece, Hungary, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Macedonia, Malta, Moldova, Monaco, Montenegro, The Netherlands, Nordic Countries, Poland, Portugal, Romania, Russia, San Marino, Serbia, Slovakia, Slovenia, Spain, Switzerland, Turkey, Ukraine
Australia, the Cook Islands, Fiji, Guam, Kiribati, Marshall Islands, Micronesia (Federated States of), Nauru, New Zealand, Palau, Papua New Guinea, Samoa, Solomon Islands, Tonga, Vanuatu.
An apostolic delegate may be sent to act as a liaison between the Catholic Church and a country with which the Holy See has no diplomatic ties; the delegate is not accredited to the government of the country. Apostolic delegates have no formal diplomatic status, though in some countries they have some diplomatic privileges.
|
https://en.wikipedia.org/wiki/Apostolic_nunciature
|
Electronic serial numbers (ESNs) were created by the U.S. Federal Communications Commission (FCC) to uniquely identify mobile devices, from the days of AMPS in the United States starting in the early 1980s. The administrative role was taken over by the Telecommunications Industry Association in 1997 and is still maintained by them. ESNs are currently mainly used with CDMA phones (and were previously used by AMPS and TDMA phones), compared to International Mobile Equipment Identity (IMEI) numbers used by all GSM phones.[1]
The first eight bits of the ESN were originally the manufacturer code, leaving 24 bits for the manufacturer to assign up to 16,777,215 codes to mobiles. To allow more than 256 manufacturers to be identified, the manufacturer code was extended to 14 bits, leaving 18 bits for the manufacturer to assign up to 262,144 codes. Manufacturer code 0x80 is reserved from assignment and is used instead as an eight-bit prefix for pseudo-ESNs (pESN). The remaining 24 bits are the least significant bits of the SHA-1 hash of a mobile equipment identifier (MEID). Pseudo-ESNs are not guaranteed to be unique (the MEID is the unique identifier if the phone has a pseudo-ESN).
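As a rough illustration of the pseudo-ESN construction just described, the sketch below hashes an MEID with SHA-1 and keeps the 24 low-order bits; the packing of the MEID into bytes and the sample MEID value are assumptions here, not the normative 3GPP2 procedure:

```python
# Minimal sketch: deriving a pseudo-ESN (0x80 prefix + low 24 bits of SHA-1(MEID)).
import hashlib

def pseudo_esn(meid_hex: str) -> str:
    meid_bytes = bytes.fromhex(meid_hex)        # 14 hex digits -> 7 bytes (assumed packing)
    digest = hashlib.sha1(meid_bytes).digest()
    low24 = int.from_bytes(digest[-3:], "big")  # 24 least significant bits of the hash
    return f"80{low24:06X}"                     # eight-bit 0x80 prefix + 24 hash bits

print(pseudo_esn("A0000000002329"))  # hypothetical MEID, for illustration only
```

Because only 24 bits of the hash survive, two different MEIDs can map to the same pseudo-ESN, which is why pseudo-ESNs are not guaranteed to be unique.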
ESNs are often represented as either 11-digit decimal numbers or 8-digit hexadecimal numbers. For the decimal format the first three digits are the decimal representation of the first eight bits (between 000 and 255 inclusive) and the next eight digits are derived from the remaining 24 bits and will be between 00000000 and 16777215 inclusive. The decimal format of pseudo-ESNs will therefore begin with 128. The decimal format separately displays eight-bit manufacturer codes in the first three digits, but 14-bit codes are not displayed as separate digits. The hexadecimal format displays an ESN as eight digits and also does not separately display 14-bit manufacturer codes, which occupy 3.5 hexadecimal digits.
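A small sketch of the hexadecimal-to-decimal conversion described above (the sample ESN is made up):

```python
# Minimal sketch: 8-digit hexadecimal ESN -> 11-digit decimal ESN.
def esn_hex_to_decimal(esn_hex: str) -> str:
    value = int(esn_hex, 16)
    manufacturer = (value >> 24) & 0xFF  # first eight bits -> 3 decimal digits
    serial = value & 0xFFFFFF            # remaining 24 bits -> 8 decimal digits
    return f"{manufacturer:03d}{serial:08d}"

print(esn_hex_to_decimal("80ABCDEF"))  # -> "12811259375", a pseudo-ESN (starts with 128)
```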
As ESNs have essentially run out, a new serial number format, MEID, was created by 3GPP2 and was first implemented by Verizon in 2006. MEIDs are 56 bits long, the same length as the IMEI and, in fact, MEID was created to be a superset of IMEI. The main difference between MEID and IMEI is that the MEID allows hexadecimal digits while IMEI allows only decimal digits – "IMEI shall consist of decimal digits (0 through 9) only".[2]
The last of the previously unused ESN codes were allocated in November 2008.[3] Applications for assignments were accepted until June 30, 2010 using reclaimed ESN codes, those previously assigned to AMPS or TDMA phones and therefore not present on CDMA2000 systems. Reclaimed codes have also been used for UIMID assignments. Codes are assigned according to industry guidelines.[4]
Although ESN assignments may still occur in the future based on applications received before June 30, 2010, there have not been any assignments made since December 31, 2010.
|
https://en.wikipedia.org/wiki/Electronic_Serial_Number
|
In mathematics, a classification theorem answers the classification problem: "What are the objects of a given type, up to some equivalence?". It gives a non-redundant enumeration: each object is equivalent to exactly one class in the enumeration.
A few related issues arise in practice, such as finding complete invariants that decide the equivalence and canonical forms that represent each class.
There exist many classification theorems in mathematics, across essentially every area of the subject.
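As a standard illustration of the idea (chosen here for familiarity, not drawn from this article), finite-dimensional vector spaces over a fixed field are classified up to isomorphism by a single complete invariant, their dimension:

```latex
% Dimension is a complete invariant: two finite-dimensional vector spaces
% over a field k are isomorphic exactly when their dimensions agree,
\[
  V \cong W \quad\Longleftrightarrow\quad \dim_{k} V = \dim_{k} W ,
\]
% so every such space is equivalent to exactly one member of the enumeration
\[
  \{\, k^{n} : n = 0, 1, 2, \dots \,\}.
\]
```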
|
https://en.wikipedia.org/wiki/Classification_theorem
|
In most Unix and Unix-like operating systems, the ps (process status) program displays the currently running processes. The related Unix utility top provides a real-time view of the running processes.
KolibriOS includes an implementation of the ps command.[1] The ps command has also been ported to the IBM i operating system.[2] In Windows PowerShell, ps is a predefined command alias for the Get-Process cmdlet, which essentially serves the same purpose.
Users can pipeline ps with other commands, such as less, to view the process status output one page at a time, e.g. ps -e | less.
Users can also use the ps command in conjunction with the grep command (see the pgrep and pkill commands) to find information about a single process, such as its id; for example, ps -e | grep firefox lists the processes whose command name contains "firefox".
The use of pgrep simplifies the syntax and avoids potential race conditions; for example, pgrep firefox prints the matching process ids directly.
To see every process running as root in user format, one can run, for example, ps -U root -u root u.
ps has many options. On operating systems that support the SUS and POSIX standards, ps commonly runs with the options -ef, where "-e" selects every process and "-f" chooses the "full" output format. Another common option on these systems is -l, which specifies the "long" output format.
Most systems derived from BSD fail to accept the SUS and POSIX standard options because of historical conflicts. (For example, the "e" or "-e" option will display environment variables.) On such systems, ps commonly runs with the non-standard options aux, where "a" lists all processes on a terminal, including those of other users, "x" lists all processes without controlling terminals and "u" adds a column for the controlling user for each process. For maximum compatibility, there is no "-" in front of the "aux". "ps auxww" provides complete information about the process, including all parameters.
|
https://en.wikipedia.org/wiki/Ps_(Unix)
|