In metadata, the term data element is an atomic unit of data that has precise meaning or precise semantics. A data element has:

Data element usage can be discovered by inspection of software applications or application data files through a process of manual or automated application discovery and understanding. Once data elements are discovered, they can be registered in a metadata registry.

In telecommunications, the term data element has the following components:

In the areas of databases and data systems more generally, a data element is a concept forming part of a data model. As an element of data representation, a collection of data elements forms a data structure. [1]

In practice, data elements (fields, columns, attributes, etc.) are sometimes "overloaded", meaning a given data element will have multiple potential meanings. While a known bad practice, overloading is nevertheless a very real factor or barrier to understanding what a system is doing.
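As an illustration of registering discovered data elements, here is a minimal Python sketch; the class fields and the in-memory registry are assumptions invented for this example, loosely echoing the kind of attributes a metadata registry records, not taken from any particular standard:

```python
from dataclasses import dataclass

# Illustrative sketch only: the fields and the in-memory registry are
# assumptions for this example, not taken from any particular standard.
@dataclass(frozen=True)
class DataElement:
    name: str        # e.g. "birth_date"
    definition: str  # the precise meaning / semantics
    datatype: str    # the representation, e.g. "date (ISO 8601)"

registry: dict[str, DataElement] = {}

def register(elem: DataElement) -> None:
    """Record a discovered data element in the metadata registry."""
    registry[elem.name] = elem

register(DataElement("birth_date",
                     "Date on which a person was born",
                     "date (ISO 8601)"))
```

A real registry would add identifiers, versioning, and allowed-value lists; the point here is only that each element pairs a name with a precise definition and representation.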
https://en.wikipedia.org/wiki/Data_element
In the mathematical field of Fourier analysis, the conjugate Fourier series arises by realizing the Fourier series formally as the boundary values of the real part of a holomorphic function on the unit disc. The imaginary part of that function then defines the conjugate series. Zygmund (1968) studied the delicate questions of convergence of this series, and its relationship with the Hilbert transform.

In detail, consider a trigonometric series of the form

f(\theta) = \tfrac{1}{2}a_{0} + \sum_{n=1}^{\infty} (a_{n}\cos n\theta + b_{n}\sin n\theta)

in which the coefficients a_{n} and b_{n} are real numbers. This series is the real part of the power series

F(z) = \tfrac{1}{2}a_{0} + \sum_{n=1}^{\infty} (a_{n} - ib_{n})z^{n}

along the unit circle with z = e^{i\theta}. The imaginary part of F(z) is called the conjugate series of f, and is denoted

\tilde{f}(\theta) = \sum_{n=1}^{\infty} (a_{n}\sin n\theta - b_{n}\cos n\theta).
https://en.wikipedia.org/wiki/Conjugate_Fourier_series
In mathematics, a quotient category is a category obtained from another category by identifying sets of morphisms. Formally, it is a quotient object in the category of (locally small) categories, analogous to a quotient group or quotient space, but in the categorical setting.

Let C be a category. A congruence relation R on C is given by: for each pair of objects X, Y in C, an equivalence relation R_{X,Y} on Hom(X, Y), such that the equivalence relations respect composition of morphisms. That is, if f₁ and f₂ are related in Hom(X, Y) and g₁ and g₂ are related in Hom(Y, Z), then g₁f₁ and g₂f₂ are related in Hom(X, Z).

Given a congruence relation R on C, we can define the quotient category C/R as the category whose objects are those of C and whose morphisms are equivalence classes of morphisms in C. Composition of morphisms in C/R is well-defined since R is a congruence relation.

There is a natural quotient functor from C to C/R which sends each morphism to its equivalence class. This functor is bijective on objects and surjective on hom-sets (i.e. it is a full functor).

Every functor F : C → D determines a congruence on C by saying f ~ g iff F(f) = F(g). The functor F then factors through the quotient functor C → C/~ in a unique manner. This may be regarded as the "first isomorphism theorem" for categories.

If C is an additive category and we require the congruence relation ~ on C to be additive (i.e. if f₁, f₂, g₁ and g₂ are morphisms from X to Y with f₁ ~ f₂ and g₁ ~ g₂, then f₁ + g₁ ~ f₂ + g₂), then the quotient category C/~ will also be additive, and the quotient functor C → C/~ will be an additive functor. The concept of an additive congruence relation is equivalent to the concept of a two-sided ideal of morphisms: for any two objects X and Y we are given an additive subgroup I(X, Y) of Hom_C(X, Y) such that for all f ∈ I(X, Y), g ∈ Hom_C(Y, Z) and h ∈ Hom_C(W, X), we have gf ∈ I(X, Z) and fh ∈ I(W, Y). Two morphisms in Hom_C(X, Y) are congruent iff their difference is in I(X, Y).

Every unital ring may be viewed as an additive category with a single object, and the quotient of additive categories defined above coincides in this case with the notion of a quotient ring modulo a two-sided ideal.

The localization of a category introduces new morphisms to turn several of the original category's morphisms into isomorphisms. This tends to increase the number of morphisms between objects, rather than decrease it as in the case of quotient categories. But in both constructions it often happens that two objects become isomorphic that weren't isomorphic in the original category.

The Serre quotient of an abelian category by a Serre subcategory is a new abelian category which is similar to a quotient category but also in many cases has the character of a localization of the category.
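The ring example above can be spelled out in a short worked instance (a standard observation, with notation chosen here for illustration):

```latex
% A unital ring R viewed as an additive category \mathcal{C}_R with one object *:
%   \operatorname{Hom}(*,*) = R, composition given by ring multiplication.
% A two-sided ideal I \subseteq R yields the ideal of morphisms I(*,*) = I
% and the additive congruence f \sim g \iff f - g \in I, so that
\mathcal{C}_R/{\sim} \;=\; \mathcal{C}_{R/I},
\qquad
\text{e.g. } R = \mathbb{Z},\ I = n\mathbb{Z}
\;\Rightarrow\;
\operatorname{Hom}(*,*) = \mathbb{Z}/n\mathbb{Z}.
```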
https://en.wikipedia.org/wiki/Quotient_category
"Talking past each other" is an English phrase describing the situation where two or more people talk about different subjects while believing that they are talking about the same thing. [1] David Horton writes that when characters in fiction talk past each other, the effect is to expose "an unbridgeable gulf between their respective perceptions and intentions. The result is an exchange, but never an interchange, of words in fragmented and cramped utterances whose subtext often reveals more than their surface meaning." [2]

The phrase is used in widely varying contexts. For example, in 1917, Albert Einstein and David Hilbert had dawn-to-dusk discussions of physics, and they continued their debate in writing, although Felix Klein records that they "talked past each other, as happens not infrequently between simultaneously producing mathematicians." [3]
https://en.wikipedia.org/wiki/Talking_past_each_other
Game Description Language (GDL) is a specialized logic programming language designed by Michael Genesereth. The goal of GDL is to allow the development of AI agents capable of general game playing. It is part of the General Game Playing Project at Stanford University.

GDL is a tool for expressing the intricacies of game rules and dynamics in a form comprehensible to AI systems, through a combination of logic-based constructs and declarative principles. In practice, GDL is used in general game playing competitions and research to specify the rules of games that AI agents are expected to play. AI developers and researchers use GDL to create algorithms that can comprehend and engage with games based on their rule descriptions alone, which makes it possible to build adaptable agents capable of competing in diverse gaming scenarios.

Quoted in an article in New Scientist, Genesereth pointed out that although Deep Blue can play chess at a grandmaster level, it is incapable of playing checkers at all, because it is a specialized game player. [1] Both chess and checkers can be described in GDL. This enables general game players to be built that can play both of these games, and any other game that can be described using GDL.

GDL is a variant of Datalog, and the syntax is largely the same. It is usually given in prefix notation. Variables begin with "?". [2] The following is the list of keywords in GDL, along with brief descriptions of their functions:

A game description in GDL provides complete rules for each of the following elements of a game (examples are typically drawn from a GDL description of the two-player game tic-tac-toe):
- facts that define the roles in a game;
- rules that entail all facts about the initial game state;
- rules that describe each move by the conditions on the current position under which it can be taken by a player;
- rules that describe all facts about the next state relative to the current state and the moves taken by the players;
- rules that describe the conditions under which the current state is a terminal one;
- the goal values for each player in a terminal state.

With GDL, one can describe finite games with an arbitrary number of players. However, GDL cannot describe games that contain an element of chance (for example, rolling dice) or games where players have incomplete information about the current state of the game (for example, in many card games the opponents' cards are not visible). GDL-II, the Game Description Language for Incomplete Information Games, extends GDL by two keywords that allow for the description of elements of chance and incomplete information; [3] an example is a GDL-II description of the card game Texas hold 'em. Michael Thielscher also created a further extension, GDL-III, a general game description language with imperfect information and introspection, that supports the specification of epistemic games, ones characterised by rules that depend on the knowledge of players. [4]

In classical game theory, games can be formalised in extensive and normal forms. For cooperative game theory, games are represented using characteristic functions. Some subclasses of games allow special representations in smaller sizes, also known as succinct games. Newer formalisms and languages for the representation of some subclasses of games, and representations adjusted to the needs of interdisciplinary research, can be summarized in a comparison table. [5] Some of these alternative representations also encode time-related aspects.

A 2016 paper "describes a multilevel algorithm compiling a general game description in GDL into an optimized reasoner in a low level language". [19] A 2017 paper uses GDL to model the process of mediating a resolution to a dispute between two parties, and presented an algorithm that uses available information efficiently to do so. [20]
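The six rule elements above can be illustrated with a sketch of the widely used tic-tac-toe description. This is a hedged, abbreviated reconstruction rather than a verbatim specification; auxiliary relations such as line and open are assumed and left undefined here:

```lisp
;; Hedged, abbreviated reconstruction of the well-known tic-tac-toe
;; description; auxiliary relations such as line and open are assumed
;; and left undefined here.

(role xplayer)  (role oplayer)                      ; roles

(init (cell 1 1 b))  (init (cell 2 2 b))            ; initial state (all nine
(init (cell 3 3 b))  (init (control xplayer))       ; cells blank; abbreviated)

(<= (legal ?p (mark ?x ?y))                         ; legal moves
    (true (cell ?x ?y b))
    (true (control ?p)))
(<= (legal xplayer noop) (true (control oplayer)))
(<= (legal oplayer noop) (true (control xplayer)))

(<= (next (cell ?x ?y x)) (does xplayer (mark ?x ?y)))  ; state update
(<= (next (cell ?x ?y o)) (does oplayer (mark ?x ?y)))
(<= (next (control oplayer)) (true (control xplayer)))
(<= (next (control xplayer)) (true (control oplayer)))

(<= terminal (line x))                              ; termination
(<= terminal (line o))
(<= terminal (not open))

(<= (goal xplayer 100) (line x))                    ; goal values
(<= (goal oplayer 100) (line o))
```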
https://en.wikipedia.org/wiki/Game_Description_Language
This is a list of topics around Boolean algebra and propositional logic.
https://en.wikipedia.org/wiki/List_of_Boolean_algebra_topics
In mathematical analysis, a domain or region is a non-empty, connected, and open set in a topological space. In particular, it is any non-empty connected open subset of the real coordinate space Rⁿ or the complex coordinate space Cⁿ. A connected open subset of coordinate space is frequently used for the domain of a function. [1]

The basic idea of a connected subset of a space dates from the 19th century, but precise definitions vary slightly from generation to generation, author to author, and edition to edition, as concepts developed and terms were translated between German, French, and English works. In English, some authors use the term domain, [2] some use the term region, [3] some use both terms interchangeably, [4] and some define the two terms slightly differently; [5] some avoid ambiguity by sticking with a phrase such as non-empty connected open subset. [6]

One common convention is to define a domain as a connected open set but a region as the union of a domain with none, some, or all of its limit points. [7] A closed region or closed domain is the union of a domain and all of its limit points.

Various degrees of smoothness of the boundary of the domain are required for various properties of functions defined on the domain to hold, such as integral theorems (Green's theorem, Stokes' theorem), properties of Sobolev spaces, and for defining measures on the boundary and spaces of traces (generalized functions defined on the boundary). Commonly considered types of domains are domains with continuous boundary, Lipschitz boundary, C¹ boundary, and so forth.

A bounded domain is a domain that is bounded, i.e., contained in some ball. A bounded region is defined similarly. An exterior domain or external domain is a domain whose complement is bounded; sometimes smoothness conditions are imposed on its boundary.

In complex analysis, a complex domain (or simply domain) is any connected open subset of the complex plane C. For example, the entire complex plane is a domain, as is the open unit disk, the open upper half-plane, and so forth. Often, a complex domain serves as the domain of definition for a holomorphic function. In the study of several complex variables, the definition of a domain is extended to include any connected open subset of Cⁿ. In Euclidean spaces, one-, two-, and three-dimensional regions are curves, surfaces, and solids, whose extents are called, respectively, length, area, and volume.

Definition. An open set is connected if it cannot be expressed as the sum of two open sets. An open connected set is called a domain. (German: Eine offene Punktmenge heißt zusammenhängend, wenn man sie nicht als Summe von zwei offenen Punktmengen darstellen kann. Eine offene zusammenhängende Punktmenge heißt ein Gebiet.)

According to Hans Hahn, [8] the concept of a domain as an open connected set was introduced by Constantin Carathéodory in his famous book (Carathéodory 1918). In this definition, Carathéodory considers obviously non-empty disjoint sets. Hahn also remarks that the word "Gebiet" ("domain") was occasionally previously used as a synonym of open set. [9]

The rough concept is older. In the 19th and early 20th century, the terms domain and region were often used informally (sometimes interchangeably) without explicit definition. [10] However, the term "domain" was occasionally used to identify closely related but slightly different concepts. For example, in his influential monographs on elliptic partial differential equations, Carlo Miranda uses the term "region" to identify an open connected set, [11][12] and reserves the term "domain" to identify an internally connected, [13] perfect set, each point of which is an accumulation point of interior points, [11] following his former master Mauro Picone. [14] According to this convention, if a set A is a region then its closure Ā is a domain. [11]
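Carathéodory's quoted definition can be transcribed symbolically (a direct restatement, with notation chosen here for illustration):

```latex
% An open set is connected iff it is not a union of two disjoint
% non-empty open sets; a domain is a non-empty such set:
U \subseteq \mathbb{R}^n \text{ is a domain}
\iff
U \neq \emptyset,\ U \text{ open, and }
\nexists\, A, B \text{ open, non-empty, disjoint with } U = A \cup B .
```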
https://en.wikipedia.org/wiki/Domain_(mathematical_analysis)
Online transaction processing (OLTP) is a type of database system used in transaction-oriented applications, such as many operational systems. "Online" refers to the fact that such systems are expected to respond to user requests and process them in real time (process transactions). The term is contrasted with online analytical processing (OLAP), which instead focuses on data analysis (for example planning and management systems).

The term "transaction" can have two different meanings, both of which might apply: in the realm of computers or database transactions it denotes an atomic change of state, whereas in the realm of business or finance, the term typically denotes an exchange of economic entities (as used by, e.g., the Transaction Processing Performance Council, or commercial transactions). [1]: 50 OLTP may use transactions of the first type to record transactions of the second type.

OLTP is typically contrasted with online analytical processing (OLAP), which is generally characterized by much more complex queries, in a smaller volume, for the purpose of business intelligence or reporting rather than to process transactions. Whereas OLTP systems process all kinds of queries (read, insert, update and delete), OLAP is generally optimized for read only and might not even support other kinds of queries. OLTP also operates differently from batch processing and grid computing. [1]: 15

In addition, OLTP is often contrasted with online event processing (OLEP), which is based on distributed event logs to offer strong consistency in large-scale heterogeneous systems. [2] Whereas OLTP is associated with short atomic transactions, OLEP allows for more flexible distribution patterns and higher scalability, but with increased latency and without a guaranteed upper bound on processing time. OLTP has also been used to refer to processing in which the system responds immediately to user requests.

An automated teller machine (ATM) for a bank is an example of a commercial transaction processing application. [3] Online transaction processing applications have high throughput and are insert- or update-intensive in database management. These applications are used concurrently by hundreds of users. The key goals of OLTP applications are availability, speed, concurrency and recoverability (durability). [4] Reduced paper trails and faster, more accurate forecasts for revenues and expenses are both examples of how OLTP makes things simpler for businesses. However, like many modern online information technology solutions, some systems require offline maintenance, which further affects the cost-benefit analysis of an online transaction processing system.

An OLTP system is an accessible data processing system in today's enterprises. Some examples of OLTP systems include order entry, retail sales, and financial transaction systems. [5] Online transaction processing systems increasingly require support for transactions that span a network and may include more than one company. For this reason, modern online transaction processing software uses client or server processing and brokering software that allows transactions to run on different computer platforms in a network.

In large applications, efficient OLTP may depend on sophisticated transaction management software (such as IBM CICS) and/or database optimization tactics to facilitate the processing of large numbers of concurrent updates to an OLTP-oriented database. For even more demanding decentralized database systems, OLTP brokering programs can distribute transaction processing among multiple computers on a network. OLTP is often integrated into service-oriented architecture (SOA) and web services.

Online transaction processing involves gathering input information, processing the data, and updating existing data to reflect the collected and processed information. As of today, most organizations use a database management system to support OLTP. OLTP is typically carried out in a client–server system. Online transaction processing is concerned with concurrency and atomicity. Concurrency controls guarantee that two users accessing the same data in the database system will not be able to change that data simultaneously; one user has to wait until the other has finished processing before changing that piece of data. Atomicity controls guarantee that all the steps in a transaction are completed successfully as a group. That is, if any step in the transaction fails, all other steps must fail also. [6]

To build an OLTP system, a designer must ensure that a large number of concurrent users does not interfere with the system's performance. To increase the performance of an OLTP system, a designer must avoid excessive use of indexes and clusters. The following elements are crucial for the performance of OLTP systems: [4]
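The atomicity control described above can be sketched with Python's built-in sqlite3 module; the accounts table and transfer() function are invented for this example and are not part of any OLTP product. The connection's context manager commits on success and rolls back on an exception, so the two updates of a transfer either both apply or neither does:

```python
import sqlite3

# Illustrative sketch of OLTP-style atomicity using Python's built-in
# sqlite3; the accounts table and transfer() are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money atomically: both UPDATEs commit together or neither does."""
    try:
        with conn:  # context manager commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? "
                         "WHERE name = ?", (amount, src))
            (balance,) = conn.execute("SELECT balance FROM accounts "
                                      "WHERE name = ?", (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? "
                         "WHERE name = ?", (amount, dst))
        return True
    except ValueError:
        return False

transfer(conn, "alice", "bob", 30)   # succeeds: alice 70, bob 80
transfer(conn, "alice", "bob", 500)  # fails and is rolled back entirely
```

A production OLTP system would add concurrency control across many sessions; sqlite3 serves here only to make the commit-or-rollback behavior concrete.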
https://en.wikipedia.org/wiki/Online_transaction_processing
The cute cat theory of digital activism is a theory concerning Internet activism, web censorship, and "cute cats" (a term used for any low-value but popular online activity) developed by Ethan Zuckerman in 2008. [1][2] It posits that most people are not interested in activism; instead, they want to use the web for mundane activities, including surfing for pornography and lolcats ("cute cats"). [3] The tools that they develop for that (such as Facebook, Flickr, Blogger, Twitter, and similar platforms) are very useful to social movement activists, because they may lack the resources to develop dedicated tools themselves. [3] This, in turn, makes the activists more immune to reprisals by governments than if they were using a dedicated activism platform, because shutting down a popular public platform provokes a larger public outcry than shutting down an obscure one. [3]

Zuckerman states that "Web 1.0 was invented to allow physicists to share research papers. Web 2.0 was created to allow people to share pictures of cute cats." [3] Zuckerman says that if a tool has "cute cat" purposes, and is widely used for low-value purposes, it can be and likely is used for online activism, too. [3] If the government chooses to shut down such generic tools, it will hurt people's ability to "look at cute cats online", spreading dissent and encouraging the activists' cause. [2][3]

According to Zuckerman, Internet censorship in the People's Republic of China, which relies on its own self-censored Web 2.0 sites, is able to circumvent the cute-cat problem because the government is able to provide people with access to cute-cat content on domestic, self-censored sites while blocking access to Western sites, which are less popular in China than in many other places worldwide. [3][4]

"Sufficiently usable read/write platforms will attract porn and activists. If there's no porn, the tool doesn't work. If there are no activists, it doesn't work well," Zuckerman has stated. [3]
https://en.wikipedia.org/wiki/Cute_cat_theory_of_digital_activism
The following outline is provided as an overview of and topical guide to cryptography:

Cryptography (or cryptology) – the practice and study of hiding information. Modern cryptography intersects the disciplines of mathematics, computer science, and engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.

List of cryptographers
https://en.wikipedia.org/wiki/Topics_in_cryptography
Media intelligence uses data mining and data science to analyze public, social and editorial media content. It refers to marketing systems that synthesize billions of online conversations into relevant information. This allows organizations to measure and manage content performance, understand trends, and drive communications and business strategy. Media intelligence can include software as a service using big data terminology. [1] This includes questions about messaging efficiency, share of voice, audience geographical distribution, message amplification, influencer strategy, journalist outreach, creative resonance, and competitor performance in all these areas.

Media intelligence differs from business intelligence in that it uses and analyzes data outside company firewalls. Examples of such data are user-generated content on social media sites, blogs, comment fields, and wikis. It may also include other public data sources like press releases, news, blogs, legal filings, reviews and job postings. Media intelligence may also include competitive intelligence, wherein information gathered from publicly available sources such as social media, press releases, and news announcements is used to better understand the strategies and tactics being deployed by competing businesses. [2]

Media intelligence is enhanced by means of emerging technologies like ambient intelligence, machine learning, semantic tagging, natural language processing, sentiment analysis and machine translation. Different media intelligence platforms use different technologies for monitoring, curating content, engaging with content, data analysis, and measurement of communications and marketing campaign success. These technology providers may obtain content by scraping it directly from websites, or by connecting to APIs provided by social media or other content platforms that are created for third-party developers to build their own applications and services that access data. Technology companies may also get data from a data reseller.

Some social media monitoring and analytics companies use calls to data providers each time an end user develops a query. Others archive and index social media posts to provide end users with on-demand access to historical data and enable methodologies and technologies leveraging network and relational data. Additional monitoring companies use crawlers and spidering technology to find keyword references, known as semantic analysis or natural language processing. Basic implementation involves curating data from social media on a large scale and analyzing the results to make sense of it. [3]
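As a toy illustration of the "basic implementation" described above, the following Python sketch curates posts by keyword and applies a crude lexicon-based sentiment score; the lexicons, post texts, and brand name are all invented for the example, and real platforms use far richer NLP:

```python
# Toy sketch of keyword monitoring with a crude lexicon-based sentiment
# score; the lexicons, posts, and brand name are invented for the example.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "hate", "broken"}

def sentiment(post: str) -> int:
    """+1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def monitor(posts, keyword):
    """Curate posts mentioning the keyword and aggregate their sentiment."""
    hits = [p for p in posts if keyword.lower() in p.lower()]
    return {"mentions": len(hits), "net_sentiment": sum(sentiment(p) for p in hits)}

posts = [
    "Love the new AcmePhone camera",
    "AcmePhone battery is bad",
    "Great weather today",
]
report = monitor(posts, "AcmePhone")  # {'mentions': 2, 'net_sentiment': 0}
```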
https://en.wikipedia.org/wiki/Media_intelligence
A polymorphic engine (sometimes called mutation engine or mutating engine) is a software component that uses polymorphic code to alter the payload while preserving the same functionality. Polymorphic engines are used almost exclusively in malware, with the purpose of being harder for antivirus software to detect. They do so either by encrypting or obfuscating the malware payload. One common deployment is a file binder that weaves malware into normal files, such as Office documents. Since this type of malware is usually polymorphic, it is also known as a polymorphic packer. The engine of the Virut botnet is an example of a polymorphic engine. [1]
https://en.wikipedia.org/wiki/Polymorphic_engine
In computer science, instruction scheduling is a compiler optimization used to improve instruction-level parallelism, which improves performance on machines with instruction pipelines. Put more simply, it tries to do the following without changing the meaning of the code:

Pipeline stalls can be caused by structural hazards (processor resource limits), data hazards (the output of one instruction is needed by another instruction) and control hazards (branching).

Instruction scheduling is typically done on a single basic block. In order to determine whether rearranging the block's instructions in a certain way preserves the behavior of that block, we need the concept of a data dependency. There are three types of dependencies, which also happen to be the three data hazards:

Technically, there is a fourth type, read after read (RAR or "input"): both instructions read the same location. Input dependence does not constrain the execution order of two statements, but it is useful in scalar replacement of array elements.

To make sure we respect the three types of dependencies, we construct a dependency graph, which is a directed graph where each vertex is an instruction and there is an edge from I1 to I2 if I1 must come before I2 due to a dependency. If loop-carried dependencies are left out, the dependency graph is a directed acyclic graph. Then, any topological sort of this graph is a valid instruction schedule. The edges of the graph are usually labelled with the latency of the dependence: the number of clock cycles that must elapse before the pipeline can proceed with the target instruction without stalling.

The simplest algorithm to find a topological sort is frequently used and is known as list scheduling. Conceptually, it repeatedly selects a source of the dependency graph, appends it to the current instruction schedule, and removes it from the graph. This may cause other vertices to become sources, which will then also be considered for scheduling. The algorithm terminates when the graph is empty. To arrive at a good schedule, stalls should be prevented. This is determined by the choice of the next instruction to be scheduled. A number of heuristics are in common use:

Instruction scheduling may be done either before or after register allocation, or both before and after it. The advantage of doing it before register allocation is that this results in maximum parallelism. The disadvantage is that it can result in the register allocator needing to use a number of registers exceeding those available. This will cause spill/fill code to be introduced, which will reduce the performance of the section of code in question.

If the architecture being scheduled has instruction sequences that have potentially illegal combinations (due to a lack of instruction interlocks), the instructions must be scheduled after register allocation. This second scheduling pass will also improve the placement of the spill/fill code. If scheduling is only done after register allocation, then there will be false dependencies introduced by the register allocation that will limit the amount of instruction motion possible by the scheduler.

There are several types of instruction scheduling:

The GNU Compiler Collection is one compiler known to perform instruction scheduling, using the -march (both instruction set and scheduling) or -mtune (only scheduling) flags. It uses descriptions of instruction latencies and of which instructions can run in parallel (or equivalently, which "port" each instruction uses) for each microarchitecture to perform the task. This feature is available on almost all architectures that GCC supports. [2]

Until version 12.0.0, the instruction scheduling in LLVM/Clang could only accept a -march (called target-cpu in LLVM parlance) switch for both instruction set and scheduling. Version 12 adds support for -mtune (tune-cpu) for x86 only. [3]

Sources of information on latency and port usage include:

LLVM's llvm-exegesis should be usable on all machines, especially to gather information on non-x86 ones. [6]
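Since any topological sort of the dependency graph is a valid schedule, a minimal sketch can lean on Python's standard graphlib module; the four-instruction block and its dependencies are invented for illustration, and a real list scheduler would additionally break ties with latency heuristics:

```python
from graphlib import TopologicalSorter

# Invented four-instruction basic block; each instruction maps to the set
# of instructions that must be scheduled before it (its dependency-graph
# predecessors).
deps = {
    "load r1, [a]": set(),
    "load r2, [b]": set(),
    "add r3, r1, r2": {"load r1, [a]", "load r2, [b]"},
    "store [c], r3": {"add r3, r1, r2"},
}

# Any topological order of the dependency graph is a valid schedule.
schedule = list(TopologicalSorter(deps).static_order())
```

Here the two loads may appear in either order, but the add must follow both and the store must come last.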
https://en.wikipedia.org/wiki/Superblock_scheduling
pruningis adata compressiontechnique inmachine learningandsearch algorithmsthat reduces the size ofdecision treesby removing sections of the tree that are non - critical and redundant to classify instances. pruning reduces the complexity of the finalclassifier, and hence improves predictive accuracy by the reduction ofoverfitting. one of the questions that arises in a decision tree algorithm is the optimal size of the final tree. a tree that is too large risksoverfittingthe training data and poorly generalizing to new samples. a small tree might not capture important structural information about the sample space. however, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. this problem is known as thehorizon effect. a common strategy is to grow the tree until each node contains a small number of instances then use pruning to remove nodes that do not provide additional information. [ 1 ] pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by across - validationset. there are many techniques for tree pruning that differ in the measurement that is used to optimize performance. pruning processes can be divided into two types ( pre - and post - pruning ). pre - pruningprocedures prevent a complete induction of the training set by replacing a stop ( ) criterion in the induction algorithm ( e. g. max. tree depth or information gain ( attr ) > mingain ). pre - pruning methods are considered to be more efficient because they do not induce an entire set, but rather trees remain small from the start. prepruning methods share a common problem, the horizon effect. this is to be understood as the undesired premature termination of the induction by the stop ( ) criterion. post - pruning ( or just pruning ) is the most common way of simplifying trees. here, nodes and subtrees are replaced with leaves to reduce complexity. 
Pruning can not only significantly reduce the size of a tree but also improve the classification accuracy of unseen objects. It may be the case that the accuracy of the assignment on the training set deteriorates, but the accuracy of the classification properties of the tree increases overall. The procedures are differentiated on the basis of their approach in the tree: top-down or bottom-up. Bottom-up procedures start at the last node in the tree (the lowest point).
https://en.wikipedia.org/wiki/Pruning_(algorithm)
Following recursively upwards, they determine the relevance of each individual node. If the relevance for the classification is not given, the node is dropped or replaced by a leaf. The advantage is that no relevant subtrees can be lost with this method. These methods include reduced error pruning (REP), minimum cost complexity pruning (MCCP), and minimum error pruning (MEP). In contrast to the bottom-up methods, top-down methods start at the root of the tree. Following the structure downward, a relevance check is carried out which decides whether a node is relevant for the classification of all n items. By pruning the tree at an inner node, an entire subtree (regardless of its relevance) may be dropped. One of these representatives is pessimistic error pruning (PEP), which gives quite good results with unseen items. One of the simplest forms of pruning is reduced error pruning. Starting at the leaves, each node is replaced with its most popular class. If the prediction accuracy is not affected, the change is kept. While somewhat naive, reduced error pruning has the advantage of simplicity and speed. Cost complexity pruning generates a series of trees T0, ..., Tm, where T0 is the initial tree and Tm is the root alone. At step i, the tree is created by removing a subtree from tree i−1 and replacing it with a leaf node whose value is chosen as in the tree-building algorithm.
The subtree that is removed is chosen as follows. The function prune(T, t) defines the tree obtained by pruning the subtree t from the tree T.
Once the series of trees has been created, the best tree is chosen by generalized accuracy as measured by a training set or cross-validation. Pruning can also be applied in a compression scheme of a learning algorithm to remove redundant details without compromising the model's performance. In neural networks, pruning removes entire neurons or layers of neurons.
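The reduced error pruning procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not any library's API: the dict-based tree encoding (keys like "feature" and "threshold") and the leaf-counting majority vote are invented simplifications (a fuller implementation would weight leaves by their training counts).

```python
def predict(node, x):
    # Walk from the root down to a leaf and return its class label.
    while "label" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["label"]

def accuracy(tree, data):
    return sum(predict(tree, x) == y for x, y in data) / len(data)

def majority_label(node):
    # Most popular class among the leaves beneath this node.
    counts = {}
    stack = [node]
    while stack:
        n = stack.pop()
        if "label" in n:
            counts[n["label"]] = counts.get(n["label"], 0) + 1
        else:
            stack += [n["left"], n["right"]]
    return max(counts, key=counts.get)

def reduced_error_prune(tree, validation):
    # Bottom-up: try replacing each inner node with a leaf holding its most
    # popular class; keep the change if validation accuracy does not drop.
    def walk(node):
        if "label" in node:
            return node
        node["left"], node["right"] = walk(node["left"]), walk(node["right"])
        before = accuracy(tree, validation)
        backup = dict(node)
        node.clear()
        node["label"] = majority_label(backup)
        if accuracy(tree, validation) < before:
            node.clear()
            node.update(backup)  # pruning hurt accuracy: revert
        return node
    return walk(tree)
```

On a hand-built tree whose right subtree splits needlessly (both of its leaves predict the same class), the procedure collapses that subtree to a single leaf while leaving the useful root split intact.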
Accept often refers to: Accept can also refer to:
https://en.wikipedia.org/wiki/Accept_(disambiguation)
Defense in depth is a concept used in information security in which multiple layers of security controls (defenses) are placed throughout an information technology (IT) system. Its intent is to provide redundancy in the event a security control fails or a vulnerability is exploited, covering aspects of personnel, procedural, technical and physical security for the duration of the system's life cycle. The idea behind the defense in depth approach is to defend a system against any particular attack using several independent methods. [1] It is a layering tactic, conceived [2] by the National Security Agency (NSA) as a comprehensive approach to information and electronic security. [3][4] An insight into defense in depth can be gained by thinking of it as forming the layers of an onion, with data at the core, people the next layer out, and network security, host-based security, and application security forming the outermost layers. [5] Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense in depth strategy. [6] Defense in depth can be divided into three areas: physical, technical, and administrative. [7] Physical controls [3] are anything that physically limits or prevents access to IT systems. Examples of physical defensive security are fences, guards, dogs, and CCTV systems. Technical controls are hardware or software whose purpose is to protect systems and resources. Examples of technical controls would be disk encryption, file integrity software, and authentication. Hardware technical controls differ from physical controls in that they prevent access to the contents of a system, but not to the physical systems themselves. Administrative controls are an organization's policies and procedures. Their purpose is to ensure that there is proper guidance available in regard to security and that regulations are met. They include things such as hiring practices, data handling procedures, and security requirements. Using more than one of the following layers constitutes an example of defense in depth.
https://en.wikipedia.org/wiki/Defense_in_depth_(computing)
Dynamic program analysis is the act of analyzing software by executing a program, as opposed to static program analysis, which does not execute it. Analysis can focus on different aspects of the software, including but not limited to behavior, test coverage, performance and security. To be effective, the target program must be executed with sufficient test inputs [1] to address the ranges of possible inputs and outputs. Software testing measures, such as code coverage, and tools such as mutation testing, are used to identify where testing is inadequate. Functional testing includes relatively common programming techniques such as unit testing, integration testing and system testing. [2] Computing the code coverage of a test identifies code that is not tested, i.e. not covered by any test. Although this analysis identifies code that is not tested, it does not determine whether tested code is adequately tested: code can be executed even if the tests do not actually verify correct behavior. Dynamic testing involves executing a program on a set of test cases. Fuzzing is a testing technique that involves executing a program on a wide variety of inputs; often these inputs are randomly generated (at least in part). Gray-box fuzzers use code coverage to guide input generation. Dynamic symbolic execution (also known as DSE or concolic execution) involves executing a test program on a concrete input, collecting the path constraints associated with the execution, and using a constraint solver (generally an SMT solver) to generate new inputs that would cause the program to take a different control-flow path, thus increasing the code coverage of the test suite. [3] DSE can be considered a type of fuzzing ("white-box" fuzzing). Dynamic data-flow analysis tracks the flow of information from sources to sinks. Forms of dynamic data-flow analysis include dynamic taint analysis and even dynamic symbolic execution.
[4][5] Daikon is an implementation of dynamic invariant detection. Daikon runs a program, observes the values that the program computes, and then reports properties that were true over the observed executions, and thus likely true over all executions.
https://en.wikipedia.org/wiki/Dynamic_program_analysis
Dynamic analysis can be used to detect security problems. For a given subset of a program's behavior, program slicing consists of reducing the program to the minimal form that still produces the selected behavior. The reduced program is called a "slice" and is a faithful representation of the original program within the domain of the specified behavior subset. Generally, finding a slice is an unsolvable problem, but by specifying the target behavior subset by the values of a set of variables, it is possible to obtain approximate slices using a data-flow algorithm. These slices are usually used by developers during debugging to locate the source of errors. Most performance analysis tools use dynamic program analysis techniques. [citation needed] Most dynamic analysis involves instrumentation or transformation. Since instrumentation can affect runtime performance, interpretation of test results must account for this overhead to avoid misidentifying a performance problem. Dyninst is a runtime code-patching library that is useful in developing dynamic program analysis probes and applying them to compiled executables. Dyninst does not require source code or recompilation in general; however, non-stripped executables and executables with debugging symbols are easier to instrument. Iroh.js is a runtime code analysis library for JavaScript. It keeps track of the code execution path, provides runtime listeners to listen for specific executed code patterns, and allows the interception and manipulation of the program's execution behavior.
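As a minimal illustration of dynamic analysis by instrumentation, the sketch below collects line coverage for a single Python function with the standard library's `sys.settrace` hook. The function and variable names are invented, and real coverage tools are far more sophisticated:

```python
import sys

def trace_coverage(func, *args):
    """Run func(*args), returning its result and the set of executed
    line offsets (relative to the def line) inside func."""
    covered = set()
    target = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is target:
            covered.add(frame.f_lineno - target.co_firstlineno)
        return tracer

    sys.settrace(tracer)   # instrument: every executed line fires an event
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # remove the probe
    return result, covered

def absolute(x):
    if x < 0:        # offset 1
        return -x    # offset 2
    return x         # offset 3

result, lines = trace_coverage(absolute, 5)
# With only the input 5, offset 2 (the negative branch) is never executed:
# the test inputs are insufficient, which is exactly the gap that coverage
# measurement exposes. Note the probe itself slows execution down, the
# instrumentation overhead discussed above.
```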
Crash-only software is a computer program that handles failures by simply restarting, without attempting any sophisticated recovery. [1] Correctly written components of crash-only software can microreboot to a known-good state without the help of a user. Since failure handling and normal startup use the same methods, this can increase the chance that bugs in failure-handling code will be noticed, [clarification needed] except when there are leftover artifacts, such as data corruption from a severe failure, that do not occur during normal startup. [citation needed] Crash-only software also has benefits for end users. All too often, applications do not save their data and settings while running, only at the end of their use. For example, word processors usually save settings when they are closed. A crash-only application is designed to save all changed user settings soon after they are changed, so that the persistent state matches that of the running machine. No matter how an application terminates (be it a clean close or the sudden failure of a laptop battery), the state will persist.
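The save-settings-immediately pattern can be sketched as follows. This is a minimal illustration (the class name, file format, and keys are invented): every change is written to disk at once via an atomic rename, so a crash mid-write never corrupts the stored state, and startup after a crash is the same code path as a normal first start.

```python
import json, os, tempfile

class CrashOnlySettings:
    """Persist every change immediately; startup and crash recovery
    deliberately share a single code path."""

    def __init__(self, path):
        self.path = path
        try:
            with open(path) as f:
                self.data = json.load(f)  # same path for first start and restart
        except FileNotFoundError:
            self.data = {}

    def set(self, key, value):
        self.data[key] = value
        # Write to a temp file, then atomically replace: a crash mid-write
        # leaves the previous consistent file untouched.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(self.data, f)
        os.replace(tmp, self.path)

# Simulated crash: the first object is dropped without any clean shutdown,
# yet a fresh start ("microreboot") sees the last change.
path = os.path.join(tempfile.mkdtemp(), "settings.json")
CrashOnlySettings(path).set("font", "mono")
restarted = CrashOnlySettings(path)
print(restarted.data)
```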
https://en.wikipedia.org/wiki/Crash-only_software
TrackR was a commercial key finder that assisted in the tracking of lost belongings and devices. [1] TrackR was produced by the company Phone Halo [2] and was inspired by the founders losing their keys on a beach during a surfing trip. [3] The founders of Phone Halo began working on TrackR in 2009. In 2010, they founded the company and launched the product. [4] In winter 2018, TrackR rebranded itself to Adero as part of changing its focus to other uses for its tracking technology, taking TrackR beyond the Bluetooth fobs that had been the core of its service. [5] TrackR shut down its services and removed its apps in August 2021. [6] The device contains a lithium battery that needs to be changed about once a year by the user. It communicates its current location via Bluetooth 4.0 to an Android 4.4+ or iOS 8.0+ mobile device on which the TrackR app is installed and running. This feature is referred to as "Crowd Locate", since each device will report its location to all other TrackR devices in range, including those that are neither owned nor registered by the user. This feature is useful because the app must be installed and running on a nearby Bluetooth-enabled device for any device's location to be relayed. As of August 2017, over 5 million TrackR devices had been sold. [3] As of August 2021, the official website stated that the manufacturer had discontinued app support for both Apple and Android devices. For TrackR Bravo, the producer published the following data as of August 2017: [7]
https://en.wikipedia.org/wiki/TrackR
In computer programming, explicit parallelism is the representation of concurrent computations using primitives in the form of operators, function calls or special-purpose directives. [1] Most parallel primitives are related to process synchronization, communication and process partitioning. [2] As they seldom contribute to actually carrying out the intended computation of the program, but rather structure it, their computational cost is often considered overhead. The advantage of explicit parallel programming is increased programmer control over the computation. A skilled parallel programmer may take advantage of explicit parallelism to produce efficient code for a given target computation environment. However, programming with explicit parallelism is often difficult, especially for non-computing specialists, because of the extra work and skill involved in developing it. In some instances, explicit parallelism may be avoided with the use of an optimizing compiler or runtime that automatically deduces the parallelism inherent in computations, known as implicit parallelism. Some of the programming languages that support explicit parallelism are:
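As a small sketch of explicit parallelism using Python's standard `concurrent.futures`: the partitioning of work, the creation of workers, and the final synchronization are all spelled out by the programmer (the chunking scheme and worker count are arbitrary illustrative choices), and note that none of these primitives performs any of the intended computation itself, which is why their cost counts as overhead.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum_squares(numbers, workers=4):
    # Explicit process partitioning: split the input into interleaved chunks.
    chunks = [numbers[i::workers] for i in range(workers)]
    # Explicit worker creation; exiting the with-block is an explicit join.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(lambda chunk: sum(x * x for x in chunk), chunks)
    # Explicit combination of the partial results.
    return sum(partials)

print(parallel_sum_squares(list(range(10))))  # 285, same as the serial sum
```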
https://en.wikipedia.org/wiki/Explicit_parallelism
Journalology (also known as publication science) is the scholarly study of all aspects of the academic publishing process. [1][2] The field seeks to improve the quality of scholarly research by implementing evidence-based practices in academic publishing. [3] The term "journalology" was coined by Stephen Lock, the former editor-in-chief of The BMJ. The first Peer Review Congress, held in 1989 in Chicago, Illinois, is considered a pivotal moment in the founding of journalology as a distinct field. [3] The field of journalology has been influential in pushing for study pre-registration in science, particularly in clinical trials. Clinical trial registration is now expected in most countries. [3] Journalology researchers also work to reform the peer review process. The earliest scientific journals were founded in the seventeenth century. While most early journals used peer review, peer review did not become common practice in medical journals until after World War II. [4] The scholarly publishing process (including peer review) did not arise by scientific means and still suffers from problems with reliability (consistency and dependability), [5] such as a lack of uniform standards, and validity (well-foundedness, efficacy). [6][7] Attempts to reform academic publishing practice began to gain traction in the late twentieth century. [8] The field of journalology was formally established in 1989. [3]
https://en.wikipedia.org/wiki/Journalology
Magnetic logic is digital logic made using the non-linear properties of wound ferrite cores. [1] Magnetic logic represents 0 and 1 by magnetising cores clockwise or anticlockwise. [2] Examples of magnetic logic include core memory. AND, OR, NOT and clocked shift logic gates can also be constructed using appropriate windings and the use of diodes. A complete computer called the ALWAC 800 was constructed using magnetic logic, but it was not commercially successful. The Elliott 803 computer used a combination of magnetic cores (for logic functions) and germanium transistors (as pulse amplifiers) for its CPU; it was a commercial success. William F. Steagall of the Sperry Rand Corporation developed the technology in an effort to improve the reliability of computers. In his patent application, [3] filed in 1954, he stated: "Where, as here, reliability of operation is a factor of prime importance, vacuum tubes, even though acceptable for most present-day electronic applications, are faced with accuracy requirements of an entirely different order of magnitude. For example, if two devices each having 99.5% reliability response are both utilized in a combined relationship in a given device, that device will have an accuracy or reliability factor of 0.995 × 0.995 = 99%. If ten such devices are combined, the factor drops to 95.1%. If, however, 500 such units are combined, the reliability factor of the device drops to 8.1%, and for a thousand, to 0.67%. It will thus be seen that even though the reliability of operation of individual vacuum tubes may be very much above 99.95%, where many thousands of units are combined, as in the large computers, the reliability factor of each unit must be extremely high to combine to produce an error-free device. In practice of course such an ideal can only be approached. Magnetic amplifiers of the type here described meet the necessary requirements of reliability of performance for the combinations discussed." Magnetic logic was able to achieve switching speeds of about 1 MHz but was overtaken by semiconductor-based electronics, which was able to switch much faster. Solid-state semiconductors were able to increase their density according to Moore's law, and thus proved more effective as IC technology developed. Magnetic logic has the advantage that it is non-volatile: it may be powered down without losing its state. [1]
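Steagall's compound-reliability arithmetic is easy to check: n independent components, each with reliability r, give an overall reliability of r to the power n. A short sketch:

```python
def combined_reliability(r, n):
    # n independent components, each with individual reliability r
    return r ** n

for n in (2, 10, 500, 1000):
    pct = 100 * combined_reliability(0.995, n)
    print(f"{n:4d} units: {pct:.2f}%")
```

This reproduces the patent's figures of 99%, 95.1% and 0.67%; the 500-unit case comes out near 8.2%, so the patent's 8.1% reflects rounding.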
https://en.wikipedia.org/wiki/Magnetic_logic
Rigetti Computing, Inc. is a Berkeley, California-based developer of superconducting quantum integrated circuits used for quantum computers. Rigetti also develops a cloud platform called Forest that enables programmers to write quantum algorithms. [2] Rigetti Computing was founded in 2013 by Chad Rigetti, a physicist with a background in quantum computers from IBM who studied under Michel Devoret. [2][3] The company emerged from the startup incubator Y Combinator in 2014 as a so-called "spaceshot" company. [4][5] Later that year, Rigetti also participated in the Alchemist Accelerator, a venture capital programme. [5] By February 2016, Rigetti had created its first quantum processor, a three-qubit chip made using aluminum circuits on a silicon wafer. [6] That same year, Rigetti raised Series A funding of US$24 million in a round led by Andreessen Horowitz. In November, the company secured Series B funding of US$40 million in a round led by the investment firm Vy Capital, along with additional funding from Andreessen Horowitz and other investors. Y Combinator also participated in both rounds. [5] By spring of 2017, Rigetti had advanced to testing eight-qubit quantum computers. [3] In June, the company announced the release of Forest 1.0, a quantum computing platform designed to enable developers to create quantum algorithms. [2] In October 2021, Rigetti announced plans to go public via a SPAC merger, with an estimated valuation of around US$1.5 billion. [7][8] The deal was expected to raise an additional US$458 million, bringing the total funding to US$658 million. [7] The funds were to be used to accelerate the company's growth, including scaling its quantum processors from 80 qubits to 1,000 qubits by 2024, and to 4,000 by 2026. [9] The SPAC deal closed on 2 March 2022, and Rigetti began trading on the Nasdaq under the ticker symbol RGTI.
https://en.wikipedia.org/wiki/Rigetti_Computing
[10] In December 2022, Subodh Kulkarni became president and CEO of the company. [11] In July 2023, Rigetti launched a single-chip 84-qubit quantum processor that can scale to even larger systems. [12] Rigetti Computing is a full-stack quantum computing company, a term indicating that the company designs and fabricates quantum chips, integrates them with a controlling architecture, and develops software that programmers use to build algorithms for the chips. [13] The company hosts a cloud computing platform called Forest, which gives developers access to quantum processors so they can write quantum algorithms for testing purposes. The computing platform is based on a custom instruction language the company developed called Quil, which stands for Quantum Instruction Language. Quil facilitates hybrid quantum/classical computing, and programs can be built and executed using open-source Python tools. [13][14] As of June 2017, the platform allowed coders to write quantum algorithms for a simulation of a quantum chip with 36 qubits. [2] The company operates a rapid prototyping fabrication ("fab") lab called Fab-1, designed to quickly create integrated circuits. Lab engineers design and generate experimental designs for 3D-integrated quantum circuits for qubit-based quantum hardware. [13] The company was recognized in 2016 by X Prize founder Peter Diamandis as one of the three leaders in the quantum computing space, along with IBM and Google. [15] MIT Technology Review named the company one of the 50 smartest companies of 2017. [16] Rigetti Computing is headquartered in Berkeley, California, where it hosts developmental systems and cooling equipment. [15] The company also operates its Fab-1 manufacturing facility in nearby Fremont. [2]
In combinatorial mathematics, a large set of positive integers is one such that the infinite sum of the reciprocals diverges. A small set is any subset of the positive integers that is not large; that is, one whose sum of reciprocals converges. Large sets appear in the Müntz–Szász theorem and in the Erdős conjecture on arithmetic progressions. Paul Erdős conjectured that all large sets contain arbitrarily long arithmetic progressions. He offered a prize of $3000 for a proof, more than for any of his other conjectures, and joked that this prize offer violated the minimum wage law. [1] The question is still open. It is not known how to identify whether a given set is large or small in general. As a result, there are many sets which are not known to be either large or small.
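A quick numerical illustration of the distinction (truncating both sums at 10,000 terms, a cutoff chosen only for the demo): the reciprocals of all positive integers, a large set, keep growing like ln n, while the reciprocals of the perfect squares, a small set, settle toward the limit π²/6 ≈ 1.6449.

```python
def reciprocal_sum(terms):
    return sum(1.0 / t for t in terms)

# Large set: partial sums of the harmonic series grow without bound.
harmonic = reciprocal_sum(range(1, 10_001))
# Small set: the sum over the perfect squares converges to pi**2 / 6.
squares = reciprocal_sum(n * n for n in range(1, 10_001))

print(round(harmonic, 3))  # still climbing, roughly ln(10000) + 0.577
print(round(squares, 3))   # essentially at its limit already
```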
https://en.wikipedia.org/wiki/Large_set_(combinatorics)
Mathematical puzzles make up an integral part of recreational mathematics. They have specific rules, but they do not usually involve competition between two or more players. Instead, to solve such a puzzle, the solver must find a solution that satisfies the given conditions. Mathematical puzzles require mathematics to solve them. Logic puzzles are a common type of mathematical puzzle. Conway's Game of Life and fractals, as two examples, may also be considered mathematical puzzles even though the solver interacts with them only at the beginning, by providing a set of initial conditions. After these conditions are set, the rules of the puzzle determine all subsequent changes and moves. Many of the puzzles are well known because they were discussed by Martin Gardner in his "Mathematical Games" column in Scientific American. Mathematical puzzles are sometimes used to motivate students in teaching elementary school math problem-solving techniques. [1] Creative thinking, or "thinking outside the box", often helps to find the solution. The fields of knot theory and topology, especially their non-intuitive conclusions, are often seen as a part of recreational mathematics.
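Conway's Game of Life illustrates the "interaction only at the beginning" point: the solver supplies the initial live cells, and the rules alone determine every later generation. A minimal sketch, representing live cells as a set of coordinate pairs:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on an unbounded grid."""
    # Count, for every cell, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}        # initial condition: horizontal bar
print(step(blinker))                       # becomes a vertical bar
print(step(step(blinker)) == blinker)      # True: a period-2 oscillator
```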
https://en.wikipedia.org/wiki/Mathematical_puzzle
In the Eastern Orthodox Church, the Catholic Church, [1] and in the teachings of the Church Fathers which undergird the theology of those communions, economy or oeconomy (Greek: οικονομια, oikonomia) has several meanings. [2] The basic meaning of the word is "handling", "disposition" or "management" of a thing, or more literally "housekeeping", usually assuming or implying good or prudent handling (as opposed to poor handling) of the matter at hand. In short, economia is a discretionary deviation from the letter of the law in order to adhere to the spirit of the law and charity. This is in contrast to legalism, or akribia (Greek: ακριβεια), which is strict adherence to the letter of the law of the church. The divine economy, in Eastern Orthodoxy, refers not only to God's actions to bring about the world's salvation and redemption, but to all of God's dealings with, and interactions with, the world, including the creation. [3][verification needed] According to Lossky, theology (literally, "words about God" or "teaching about God") was concerned with all that pertains to God alone, in himself, i.e. the teaching on the Trinity, the divine attributes, and so on; but it was not concerned with anything pertaining to the creation or the redemption. Lossky writes: "The distinction between οικονομια [economy] and θεολογια [theology] [...] remains common to most of the Greek Fathers and to all of the Byzantine tradition. θεολογια [...] means, in the fourth century, everything which can be said of God considered in himself, outside of his creative and redemptive economy. To reach this 'theology' properly so-called, one therefore must go beyond [...
] God as creator of the universe, in order to be able to extricate the notion of the Trinity from the cosmological implications proper to the 'economy.'" [3]
https://en.wikipedia.org/wiki/Economy_(religion)
The Ecumenical Patriarchate considers that, through "extreme oikonomia [economy]", those who are baptized in the Oriental Orthodox, Roman Catholic, Lutheran, Old Catholic, Moravian, Anglican, Methodist, Reformed, Presbyterian, Church of the Brethren, Assemblies of God, or Baptist traditions can be received into the Eastern Orthodox Church through the sacrament of chrismation and not through re-baptism. [4] In the canon law of the Eastern Orthodox Church, the notions of akriveia and economia (economy) also exist. Akriveia, which is harshness, "is the strict application (sometimes even extension) of the penance given to an unrepentant and habitual offender." Economia, which is sweetness, "is a judicious relaxation of the penance when the sinner shows remorse and repentance." [5] According to the Catechism of the Catholic Church: [6] The Fathers of the Church distinguish between theology (theologia) and economy (oikonomia). "Theology" refers to the mystery of God's inmost life within the Blessed Trinity and "economy" to all the works by which God reveals himself and communicates his life. Through the oikonomia the theologia is revealed to us; but conversely, the theologia illuminates the whole oikonomia. God's works reveal who he is in himself; the mystery of his inmost being enlightens our understanding of all his works. So it is, analogously, among human persons. A person discloses himself in his actions, and the better we know a person, the better we understand his actions.
In programming language theory, semantics is the rigorous mathematical study of the meaning of programming languages. [1] Semantics assigns computational meaning to valid strings in a programming language's syntax. It is closely related to, and often crosses over with, the semantics of mathematical proofs. Semantics describes the processes a computer follows when executing a program in that specific language. This can be done by describing the relationship between the input and output of a program, or by giving an explanation of how the program will be executed on a certain platform, thereby creating a model of computation. In 1967, Robert W. Floyd published the paper Assigning Meanings to Programs; his chief aim was "a rigorous standard for proofs about computer programs, including proofs of correctness, equivalence, and termination". [2][3] Floyd further wrote: [2] A semantic definition of a programming language, in our approach, is founded on a syntactic definition. It must specify which of the phrases in a syntactically correct program represent commands, and what conditions must be imposed on an interpretation in the neighborhood of each command. In 1969, Tony Hoare published a paper on Hoare logic seeded by Floyd's ideas, now sometimes collectively called axiomatic semantics. [4][5] In the 1970s, the terms operational semantics and denotational semantics emerged. [5] The field of formal semantics encompasses all of the following: It has close links with other areas of computer science such as programming language design, type theory, compilers and interpreters, program verification and model checking. There are many approaches to formal semantics; these belong to three major classes: denotational, operational, and axiomatic semantics. Apart from the choice between denotational, operational, or axiomatic approaches, most variations in formal semantic systems arise from the choice of supporting mathematical formalism.
[citation needed] Some variations of formal semantics include the following: For a variety of reasons, one might wish to describe the relationships between different formal semantics. For example: It is also possible to relate multiple semantics through abstractions via the theory of abstract interpretation. [citation needed]
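One way to make the operational view concrete is to write the evaluation rules directly as an interpreter. The toy language below (numbers, addition, multiplication, variables, and let-bindings, all invented for illustration) gives each syntactic form exactly one big-step rule:

```python
def evaluate(expr, env=None):
    """Big-step evaluation: map a syntax tree and an environment to a value."""
    env = env or {}
    if isinstance(expr, int):                 # rule:  n  =>  n
        return expr
    op = expr[0]
    if op == "add":                           # e1 => v1 and e2 => v2 give v1 + v2
        return evaluate(expr[1], env) + evaluate(expr[2], env)
    if op == "mul":
        return evaluate(expr[1], env) * evaluate(expr[2], env)
    if op == "var":                           # look the name up in the environment
        return env[expr[1]]
    if op == "let":                           # let x = e1 in e2: extend env, eval body
        _, name, bound, body = expr
        return evaluate(body, {**env, name: evaluate(bound, env)})
    raise ValueError(f"no evaluation rule for {expr!r}")

# let x = 3 in x * (x + 1)  evaluates to 12
program = ("let", "x", 3, ("mul", ("var", "x"), ("add", ("var", "x"), 1)))
print(evaluate(program))  # 12
```

A denotational treatment of the same language would instead map each phrase to a mathematical function from environments to values; the interpreter above corresponds to reading the rules operationally.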
https://en.wikipedia.org/wiki/Formal_semantics_of_programming_languages
An illegal opcode, also called an unimplemented operation, [1] unintended opcode [2] or undocumented instruction, is an instruction to a CPU that is not mentioned in any official documentation released by the CPU's designer or manufacturer, but which nevertheless has an effect. Illegal opcodes were common on older CPUs designed during the 1970s, such as the MOS Technology 6502, the Intel 8086, and the Zilog Z80. Unlike modern processors, those older processors had a very limited transistor budget, and thus to save space their designers often omitted circuitry to detect invalid opcodes and generate a trap to an error handler. The operation of many of these opcodes happens as a side effect of the wiring of transistors in the CPU, and usually combines functions of the CPU that were not intended to be combined. On old and modern processors alike, there are also instructions intentionally included in the processor by the manufacturer but not documented in any official specification. While most accidental illegal instructions have useless or even highly undesirable effects (such as crashing the computer), some can have useful functions in certain situations. Such instructions were sometimes exploited in computer games of the 1970s and 1980s to speed up certain time-critical sections. Another common use was in the ongoing battle between copy protection implementations and cracking. Here, they were a form of security through obscurity, and their secrecy usually did not last very long. A danger associated with the use of illegal instructions was that, given that the manufacturer does not guarantee their existence and function, they might disappear or behave differently with any change of the CPU internals or any new revision of the CPU, rendering programs that use them incompatible with newer revisions.
For example, a number of older Apple II games did not work correctly on the newer Apple IIc, because the latter used a newer CPU revision, the 65C02, that did away with illegal opcodes. Later CPUs, such as the 80186, 80286, 68000 and its descendants, do not have illegal opcodes that are widely known or used. Ideally, the CPU will behave in a well-defined way when it finds an unknown opcode in the instruction stream, such as triggering a certain exception or fault condition. The operating system's exception or fault handler will then usually terminate the application that caused the fault, unless the program had previously established its own exception /
https://en.wikipedia.org/wiki/Unintended_instructions
fault handler, in which case that handler would receive control. Another, less common way of handling illegal instructions is to define them to do nothing except take up time and space (equivalent to the CPU's official NOP instruction); this method is used by the TMS9900 and 65C02 processors, among others. Alternatively, unknown instructions can be emulated in software (e.g. LOADALL), or even "new" pseudo-instructions can be implemented. Some BIOSes, memory managers, and operating systems take advantage of this, for example to let V86 tasks communicate with the underlying system, i.e. BOP (from "BIOS operation") utilized by the Windows NTVDM.[3] In spite of Intel's guarantee against such instructions, research using techniques such as fuzzing uncovered a vast number of undocumented instructions in x86 processors as late as 2018.[4] Some of these instructions are shared across processor manufacturers, indicating that Intel and AMD are both aware of the instruction and its purpose, despite it not appearing in any official specification. Other instructions are specific to manufacturers or specific product lines. The purpose of the majority of x86 undocumented instructions is unknown. Today, the details of these instructions are mainly of interest for exact emulation of older systems.
https://en.wikipedia.org/wiki/Unintended_instructions
In digital electronics, a NAND (NOT AND) gate is a logic gate which produces an output that is false only if all its inputs are true; its output is thus the complement of that of an AND gate. A low (0) output results only if all the inputs to the gate are high (1); if any input is low (0), a high (1) output results. A NAND gate is made using transistors and junction diodes. By De Morgan's laws, a two-input NAND gate's logic may be expressed as \(\overline{A}\lor\overline{B}=\overline{A\cdot B}\), making a NAND gate equivalent to inverters followed by an OR gate. The NAND gate is significant because any Boolean function can be implemented by using a combination of NAND gates. This property is called "functional completeness"; it shares this property with the NOR gate. Digital systems employing certain logic circuits take advantage of NAND's functional completeness. NAND gates with two or more inputs are available as integrated circuits in transistor–transistor logic, CMOS, and other logic families. There are three symbols for NAND gates: the MIL/ANSI symbol, the IEC symbol and the deprecated DIN symbol sometimes found on old schematics. The ANSI symbol for the NAND gate is a standard AND gate with an inversion bubble connected. The function NAND(a1, a2, ..., an) is logically equivalent to NOT(a1 AND a2 AND ... AND an). One way of expressing A NAND B is \(\overline{A\land B}\), where the symbol \(\land\) signifies AND and the bar signifies the negation of the expression under it; in essence, simply \(\lnot(A\land B)\). The basic implementations can be understood from the image on the left below: if either of the switches S1 or S2 is open, the pull-up resistor R will set the output signal Q to 1 (high). If S1 and S2 are both closed, the pull-up resistor will
https://en.wikipedia.org/wiki/NAND_gate
be overridden by the switches, and the output will be 0 (low). In the depletion-load NMOS logic realization in the middle below, the switches are the transistors T2 and T3, and the transistor T1 fulfills the function of the pull-up resistor. In the CMOS realization on the right below, the switches are the n-type transistors T3 and T4, and the pull-up resistor is made up of the p-type transistors T1 and T2, which form the complement of transistors T3 and T4. In CMOS, NAND gates are more efficient than NOR gates. This is due to the faster charge mobility in n-MOSFETs compared to p-MOSFETs, so that the parallel connection of two p-MOSFETs (T1 and T2) realised in the NAND gate is more favourable than their series connection in the NOR gate. For this reason, NAND gates are generally preferred over NOR gates in CMOS circuits.[1] NAND gates are basic logic gates, and as such they are recognised in TTL and CMOS ICs. The standard, 4000 series, CMOS IC is the 4011, which includes four independent, two-input NAND gates. These devices are available from many semiconductor manufacturers, usually in both through-hole DIL and SOIC formats. Datasheets are readily available in most datasheet databases. The standard two-, three-, four- and eight-input NAND gates are available. The NAND gate has the property of functional completeness, which it shares with the NOR gate: any other logic function (AND, OR, etc.) can be implemented using only NAND gates.[2] An entire processor can be created using NAND gates alone. In TTL ICs using multiple-emitter transistors, it also requires fewer transistors than a NOR gate.
As NOR gates are also functionally complete, if no specific NAND gates are available, one can be made from NOR gates using NOR logic.[2]
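The functional completeness described above can be checked exhaustively on a two-input truth table. The following is an illustrative sketch (plain Python booleans standing in for gate hardware, with helper names invented here, not drawn from the article):

```python
def nand(a: bool, b: bool) -> bool:
    """NAND: output is false only when every input is true."""
    return not (a and b)

# Functional completeness: NOT, AND and OR built from NAND alone.
def not_(a):
    return nand(a, a)              # NOT A  ==  A NAND A

def and_(a, b):
    return not_(nand(a, b))        # A AND B  ==  NOT (A NAND B)

def or_(a, b):
    return nand(not_(a), not_(b))  # A OR B  ==  (NOT A) NAND (NOT B)

# Exhaustive check over the two-input truth table, including De Morgan's
# identity  (NOT A) OR (NOT B)  ==  NOT (A AND B).
for a in (False, True):
    for b in (False, True):
        assert nand(a, b) == ((not a) or (not b))
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```

Because the domain is only four input pairs, the loop is a complete proof of the three derived gates.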
https://en.wikipedia.org/wiki/NAND_gate
In natural language processing, a sentence embedding is a representation of a sentence as a vector of numbers which encodes meaningful semantic information.[1][2][3][4][5][6][7] State-of-the-art embeddings are based on the learned hidden-layer representation of dedicated sentence transformer models. BERT pioneered an approach involving the use of a dedicated token prepended to the beginning of each sentence inputted into the model; the final hidden state vector of this token encodes information about the sentence and can be fine-tuned for use in sentence classification tasks. In practice, however, BERT's sentence embedding with this token achieves poor performance, often worse than simply averaging non-contextual word embeddings. SBERT later achieved superior sentence embedding performance[8] by fine-tuning BERT's token embeddings through the use of a siamese neural network architecture on the SNLI dataset. Other approaches are loosely based on the idea of distributional semantics applied to sentences. Skip-Thought trains an encoder–decoder structure for the task of neighboring sentence prediction; this has been shown to achieve worse performance than approaches such as InferSent or SBERT. An alternative direction is to aggregate word embeddings, such as those returned by word2vec, into sentence embeddings. The most straightforward approach is to simply compute the average of word vectors, known as continuous bag-of-words (CBOW).[9] However, more elaborate solutions based on word vector quantization have also been proposed. One such approach is the vector of locally aggregated word embeddings (VLAWE),[10] which demonstrated performance improvements in downstream text classification tasks. In recent years, sentence embedding has seen a growing level of interest due to its applications in natural-language-queryable knowledge bases through the use of vector indexing for semantic search.
LangChain, for instance, utilizes sentence transformers for purposes of indexing documents. In particular, an index is generated by computing embeddings for chunks of documents and storing (document chunk, embedding) tuples. Then, given a query in natural language, the embedding for the query can be generated, and a top-k similarity search is run between the query embedding and the document chunk embeddings to retrieve the most relevant
https://en.wikipedia.org/wiki/Sentence_embedding
document chunks as context information for question answering tasks. This approach is also known formally as retrieval-augmented generation.[11] Though not as predominant as BERTScore, sentence embeddings are commonly used for sentence similarity evaluation, which sees common use in optimizing a large language model's generation parameters by comparing candidate sentences against reference sentences. By using the cosine similarity of the sentence embeddings of candidate and reference sentences as the evaluation function, a grid-search algorithm can be utilized to automate hyperparameter optimization.[citation needed] A way of testing sentence encodings is to apply them on the Sentences Involving Compositional Knowledge (SICK) corpus[12] for both entailment (SICK-E) and relatedness (SICK-R). In [13] the best results are obtained using a BiLSTM network trained on the Stanford Natural Language Inference (SNLI) corpus. The Pearson correlation coefficient for SICK-R is 0.885 and the result for SICK-E is 86.3. A slight improvement over previous scores is presented in [14]: SICK-R 0.888 and SICK-E 87.8, using a concatenation of bidirectional gated recurrent units.
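The chunk-retrieval step just described can be sketched in a few lines. The tiny hand-made vectors below stand in for real sentence-transformer output, and `top_k` is a hypothetical helper written for illustration, not LangChain's actual API:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def top_k(query_emb, chunks, k=2):
    """Rank (chunk, embedding) tuples by cosine similarity to the query."""
    ranked = sorted(chunks, key=lambda ce: cosine(query_emb, ce[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# Toy 3-d "embeddings" standing in for sentence-transformer output.
index = [
    ("chunk about cats",    (0.9, 0.1, 0.0)),
    ("chunk about dogs",    (0.8, 0.2, 0.1)),
    ("chunk about tax law", (0.0, 0.1, 0.9)),
]

print(top_k((1.0, 0.0, 0.0), index, k=1))  # -> ['chunk about cats']
```

A production system would replace the linear scan in `top_k` with an approximate-nearest-neighbour vector index, but the ranking criterion is the same.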
https://en.wikipedia.org/wiki/Sentence_embedding
This is a list of notable applications (apps) that run on the Android platform which meet guidelines for free software and open-source software. There are a number of third-party maintained lists of open-source Android applications, including:
https://en.wikipedia.org/wiki/List_of_free_and_open-source_Android_applications
Advanced planning and scheduling (APS, also known as advanced manufacturing) refers to a manufacturing management process by which raw materials and production capacity are optimally allocated to meet demand.[1] APS is especially well-suited to environments where simpler planning methods cannot adequately address complex trade-offs between competing priorities. Production scheduling is intrinsically very difficult due to the (approximately) factorial dependence of the size of the solution space on the number of items/products to be manufactured. Traditional production planning and scheduling systems (such as manufacturing resource planning) use a stepwise procedure to allocate material and production capacity. This approach is simple but cumbersome, and does not readily adapt to changes in demand, resource capacity or material availability. Materials and capacity are planned separately, and many systems do not consider material or capacity constraints, leading to infeasible plans. However, attempts to change to the new system have not always been successful, which has called for the combination of management philosophy with manufacturing. Unlike previous systems, APS simultaneously plans and schedules production based on available materials, labor and plant capacity. APS has commonly been applied where one or more of several such conditions are present. Advanced planning and scheduling software enables manufacturing scheduling and advanced scheduling optimization within these environments.[1]
https://en.wikipedia.org/wiki/Advanced_planning_and_scheduling
Many countries around the world maintain marines and naval infantry military units. Even if only a few nations have the capabilities to launch major amphibious assault operations, most marines and naval infantry forces are able to carry out limited amphibious landings, riverine and coastal warfare tasks. The list also includes army units specifically trained to operate as marines or naval infantry forces, and navy units with specialized naval security and boarding tasks. The marine fusiliers regiments are the marine infantry regiments of the Algerian Navy, and they are specialised in amphibious warfare.[1] The RFM have about 7,000 soldiers in their ranks. Within the Algerian Navy there are 8 regiments of marine fusiliers. Future marine fusiliers and marine commandos undergo specialist training. The IDF's 35th Paratroopers Brigade "Flying Serpent" is a paratroopers brigade that also exercises sea-landing capabilities. The Italian Army's cavalry brigade "Pozzuolo del Friuli" forms, with the Italian Navy's 3rd Naval Division and San Marco Marine Brigade, the Italian military's national sea projection capability (Forza di Proiezione dal Mare). Additionally, the 17th Anti-Aircraft Artillery Regiment "Sforzesca" provides air-defense assets.
https://en.wikipedia.org/wiki/List_of_marines_and_similar_forces
Derivative-free optimization (sometimes referred to as blackbox optimization) is a discipline in mathematical optimization that does not use derivative information in the classical sense to find optimal solutions: sometimes information about the derivative of the objective function f is unavailable, unreliable or impractical to obtain. For example, f might be non-smooth, or time-consuming to evaluate, or in some way noisy, so that methods that rely on derivatives or approximate them via finite differences are of little use. The problem of finding optimal points in such situations is referred to as derivative-free optimization, and algorithms that do not use derivatives or finite differences are called derivative-free algorithms.[1] The problem to be solved is to numerically optimize an objective function \(f\colon A\to\mathbb{R}\) for some set \(A\) (usually \(A\subset\mathbb{R}^{n}\)), i.e. find \(x_{0}\in A\) such that, without loss of generality, \(f(x_{0})\leq f(x)\) for all \(x\in A\). When applicable, a common approach is to iteratively improve a parameter guess by local hill-climbing in the objective function landscape. Derivative-based algorithms use derivative information of \(f\) to find a good search direction, since, for example, the gradient gives the direction of steepest ascent. Derivative-based optimization is efficient at finding local optima for continuous-domain, smooth, single-modal problems. However, such methods can have problems when, for example, \(A\) is disconnected or (mixed-)integer, or when \(f\) is expensive to evaluate, non-smooth, or noisy, so that (numeric approximations of) derivatives do not provide useful information.
A slightly different problem arises when \(f\) is multi-modal, in which case local derivative-based methods only give local optima but might miss the global one. In derivative-free optimization, various methods are employed to address these challenges using only function values of \(f\), but no derivatives. Some of
https://en.wikipedia.org/wiki/Derivative-free_optimization
these methods can be proved to discover optima, but some are rather metaheuristic, since the problems are in general more difficult to solve compared to convex optimization. For these, the ambition is rather to efficiently find "good" parameter values which can be near-optimal given enough resources, but optimality guarantees can typically not be given. One should keep in mind that the challenges are diverse, so one can usually not use one algorithm for all kinds of problems. Several notable derivative-free optimization algorithms exist. There are benchmarks for blackbox optimization algorithms; see e.g. the bbob-biobj tests.[2]
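One of the simplest derivative-free methods is pure random search, which compares only function values and never touches a gradient. The sketch below (function name, bounds and iteration count are all illustrative choices, not from the article) minimizes a smooth test function treated as a black box:

```python
import random

def random_search(f, dim, lo=-5.0, hi=5.0, iters=2000, seed=0):
    """Derivative-free random search: keep the best of many sampled points."""
    rng = random.Random(seed)
    best_x = [rng.uniform(lo, hi) for _ in range(dim)]
    best_f = f(best_x)
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        fx = f(x)
        if fx < best_f:   # only function values are compared; no derivatives used
            best_x, best_f = x, fx
    return best_x, best_f

# Sphere function f(x) = sum(x_i^2): smooth, but treated here as a black box.
sphere = lambda x: sum(xi * xi for xi in x)

x, fx = random_search(sphere, dim=2)
```

This sampler carries no optimality guarantee, matching the text above: given enough evaluations it tends to find "good" near-optimal points, and more structured methods (pattern search, trust-region model-based methods, evolution strategies) refine the same values-only idea.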
https://en.wikipedia.org/wiki/Derivative-free_optimization
Dependency hell is a colloquial term for the frustration of some software users who have installed software packages which have dependencies on specific versions of other software packages.[1] The dependency issue arises when several packages have dependencies on the same shared packages or libraries, but they depend on different and incompatible versions of the shared packages. If the shared package or library can only be installed in a single version, the user may need to address the problem by obtaining newer or older versions of the dependent packages. This, in turn, may break other dependencies and push the problem to another set of packages. Dependency hell takes several forms. On specific computing platforms, "dependency hell" often goes by a local specific name, generally the name of components.
https://en.wikipedia.org/wiki/Dependency_hell
Stratis is a user-space configuration daemon that configures and monitors existing components from Linux's underlying storage stack, logical volume management (LVM) and the XFS filesystem, via D-Bus. Stratis is not a user-level filesystem like the Filesystem in Userspace (FUSE) system. The Stratis configuration daemon was originally developed by Red Hat to have feature parity with ZFS and Btrfs. The hope was that, with the configuration daemon living in userland, it would reach maturity more quickly than the years of kernel-level development behind the ZFS and Btrfs filesystems.[2][3] It is built upon the enterprise-tested components LVM and XFS, with over a decade of enterprise deployments, and the lessons learned from System Storage Manager in Red Hat Enterprise Linux 7.[4] Stratis provides ZFS/Btrfs-style features by integrating layers of existing technology: Linux's device mapper subsystem and the XFS filesystem. The stratisd daemon manages collections of block devices and provides a D-Bus API. The stratis-cli DNF package provides a command-line tool, stratis, which itself uses the D-Bus API to communicate with stratisd.
https://en.wikipedia.org/wiki/Stratis_(configuration_daemon)
Simics is a full-system simulator or virtual platform used to run unchanged production binaries of the target hardware. Simics was originally developed by the Swedish Institute of Computer Science (SICS), and then spun off to Virtutech for commercial development in 1998. Virtutech was acquired by Intel in 2010. Currently, Simics is provided by Intel in a public release[1] and sold commercially by Wind River Systems, which was in the past a subsidiary of Intel. Simics contains both instruction set simulators and hardware models, and is or has been used to simulate systems such as Alpha, ARM (32- and 64-bit), IA-64, MIPS (32- and 64-bit), MSP430, PowerPC (32- and 64-bit), RISC-V (32- and 64-bit), SPARC-V8 and V9, and x86 and x86-64 CPUs. Many different operating systems have been run on various simulated virtual platforms, including Linux, MS-DOS, Windows, VxWorks, OSE, Solaris, FreeBSD, QNX, RTEMS, UEFI, and Zephyr. The NetBSD amd64 port was initially developed using Simics before the public release of the chip.[2] The purpose of simulation in Simics is often to develop software for a particular type of hardware without requiring access to that precise hardware, using Simics as a virtual platform. This can be applied both to pre-release and pre-silicon software development for future hardware, as well as to existing hardware. Intel uses Simics to provide its ecosystem with access to future platforms months or years ahead of the hardware launch.[3] The current version of Simics is 6, which was released publicly in 2019.[4][5] Simics runs on 64-bit x86-64 machines running Microsoft Windows and Linux (32-bit support was dropped with the release of Simics 5, since 64-bit provides significant performance advantages and is universally available on current hardware). The previous version, Simics 5, was released in 2015.[6] Simics has the ability to execute a system in forward and reverse direction.
[7] Reverse debugging can illuminate how an exceptional condition or bug occurred. When executing an OS such as Linux in reverse using Simics, previously deleted files reappear when the
https://en.wikipedia.org/wiki/Simics
deletion point is passed in reverse, and scrolling and other graphical display and console updates occur backwards as well. Simics is built for high-performance execution of full-system models, and uses both binary translation and hardware-assisted virtualization to increase simulation speed. It is natively multithreaded and can simulate multiple target (or guest) processors and boards using multiple host threads. It has been used to run simulations containing hundreds of target processors.
https://en.wikipedia.org/wiki/Simics
FreeOTP is a free and open-source authenticator by Red Hat. It implements multi-factor authentication using HOTP and TOTP. Tokens can be added by scanning a QR code or by manually entering the token configuration. It is licensed under the Apache 2.0 license, and supports Android and iOS.[4][5][6]
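The token algorithms FreeOTP implements are the standard HOTP (RFC 4226) and TOTP (RFC 6238). A stdlib-only sketch of how such codes are computed in general (this shows the generic algorithm, not FreeOTP's own source code):

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    key = base64.b32decode(secret_b32, casefold=True)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def totp(secret_b32: str, period: int = 30, digits: int = 6, now=None) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from Unix time."""
    t = int((time.time() if now is None else now) // period)
    return hotp(secret_b32, t, digits)

# RFC 4226's test secret "12345678901234567890", base32-encoded.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(hotp(SECRET, 0))  # RFC 4226 Appendix D lists 755224 for counter 0
```

Scanning a QR code in an authenticator app essentially just delivers the base32 secret plus parameters (digits, period, hash) to this computation.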
https://en.wikipedia.org/wiki/FreeOTP
In statistics, Grubbs's test or the Grubbs test (named after Frank E. Grubbs, who published the test in 1950[1]), also known as the maximum normalized residual test or extreme studentized deviate test, is a test used to detect outliers in a univariate data set assumed to come from a normally distributed population. Grubbs's test is based on the assumption of normality; that is, one should first verify that the data can be reasonably approximated by a normal distribution before applying the Grubbs test.[2] Grubbs's test detects one outlier at a time. This outlier is expunged from the dataset and the test is iterated until no outliers are detected. However, multiple iterations change the probabilities of detection, and the test should not be used for sample sizes of six or fewer, since it frequently tags most of the points as outliers.[3] Grubbs's test is defined for a pair of hypotheses (no outliers in the data set versus exactly one outlier). The Grubbs test statistic is defined as

\[ G = \frac{\max_{i=1,\ldots,N} \left| y_{i} - \bar{y} \right|}{s}, \]

with \(\bar{y}\) and \(s\) denoting the sample mean and standard deviation, respectively. The Grubbs test statistic is the largest absolute deviation from the sample mean in units of the sample standard deviation. This is the two-sided test, for which the hypothesis of no outliers is rejected at significance level α if

\[ G > \frac{N-1}{\sqrt{N}} \sqrt{\frac{t_{\alpha/(2N),\,N-2}^{2}}{N-2+t_{\alpha/(2N),\,N-2}^{2}}}, \]

with \(t_{\alpha/(2N),\,N-2}\) denoting the upper critical value of the t-distribution with N − 2 degrees of freedom and a significance level of α/(2N). Grubbs's test can also be defined as a one-sided test, replacing α/(2N) with α/N. To test whether the minimum value is an outlier, the test statistic is \(G = (\bar{y} - y_{\min})/s\), with \(y_{\min}\) denoting the minimum value; to test whether the maximum value is an outlier, the test statistic is \(G = (y_{\max} - \bar{y})/s\), with \(y_{\max}\) denoting the maximum value. Several graphical techniques can be used to detect outliers: a simple run sequence plot, a box plot, or a histogram should show any obviously outlying points, and a normal probability plot may also be useful. This article incorporates public domain
https://en.wikipedia.org/wiki/Grubbs%27s_test
material from the National Institute of Standards and Technology.
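The two-sided statistic is straightforward to compute directly. A stdlib sketch with made-up sample values (the suspiciously large last point pushes G close to its theoretical maximum of \((N-1)/\sqrt{N}\), which is about 2.475 for N = 8):

```python
import statistics

def grubbs_statistic(data):
    """Two-sided Grubbs statistic: G = max |y_i - ybar| / s."""
    ybar = statistics.mean(data)
    s = statistics.stdev(data)   # sample standard deviation, (n - 1) denominator
    return max(abs(y - ybar) for y in data) / s

# Hypothetical measurements with one suspect point at the end.
sample = [199.31, 199.53, 200.19, 200.82, 201.92, 201.95, 202.18, 245.57]
G = grubbs_statistic(sample)     # roughly 2.47 for this data
```

The resulting G is then compared against the critical value from the t-distribution given above; if G exceeds it, the most deviant point is flagged as an outlier.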
https://en.wikipedia.org/wiki/Grubbs%27s_test
A software design description (a.k.a. software design document or SDD; just design document; also software design specification) is a representation of a software design that is to be used for recording design information, addressing various design concerns, and communicating that information to the design's stakeholders.[1] An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. Practically, the description is required to coordinate a large team under a single vision, needs to be a stable reference, and outlines all parts of the software and how they will work. Such design mediums enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work. IEEE 1016-2009, titled IEEE Standard for Information Technology — Systems Design — Software Design Descriptions,[2] is an IEEE standard that specifies "the required information content and organization" for an SDD.[3] IEEE 1016 does not specify the medium of an SDD; it is "applicable to automated databases and design description languages but can be used for paper documents and other means of descriptions."[4] The 2009 edition was a major revision to IEEE 1016-1998, elevating it from recommended practice to full standard. This revision was modeled after IEEE Std 1471-2000, Recommended Practice for Architectural Description of Software-Intensive Systems, extending the concepts of view, viewpoint, stakeholder, and concern from architecture description to support documentation of high-level and detailed design and construction of software. [IEEE 1016, Introduction] Following the IEEE 1016 conceptual model, an SDD is organized into one or more design views. Each design view follows the conventions of its design viewpoint.
IEEE 1016 defines several design viewpoints for use.[5] In addition, users of the standard are not limited to these viewpoints but may define their own.[6] IEEE 1016-2009 is currently listed as "inactive-reserved".[7]
https://en.wikipedia.org/wiki/Design_document
The International Conference on Automated Planning and Scheduling (ICAPS) is a leading international academic conference in automated planning and scheduling, held annually for researchers and practitioners in planning and scheduling.[2][3][4] ICAPS is supported by the National Science Foundation, the journal Artificial Intelligence, and other supporters.[5] ICAPS conducts the International Planning Competition (IPC), a competition scheduled every few years that empirically evaluates state-of-the-art planning systems on a collection of benchmark problems.[6] The Planning Domain Definition Language (PDDL) was developed mainly to make the 1998/2000 International Planning Competition possible, and then evolved with each competition. PDDL is an attempt to standardize artificial intelligence (AI) planning languages.[7][8] PDDL was first developed by Drew McDermott and his colleagues in 1998, inspired by STRIPS, ADL, and other sources. The ICAPS conferences began in 2003 as a merger of two biennial conferences, the International Conference on Artificial Intelligence Planning and Scheduling (AIPS) and the European Conference on Planning (ECP).[1]
https://en.wikipedia.org/wiki/International_Conference_on_Automated_Planning_and_Scheduling
Least slack time (LST) scheduling is an algorithm for dynamic priority scheduling. It assigns priorities to processes based on their slack time: the amount of time a job could wait and still meet its deadline if it were started now. This algorithm is also known as least laxity first. Its most common use is in embedded systems, especially those with multiple processors. It imposes the simple constraint that each process on each available processor possesses the same run time, and that individual processes do not have an affinity to a certain processor. This is what lends it a suitability to embedded systems. This scheduling algorithm first selects those processes that have the smallest "slack time". Slack time is defined as the temporal difference between the deadline, the ready time and the run time. More formally, the slack time \(s\) for a process is defined as \(s = (d - t) - c'\), where \(d\) is the process deadline, \(t\) is the real time since the cycle start, and \(c'\) is the remaining computation time. In real-time scheduling algorithms for periodic jobs, an acceptance test is needed before accepting a sporadic job with a hard deadline. One of the simplest acceptance tests for a sporadic job is calculating the amount of slack time between the release time and deadline of the job. LST scheduling is most useful in systems comprising mainly aperiodic tasks, because no prior assumptions are made on the events' rate of occurrence. The main weakness of LST is that it does not look ahead, and works only on the current system state. Thus, during a brief overload of system resources, LST can be suboptimal. It will also be suboptimal when used with uninterruptible processes. However, like earliest deadline first, and unlike rate-monotonic scheduling, this algorithm can be used for processor utilization up to 100%.
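The slack formula \(s = (d - t) - c'\) translates directly into the scheduler's selection step. A toy sketch (the job records and field names are invented for illustration):

```python
def slack(deadline: float, t: float, remaining: float) -> float:
    """s = (d - t) - c': how long the job could still wait and meet its deadline."""
    return (deadline - t) - remaining

def pick_next(jobs, t):
    """Least-slack-first: run the ready job with the smallest slack at time t."""
    return min(jobs, key=lambda j: slack(j["d"], t, j["c"]))

jobs = [
    {"name": "A", "d": 10.0, "c": 3.0},   # slack at t=0: (10 - 0) - 3 = 7
    {"name": "B", "d": 6.0,  "c": 4.0},   # slack at t=0: (6  - 0) - 4 = 2
    {"name": "C", "d": 12.0, "c": 5.0},   # slack at t=0: (12 - 0) - 5 = 7
]

print(pick_next(jobs, t=0.0)["name"])  # -> B
```

In a real scheduler this selection is re-evaluated as \(t\) advances and \(c'\) shrinks for the running job, which is exactly why LST reacts to the current system state but cannot look ahead.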
https://en.wikipedia.org/wiki/Least_slack_time_scheduling
In intelligent networks (IN) and cellular networks, the service layer is a conceptual layer within a network service provider architecture. It aims at providing middleware that serves third-party value-added services and applications at a higher application layer. The service layer provides capability servers owned by a telecommunication network service provider, accessed through open and secure application programming interfaces (APIs) by application-layer servers owned by third-party content providers. The service layer also provides an interface to core networks at a lower resource layer.[1] The lower layers may also be named the control layer and transport layer (the transport layer is also referred to as the access layer in some architectures).[citation needed] The concept of a service layer is used in contexts such as intelligent networks (IN), WAP, 3G and the IP Multimedia Subsystem (IMS). It is defined in the 3GPP Open Services Architecture (OSA) model, which reused the idea of the Parlay API for third-party servers. In software design, for example service-oriented architecture, the concept of a service layer has a different meaning. The service layer of an IMS architecture provides multimedia services to the overall IMS network. This layer contains network elements which connect to the Serving-CSCF (Call Session Control Function) using the IP Multimedia Subsystem Service Control interface (ISC).[2] The ISC interface uses the SIP signalling protocol. The network elements contained within the service layer are generically referred to as "service platforms"; however, the 3GPP specification (3GPP TS 23.228 v8.7.0) defines several types of service platforms. The SIP application server (AS) performs the same function as a telephony application server in a pre-IMS network; however, it is specifically tailored to support the SIP signalling protocol for use in an IMS network.
An OSA service capability server acts as a secure gateway between the IMS network and an application which runs upon the Open Services Architecture (this is typically a SIP-to-Parlay gateway). The IM-SSF (IP Multimedia Service Switching Function) acts as a gateway between the IMS network and application servers using other telecommunication signalling standards such as INAP and CAMEL. In service-oriented architecture (SOA), the service layer is the third layer in a five-abstraction-layer model. The model consists of the object layer, component layer, service layer, process layer, and enterprise layer. [3] The service layer can be considered as a bridge between the higher and lower layers, and is characterized by a number of services that carry out individual business functions.
https://en.wikipedia.org/wiki/Service_layer
In finance, a growth stock is a stock of a company that generates substantial and sustainable positive cash flow and whose revenues and earnings are expected to increase at a faster rate than the average company within the same industry. [1] A growth company typically has some sort of competitive advantage (a new product, a breakthrough patent, overseas expansion) that allows it to fend off competitors. Growth stocks usually pay smaller dividends, as the companies typically reinvest most retained earnings in capital-intensive projects. Analysts compute return on equity (ROE) by dividing a company's net income by average common equity. To be classified as a growth stock, analysts generally expect companies to achieve a 15 percent or higher return on equity. [2] CAN SLIM is a method which identifies growth stocks; it was created by William O'Neil, a stockbroker and publisher of Investor's Business Daily. [3] In academic finance, the Fama–French three-factor model relies on book-to-market ratios (B/M ratios) to identify growth vs. value stocks. [4] Some advisors suggest investing half the portfolio using the value approach and the other half using the growth approach. [5] The definition of a "growth stock" differs among some well-known investors. For example, Warren Buffett does not differentiate between value and growth investing. In his 1992 letter to shareholders, he stated that many analysts consider growth and value investing to be opposites, which he characterized as "fuzzy thinking." [6] Furthermore, Buffett cautions investors against overpaying for growth stocks, noting that growth projections are often overly optimistic. Instead, he prioritizes companies with a durable competitive advantage and a high return on capital, rather than focusing solely on revenue or earnings growth. [7] Peter Lynch classifies stocks into four categories: "slow growers," "stalwarts," "fast growers," and "turnarounds."
" [ 8 ] he is known for focusing on what he calls " fast growers " referring to companies that grow at rates of 20 % or higher. however, like buffett, lynch also believes in not overpaying for stocks emphasizing that investors should use their " edge " to find companies with high earnings potential that are not yet overvalued. [ 9 ] he recommends investing in companies with p / e ratios equal to or lower than their growth rates and suggests holding these
https://en.wikipedia.org/wiki/Growth_stock
like buffett, lynch also believes in not overpaying for stocks emphasizing that investors should use their " edge " to find companies with high earnings potential that are not yet overvalued. [ 9 ] he recommends investing in companies with p / e ratios equal to or lower than their growth rates and suggests holding these investments for three to five years. [ 8 ] he is often credited for popularizing thepeg ratioto analyze growth stocks. [ 10 ] this finance - related article is astub. you can help wikipedia byexpanding it.
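Lynch's rule of thumb (a P/E ratio no higher than the growth rate) is equivalent to a PEG ratio of at most 1. A minimal sketch with hypothetical numbers, not investment guidance:

```python
# Hypothetical illustration of the PEG ratio: P/E divided by the
# earnings growth rate in percent. A PEG of 1.0 or less means the
# P/E does not exceed the growth rate (Lynch's rule of thumb).

def peg_ratio(price, eps, growth_pct):
    pe = price / eps           # price-to-earnings ratio
    return pe / growth_pct     # PEG: P/E per point of expected growth

# A stock at $50 with $2 EPS (P/E = 25) growing earnings 25% per year:
print(peg_ratio(50.0, 2.0, 25.0))   # 1.0 -> P/E equals the growth rate
```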
https://en.wikipedia.org/wiki/Growth_stock
The inode pointer structure is a structure adopted by the inode of a file in the Version 6 Unix file system, Version 7 Unix file system, and Unix File System (UFS) to list the addresses of a file's data blocks. It is also adopted by many related file systems, including the ext3 file system, popular with Linux users. In the file system used in Version 6 Unix, an inode contains eight pointers. [1] In the file system used in Version 7 Unix, an inode contains thirteen pointers. [2] In the Unix File System, an inode contains fifteen pointers. [3] The levels of indirection indicate the number of pointers that must be followed before reaching actual file data. The structure is partially illustrated in the diagram accompanying this article. The structure allows inodes to describe very large files in file systems with a fixed logical block size. Central to the mechanism is that blocks of addresses (also called indirect blocks) are only allocated as needed. For example, in the Unix File System, a 12-block file would be described using just the inode, because its blocks fit into the number of direct pointers available. However, a 13-block file needs an indirect block to contain the thirteenth address. The inode pointer structure not only allows files to be easily allocated to non-contiguous blocks, it also allows the data at a particular location inside a file to be easily located. This is possible because the logical block size is fixed. For example, if each block is 8 KB, file data at 112 KB to 120 KB would be pointed to by the third pointer of the first indirect block (assuming twelve direct pointers in the inode pointer structure). Unlike inodes, which are fixed in number and allocated in a special part of the file system, the indirect blocks may be of any number and are allocated in the same part of the file system as data blocks. The number of pointers in the indirect blocks depends on the block size and the size of block pointers.
Example: with a 512-byte block size and 4-byte block pointers, each indirect block can hold 128 (512 / 4) pointers.
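The lookup described above (which pointer covers a given file offset) can be sketched directly. This is a simplified model, assuming twelve direct pointers and showing only the single-indirect level; double and triple indirection follow the same pattern:

```python
# Sketch of locating a file offset in a UFS-style inode with 12 direct
# pointers followed by a single-indirect block (double/triple levels
# omitted for brevity). Returns where the block address for that
# offset would be found.

def locate(offset, block_size, ptr_size, n_direct=12):
    block = offset // block_size          # logical block index in the file
    if block < n_direct:
        return ("direct", block)          # address is in the inode itself
    ptrs_per_block = block_size // ptr_size   # e.g. 512 // 4 = 128
    block -= n_direct
    if block < ptrs_per_block:
        return ("single-indirect", block)     # index within the indirect block
    raise NotImplementedError("double/triple indirection not sketched")

# The article's example: with 8 KB blocks, data at 112 KB falls on logical
# block 14, i.e. the third pointer (index 2) of the first indirect block.
print(locate(112 * 1024, 8 * 1024, 4))   # ('single-indirect', 2)
```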
https://en.wikipedia.org/wiki/Inode_pointer_structure
NNI (Neural Network Intelligence) is a free and open-source AutoML toolkit developed by Microsoft. [3][4] It is used to automate feature engineering, model compression, neural architecture search, and hyperparameter tuning. [5][6] The source code is licensed under the MIT License and available on GitHub. [7] This artificial intelligence-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Neural_Network_Intelligence
Waikato Environment for Knowledge Analysis (Weka) is a collection of machine learning and data analysis free software licensed under the GNU General Public License. It was developed at the University of Waikato, New Zealand, and is the companion software to the book "Data Mining: Practical Machine Learning Tools and Techniques". [1] Weka contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to these functions. [1] The original non-Java version of Weka was a Tcl/Tk front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains, [2][3] but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Weka supports several standard data mining tasks, more specifically: data preprocessing, clustering, classification, regression, visualization, and feature selection. Input to Weka is expected to be formatted according to the attribute-relational file format, with the filename bearing the .arff extension. All of Weka's techniques are predicated on the assumption that the data is available as one flat file or relation, where each data point is described by a fixed number of attributes (normally numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. Weka provides access to deep learning with Deeplearning4j.
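The flat-file layout described above can be illustrated with a minimal, hypothetical ARFF file: a relation declaration, one declaration per attribute (numeric or nominal), and one comma-separated line per data point.

```
% Hypothetical minimal ARFF file: a relation, a fixed set of
% attributes, and one instance per line under @data.
@relation weather

@attribute temperature numeric
@attribute humidity    numeric
@attribute play        {yes, no}

@data
85, 85, no
68, 80, yes
```

Every instance must supply a value for each declared attribute, which is exactly the "fixed number of attributes per data point" assumption the text describes.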
[4] It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for processing using Weka. [5] Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling. In version 3.7.2, a package manager was added to allow easier installation of extension packages. [6] Some functionality that used to be included with Weka prior to this version has since been moved into such extension packages, but this change also makes it easier for others to contribute extensions to Weka and to maintain the software, as this modular architecture allows independent updates of the Weka core and individual extensions.
https://en.wikipedia.org/wiki/Weka_(machine_learning)
ISO 11784 and ISO 11785 are international standards that regulate the radio-frequency identification (RFID) of animals, which is usually accomplished by implanting, introducing, or attaching a transponder containing a microchip to an animal. RF identification of animals requires that the bits transmitted by a transponder are interpretable by a transceiver. Usually, the bit stream contains data bits, defining the identification code, and a number of bits to ensure correct reception of the data bits. ISO 11784 specifies the structure of the identification code. ISO 11785 specifies how a transponder is activated and how the stored information is transferred to a transceiver (the characteristics of the transmission protocols between transponder and transceiver). These standards are updated and expanded in ISO 14223, which regulates "advanced" transponders for animals, and ISO 24631, which regulates testing procedures for conformance with ISO 11784 and 11785 as well as performance. The technical concept of animal identification described is based on the principle of radio-frequency identification (RFID). ISO 11785 is applicable in connection with ISO 11784, which describes the structure and the information content of the codes stored in the transponder. The International Organization for Standardization (ISO) draws attention to the fact that compliance with clause 6 and annex A of this international standard may involve the use of patents concerning methods of transmission. The carrier frequency for animal identification is 134.2 kHz. There are two ISO-approved protocols in use to communicate between tag and reader. FDX-A, which uses the 125 kHz frequency and a 10-bit code, is not ISO compliant. In DBP, a 1 is encoded as 00 or 11 and a 0 is encoded as 01 or 10, such that there is at least one transition per bit (so 11 is encoded as 0011 and not as 0000 or 1111). ISO 11784:1996, Radio-frequency identification of animals - Code structure: the first three digits of the ID are the manufacturer code.
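The DBP rule described above can be sketched as a small encoder. This is an illustrative model, assuming the line level toggles at every bit boundary and toggles again mid-bit to encode a 0; the initial level is a free parameter:

```python
# Sketch of the differential bi-phase (DBP) coding described above:
# the level toggles at every bit boundary, and toggles again mid-bit
# for a 0, so each data bit produces at least one transition. A 1 comes
# out as "00" or "11", a 0 as "01" or "10".

def dbp_encode(bits, level=1):
    out = []
    for b in bits:
        level ^= 1            # forced transition at every bit boundary
        out.append(str(level))
        if b == 0:
            level ^= 1        # extra mid-bit transition encodes a 0
        out.append(str(level))
    return "".join(out)

print(dbp_encode([1, 1]))   # 0011, never 0000 or 1111
print(dbp_encode([0, 1]))   # 0100
```

The forced boundary transition is what keeps the encoding self-clocking: a receiver can recover bit timing from the transitions alone.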
With half-duplex (HDX), the tag must store sufficient energy while the receiver's activating field is turned on to allow it to transmit when the activating field is switched off. This makes the receiver simpler, as it is not necessary to pick up the weak signal from the tag amid the strong activating field. The disadvantage is that the HDX tag cannot transmit while the activating field is turned on. With full duplex (FDX), the tag can transmit immediately when in the presence of the receiver's activating field. The advantage is that the FDX tag can transmit continuously and can therefore be read more quickly and more often. In FDX (at least), after the 11 start bits, a framing bit ('1') is sent after every 8 data bits. Compliance with the standards may require use of techniques which are covered by (or claimed to be covered by) certain patents. ISO takes no position concerning the evidence, validity, and scope of these patent rights. Some patent holders have assured ISO that they will not exert their patent rights concerning FDX-B technology. [citation needed] Other patent holders have assured ISO that they are willing to negotiate licenses under reasonable and non-discriminatory terms and conditions with applicants throughout the world. In this respect, the statements of the holders of these patent rights are registered with ISO. Attention is moreover drawn to the possibility that some of the elements of this international standard may be the subject of patent rights other than those identified above. ISO shall not be held responsible for identifying any or all such patent rights. In that connection, additional correspondence was received from two other companies not willing to forward a pertinent declaration in accordance with the current ISO directives.
https://en.wikipedia.org/wiki/ISO_11784_and_ISO_11785
In computer science, bounds-checking elimination is a compiler optimization useful in programming languages or runtime systems that enforce bounds checking, the practice of checking every index into an array to verify that the index is within the defined valid range of indexes. [1] Its goal is to detect which of these indexing operations do not need to be validated at runtime, and to eliminate those checks. One common example is accessing an array element, modifying it, and storing the modified value in the same array at the same location. Normally, this example would result in a bounds check when the element is read from the array and a second bounds check when the modified element is stored using the same array index. Bounds-checking elimination could eliminate the second check if the compiler or runtime can determine that neither the array size nor the index could change between the two array operations. Another example occurs when a programmer loops over the elements of the array, and the loop condition guarantees that the index is within the bounds of the array. It may be difficult to detect that the programmer's manual check renders the automatic check redundant. However, it may still be possible for the compiler or runtime to perform proper bounds-checking elimination in this case. One technique for bounds-checking elimination is to use a typed static single assignment form representation and, for each array, to create a new type representing a safe index for that particular array. The first use of a value as an array index results in a runtime type cast (and appropriate check), but subsequently the safe index value can be used without a type cast, without sacrificing correctness or safety. Just-in-time compiled languages such as Java and C# often check indexes at runtime before accessing arrays. Some just-in-time compilers such as HotSpot are able to eliminate some of these checks if they discover that the index is always within the correct range, or if an earlier check would have already thrown an exception. [2][3]
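The read-modify-write example can be made concrete by writing the runtime's bounds check out as an explicit function, so the redundant second check is visible. This is a model of what a compiler reasons about, not how any particular JIT implements it:

```python
# Model of the read-modify-write example: the bounds check a runtime
# would perform is written out explicitly. Because neither the array
# length nor the index changes between the read and the write, the
# second check is provably redundant and a compiler could drop it.

def checked_get(a, i):
    if not 0 <= i < len(a):          # runtime bounds check
        raise IndexError(i)
    return a[i]

def checked_set(a, i, v):
    if not 0 <= i < len(a):          # redundant if checked_get(a, i)
        raise IndexError(i)          # already succeeded and a, i are
    a[i] = v                         # unchanged in between

def increment(a, i):
    v = checked_get(a, i)            # first (necessary) check
    checked_set(a, i, v + 1)         # second check: eliminable

xs = [10, 20, 30]
increment(xs, 1)
print(xs)   # [10, 21, 30]
```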
https://en.wikipedia.org/wiki/Bounds-checking_elimination
Compound-term processing, in information retrieval, is search-result matching on the basis of compound terms. Compound terms are built by combining two or more simple terms; for example, "triple" is a single-word term, but "triple heart bypass" is a compound term. Compound-term processing is a new approach to an old problem: how can one improve the relevance of search results while maintaining ease of use? Using this technique, a search for survival rates following a triple heart bypass in elderly people will locate documents about this topic even if this precise phrase is not contained in any document. This can be performed by a concept search, which itself uses compound-term processing. This will extract the key concepts automatically (in this case "survival rates", "triple heart bypass" and "elderly people") and use these concepts to select the most relevant documents. In August 2003, Concept Searching Limited introduced the idea of using statistical compound-term processing. [1] CLAMOUR is a European collaborative project which aims to find a better way to classify when collecting and disseminating industrial information and statistics. CLAMOUR appears to use a linguistic approach, rather than one based on statistical modelling. [2] Techniques for probabilistic weighting of single-word terms date back to at least 1976 in the landmark publication by Stephen E. Robertson and Karen Spärck Jones. [3] Robertson stated that the assumption of word independence is not justified and exists as a matter of mathematical convenience. His objection to term independence is not a new idea, dating back to at least 1964, when H. H. Williams stated that "[t]he assumption of independence of words in a document is usually made as a matter of mathematical convenience". [4] In 2004, Anna Lynn Patterson filed patents on "phrase-based searching in an information retrieval system", [5] to which Google subsequently acquired the rights.
[6] Statistical compound-term processing is more adaptable than the process described by Patterson. Her process is targeted at searching the World Wide Web, where an extensive statistical knowledge of common searches can be used to identify candidate phrases. Statistical compound-term processing is more suited to enterprise search applications, where such a priori knowledge is not available. Statistical compound-term processing is also more adaptable than the linguistic approach taken by the CLAMOUR project, which must consider the syntactic properties of the terms (i.e. part of speech, gender, number, etc.) and their combinations. CLAMOUR is highly language-dependent, whereas the statistical approach is language-independent. Compound-term processing allows information-retrieval applications, such as search engines, to perform their matching on the basis of multi-word concepts, rather than on single words in isolation, which can be highly ambiguous. Early search engines looked for documents containing the words entered by the user into the search box. These are known as keyword search engines. Boolean search engines add a degree of sophistication by allowing the user to specify additional requirements. For example, "Tiger NEAR Woods AND (golf OR golfing) NOT Volkswagen" uses the operators NEAR, AND, OR and NOT to specify that these words must follow certain requirements. A phrase search is simpler to use, but requires that the exact phrase specified appear in the results.
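The statistical idea can be caricatured in a few lines: word pairs that recur across a corpus become candidate compound terms. This toy sketch is not Concept Searching's or Patterson's actual method, just a minimal illustration of frequency-based phrase discovery:

```python
# Toy illustration of statistical compound-term extraction (not any
# vendor's real algorithm): adjacent word pairs that recur across the
# corpus are promoted to candidate compound terms.

from collections import Counter

def candidate_compounds(docs, min_count=2):
    pairs = Counter()
    for doc in docs:
        words = doc.lower().split()
        pairs.update(zip(words, words[1:]))          # adjacent word pairs
    return {" ".join(p) for p, n in pairs.items() if n >= min_count}

docs = [
    "triple heart bypass survival rates",
    "survival rates after a triple heart bypass",
    "heart bypass surgery in elderly people",
]
print(sorted(candidate_compounds(docs)))
# ['heart bypass', 'survival rates', 'triple heart']
```

A production system would also weight candidates against a background language model and merge overlapping pairs into longer phrases such as "triple heart bypass".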
https://en.wikipedia.org/wiki/Compound-term_processing
Wabun code (和文モールス, wabun morusu fugo, Morse code for Japanese text) is a form of Morse code used to send Japanese-language text in kana characters. [1] Unlike International Morse Code, which represents letters of the Latin script, in Wabun each symbol represents a Japanese kana. [2] For this reason, Wabun code is also sometimes called kana code. [3] When Wabun code is intermixed with International Morse Code, the prosign DO is used to announce the beginning of Wabun, and the prosign SN is used to announce the return to international code. Kana are listed in iroha order.
https://en.wikipedia.org/wiki/Wabun_code
The City Repair Project is a 501(c)(3) non-profit organization based in Portland, Oregon. Its focus is education and activism for community building. The organizational motto is: "The City Repair Project is a group of citizen activists creating public gathering places and helping others to creatively transform the places where they live." [2] City Repair is an organization primarily run by volunteers. A board of directors oversees the project's long-term vision, and a council maintains its daily operations. Both the board of directors and the council meet monthly. City Repair's work focuses on localization and placemaking. The City Repair Project maintains an office in Portland. The City Repair Project was founded in 1996 by a small group of neighbors interested in sustainability and neighborhood activism. [3] The first City Repair action was an intersection repair at Share-It Square, at SE 9th Ave and SE Sherrett Street. An intersection repair is a place where two streets cross that is painted by the members of that neighborhood; the street is closed down during the painting. Other intersection repairs include Sunnyside Piazza. [4][5] City Repair hosts two events annually: Portland's Earth Day celebration and the Village Building Convergence. [6] Past projects include the T-Horse, a small pick-up truck converted into a mobile tea house. The T-Horse was driven to neighborhood sites and events around Portland and served free chai and pie. [citation needed] The organization has inspired groups around the United States to start their own city repair projects. Unaffiliated city repairs exist in California, Washington, Minnesota, and other places. The Village Building Convergence (VBC) is an annual 10-day event held every May in Portland, Oregon, United States.
The event is coordinated by the City Repair Project and consists of a series of workshops incorporating natural building and permaculture design at multiple sites around the city. Many of the workshops center on "intersection repairs", which aim to transform street intersections into public gathering spaces. In 1996, neighbors in the Sellwood neighborhood of Portland at the intersection of 8th and Sherrett created a tea stand, children's playhouse, and community library on the corner and renamed it "Share-It Square". [7] Community organizers founded the City Repair Project that same year, seeking to share their vision with the community. In January 2000, the Portland City Council passed Ordinance #172207, an "intersection repair" ordinance, allowing neighborhoods to develop public gathering places in certain street intersections. [8] The first Village Building Convergence took place in May 2002, then called the Natural Building Convergence. During its history, the VBC has coordinated the creation of over 72 natural building and permaculture sites in Portland, including information kiosks, painted intersections, cob benches, and a strawbale house at Dignity Village. The sites are primarily located in the southeast quadrant of Portland. Natural builders from around the world have coordinated the activities at many of the construction sites at the Village Building Convergence. Most of the labor taking place at the sites is done by volunteers. The VBC hosts a series of workshops, many of which are free to the public. Topics of the workshops are usually related to sustainability and natural building. Past workshops have included aikido lessons, outdoor mushroom cultivation, bioswale creation, and nonviolent communication. [9] The VBC also hosts speakers and entertainment during the evenings of its convergences. Presentations for the 2007 convergence were made at Disjecta by Starhawk, Michael Lerner, and Paul Stamets. [10] Prior years' presentations have been given by Malik Rahim, Toby Hemenway, and Judy BlueHorse.
https://en.wikipedia.org/wiki/City_repair_project
In optics, the optical sine theorem states that the products of the refractive index, height, and sine of the slope angle of a ray in object space and of its corresponding ray in image space are equal. That is:

n y sin U = n′ y′ sin U′

where n and n′ are the refractive indices, y and y′ the ray heights, and U and U′ the slope angles in object and image space respectively. This optics-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Optical_sine_theorem
In propositional logic and Boolean algebra, there is a duality between conjunction and disjunction, [1][2][3] also called the duality principle. [4][5][6] It is the most widely known example of duality in logic. [1] The duality consists in these metalogical theorems. The connectives may be defined in terms of each other as follows:

A ∧ B ≡ ¬(¬A ∨ ¬B)
A ∨ B ≡ ¬(¬A ∧ ¬B)

Since the disjunctive normal form theorem shows that the set of connectives {∧, ∨, ¬} is functionally complete, these results show that the sets of connectives {∧, ¬} and {∨, ¬} are themselves functionally complete as well. De Morgan's laws also follow from the definitions of these connectives in terms of each other, whichever direction is taken to do it. [1] The dual of a sentence is what you get by swapping all occurrences of ∨ and ∧, while also negating all propositional constants. For example, the dual of (A ∧ B ∨ C) would be (¬A ∨ ¬B ∧ ¬C). The dual of a formula φ is notated as φ*. The duality principle states that in classical propositional logic, any sentence is equivalent to the negation of its dual. [4][7] In the argument below, φᵈ denotes the result of swapping ∧ and ∨ only (without negating the atoms), and φ̄ denotes the result of negating every atom, so that φ* = (φ̄)ᵈ and, by the duality principle, φᵈ ≡ ¬φ̄. Suppose φ ⊨ ψ. Then φ̄ ⊨ ψ̄, by uniform substitution of ¬pᵢ for pᵢ. Hence ¬ψ̄ ⊨ ¬φ̄, by contraposition; so finally ψᵈ ⊨ φᵈ, by the property that φᵈ ≡ ¬φ̄, which was just noted above. [7] And since φᵈᵈ = φ, it is also true that φ ⊨ ψ if, and only if, ψᵈ ⊨ φᵈ. [7] It follows, as a corollary, that φ ⊨ ¬ψ if, and only if, ¬ψᵈ ⊨ φᵈ. [7] For a formula φ in disjunctive normal form, the formula (φ̄)ᵈ will be in conjunctive normal form, and, given the result that the negation is semantically equivalent to the dual, it will be semantically equivalent to ¬φ. [8][9] This provides a procedure for converting between conjunctive normal form and disjunctive normal form. [10] Since the disjunctive normal form theorem shows that every formula of propositional logic is expressible in disjunctive normal form, every formula is also expressible in conjunctive normal form by means of effecting the conversion to its dual. [9][11][12]
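The duality principle can be checked mechanically for small formulas by enumerating truth assignments. A sketch using the article's own example, with formulas represented as plain Python functions of their atoms:

```python
# Brute-force check of the duality principle on a small example: a
# formula is equivalent to the negation of its dual (connectives
# swapped, atoms negated).

from itertools import product

def equivalent(f, g, n):
    """True if f and g agree on all 2**n truth assignments."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n))

# phi = (A and B) or C ; its dual phi* = (not A or not B) and not C
phi      = lambda a, b, c: (a and b) or c
phi_dual = lambda a, b, c: (not a or not b) and not c

print(equivalent(phi, lambda a, b, c: not phi_dual(a, b, c), 3))   # True
```

The same enumeration also confirms that φ and φ* themselves are not equivalent, which is the point: the dual is the negation, not a restatement.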
https://en.wikipedia.org/wiki/Conjunction/disjunction_duality
Manycore processors are special kinds of multi-core processors designed for a high degree of parallel processing, containing numerous simpler, independent processor cores (from a few tens of cores to thousands or more). Manycore processors are used extensively in embedded computers and high-performance computing. Manycore processors are distinct from multi-core processors in being optimized from the outset for a higher degree of explicit parallelism, and for higher throughput (or lower power consumption) at the expense of latency and lower single-thread performance. The broader category of multi-core processors, by contrast, are usually designed to efficiently run both parallel and serial code, and therefore place more emphasis on high single-thread performance (e.g. devoting more silicon to out-of-order execution, deeper pipelines, more superscalar execution units, and larger, more general caches) and shared memory. These techniques devote runtime resources toward figuring out implicit parallelism in a single thread. They are used in systems where they have evolved continuously (with backward compatibility) from single-core processors. They usually have a 'few' cores (e.g. 2, 4, 8) and may be complemented by a manycore accelerator (such as a GPU) in a heterogeneous system. Cache coherency is an issue limiting the scaling of multicore processors. Manycore processors may bypass this with methods such as message passing, [1] scratchpad memory, DMA, [2] partitioned global address space, [3] or read-only/non-coherent caches. A manycore processor using a network-on-chip and local memories gives software the opportunity to explicitly optimise the spatial layout of tasks (e.g. as seen in tooling developed for TrueNorth). [4] Manycore processors may have more in common (conceptually) with technologies originating in high-performance computing, such as clusters and vector processors.
[5] GPUs may be considered a form of manycore processor, having multiple shader processing units and being suitable only for highly parallel code (high throughput, but extremely poor single-thread performance). A number of computers built from multicore processors have one million or more individual CPU cores, and quite a few supercomputers have over 5 million CPU cores. When coprocessors such as GPUs are used alongside the CPUs, their cores are not included in the core count; if they were, quite a few more computers would reach those totals.
https://en.wikipedia.org/wiki/Manycore_processor
In statistics, the correlation ratio is a measure of the curvilinear relationship between the statistical dispersion within individual categories and the dispersion across the whole population or sample. The measure is defined as the ratio of two standard deviations representing these types of variation. The context here is the same as that of the intraclass correlation coefficient, whose value is the square of the correlation ratio. Suppose each observation is y_xi, where x indicates the category that observation is in and i is the label of the particular observation. Let n_x be the number of observations in category x, let ȳ_x be the mean of category x, and let ȳ be the mean of the whole population. The correlation ratio η (eta) is defined so as to satisfy

η² = ( Σ_x n_x (ȳ_x − ȳ)² ) / ( Σ_{x,i} (y_xi − ȳ)² ),

i.e. the weighted variance of the category means divided by the variance of all samples. If the relationship between values of x and values of ȳ_x is linear (which is certainly true when there are only two possibilities for x), this will give the same result as the square of Pearson's correlation coefficient; otherwise the correlation ratio will be larger in magnitude. It can therefore be used for judging non-linear relationships. The correlation ratio η takes values between 0 and 1. The limit η = 0 represents the special case of no dispersion among the means of the different categories, while η = 1 refers to no dispersion within the respective categories. η is undefined when all data points of the complete population take the same value. Suppose there is a distribution of test scores in three topics (categories), with 5 scores in algebra, 4 in geometry, and 6 in statistics. Then the subject averages are 36, 33 and 78, with an overall average of 52.
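The worked example can be verified numerically. The category sizes and means are taken from the text; the within-category sums of squares are quoted from the article rather than recomputed, since the raw scores are not reproduced here:

```python
# Numerical check of the worked example: 5 algebra, 4 geometry and 6
# statistics scores with means 36, 33 and 78. The within-category sums
# of squares (1952 + 308 + 600 = 2860) are quoted from the article.

sizes = [5, 4, 6]
means = [36.0, 33.0, 78.0]
ss_within = 1952 + 308 + 600                                      # 2860

overall = sum(n * m for n, m in zip(sizes, means)) / sum(sizes)   # 52.0
ss_between = sum(n * (m - overall) ** 2 for n, m in zip(sizes, means))
ss_total = ss_between + ss_within                                 # 9640

eta_sq = ss_between / ss_total
print(overall, ss_between, round(eta_sq, 4))   # 52.0 6780.0 0.7033
```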
The sums of squares of the differences from the subject averages are 1952 for algebra, 308 for geometry and 600 for statistics, adding to 2860. The overall sum of squares of the differences from the overall average is 9640. The difference of 6780 between these is also the weighted sum of the squares of the differences between the subject averages and the overall average:

5(36 - 52)^2 + 4(33 - 52)^2 + 6(78 - 52)^2 = 6780.

This gives

\eta^2 = \frac{6780}{9640} = 0.7033\ldots,

suggesting that most of the overall dispersion is a result of differences between topics, rather than within topics. Taking the square root gives

\eta = \sqrt{0.7033\ldots} = 0.8386\ldots.

For \eta = 1 the overall sample dispersion is purely due to dispersion among the categories and not at all due to dispersion within the individual categories. For quick comprehension, simply imagine all algebra, geometry, and statistics scores being the same respectively, e.g. 5 times 36, 4 times 33, 6 times 78. The limit \eta = 0 refers to the case without dispersion among the categories contributing to the overall dispersion. The trivial requirement for this extreme is that all category means are the same.

The correlation ratio was introduced by Karl Pearson as part of analysis of variance. Ronald Fisher commented: "As a descriptive statistic the utility of the correlation ratio is extremely limited. It will be noticed that the number of degrees of freedom in the numerator of \eta^2 depends on the number of the arrays" [1], to which Egon Pearson (Karl's son) responded by saying "Again, a long-established method such as the use of the correlation ratio [§45 The "Correlation Ratio" \eta] is passed over in a few words without adequate description, which is perhaps hardly fair to the student who is given no opportunity of judging its scope for himself." [2]
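The worked example can be reproduced with a short Python sketch; the score data below are consistent with the subject averages (36, 33, 78), sums of squares (1952, 308, 600) and overall average (52) quoted in the text.

```python
# Correlation ratio (eta) for the worked test-score example.
scores = {
    "algebra":    [45, 70, 29, 15, 21],
    "geometry":   [40, 20, 30, 42],
    "statistics": [65, 95, 80, 70, 85, 73],
}

all_scores = [y for ys in scores.values() for y in ys]
overall_mean = sum(all_scores) / len(all_scores)

# Weighted sum of squared differences between category means and overall mean.
between = sum(
    len(ys) * (sum(ys) / len(ys) - overall_mean) ** 2
    for ys in scores.values()
)
# Total sum of squared differences from the overall mean.
total = sum((y - overall_mean) ** 2 for y in all_scores)

eta_squared = between / total
eta = eta_squared ** 0.5
print(round(eta_squared, 4), round(eta, 4))  # 0.7033 0.8386
```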
https://en.wikipedia.org/wiki/Correlation_ratio
A letter bank is a relative of the anagram where all the letters of one word (the "bank") can be used as many times as desired (minimum of once each) to make a new word or phrase. For example, IMPS is a bank of MISSISSIPPI and SPROUT is a bank of SUPPORT OUR TROOPS. As a convention, the bank should have no repeat letters within itself. The term was coined by Will Shortz, whose first letter bank (BLUME → BUMBLEBEE) appeared in his 1979 book "Brain Games". In 1980, Shortz introduced letter banks to the National Puzzlers' League (of which he is the historian), in the form of a contest puzzle. In 1981, the letter bank was announced as an official puzzle type in the NPL's magazine "The Enigma". [1] Letter banks are the basis for the word game Alpha Blitz. [citation needed] This game-related article is a stub. You can help Wikipedia by expanding it.
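The definition translates into a few lines of Python: the bank must have no repeated letters, and bank and target must use exactly the same set of letters. Ignoring case and spaces is an assumption about convention made for this sketch.

```python
def is_letter_bank(bank: str, target: str) -> bool:
    """True if `target` can be formed from `bank`, each bank letter
    used at least once, and the bank has no repeated letters."""
    bank = bank.replace(" ", "").lower()
    target = target.replace(" ", "").lower()
    return len(set(bank)) == len(bank) and set(bank) == set(target)

print(is_letter_bank("imps", "mississippi"))           # True
print(is_letter_bank("sprout", "support our troops"))  # True
print(is_letter_bank("blume", "bumblebee"))            # True
```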
https://en.wikipedia.org/wiki/Letter_bank
In linear algebra, the order-r Krylov subspace generated by an n-by-n matrix A and a vector b of dimension n is the linear subspace spanned by the images of b under the first r powers of A (starting from A^0 = I), that is, [1][2]

\mathcal{K}_r(A, b) = \operatorname{span}\{ b, Ab, A^2 b, \ldots, A^{r-1} b \}.

The concept is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper about the concept in 1931. [3] Krylov subspaces are used in algorithms for finding approximate solutions to high-dimensional linear algebra problems. [2] Many linear dynamical system tests in control theory, especially those related to controllability and observability, involve checking the rank of the Krylov subspace. These tests are equivalent to finding the span of the Gramians associated with the system/output maps, so the uncontrollable and unobservable subspaces are simply the orthogonal complement to the Krylov subspace. [4] Modern iterative methods such as Arnoldi iteration can be used to find one (or a few) eigenvalues of large sparse matrices or to solve large systems of linear equations. They avoid matrix-matrix operations, instead multiplying vectors by the matrix and working with the resulting vectors. Starting with a vector b, one computes Ab, then multiplies that vector by A to find A^2 b, and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra. These methods can be used in situations where there is an algorithm to compute the matrix-vector multiplication without an explicit representation of A, giving rise to matrix-free methods.
Because the vectors usually soon become almost linearly dependent due to the properties of power iteration, methods relying on Krylov subspaces frequently involve some orthogonalization scheme, such as Lanczos iteration for Hermitian matrices or Arnoldi iteration for more general matrices. The best known Krylov subspace methods are the conjugate gradient method, IDR(s) (induced dimension reduction), GMRES (generalized minimal residual), BiCGSTAB (biconjugate gradient stabilized), QMR (quasi-minimal residual), TFQMR (transpose-free QMR) and MINRES (minimal residual method).
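The basis construction described above can be sketched in a few lines of NumPy: build the columns b, Ab, A^2 b, ... using only matrix-vector products, then orthogonalize. A single QR factorization stands in here for the incremental orthogonalization that practical methods such as Arnoldi perform; this is an illustration, not a production solver.

```python
import numpy as np

def krylov_matrix(A, b, r):
    """Matrix whose columns are b, Ab, A^2 b, ..., A^(r-1) b."""
    cols = [b]
    for _ in range(r - 1):
        cols.append(A @ cols[-1])  # only matrix-vector products needed
    return np.column_stack(cols)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)

K = krylov_matrix(A, b, 4)
# The raw columns quickly become nearly linearly dependent, which is why
# Krylov methods orthogonalize; QR yields an orthonormal basis of the subspace.
Q, _ = np.linalg.qr(K)
print(K.shape, np.allclose(Q.T @ Q, np.eye(4)))
```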
https://en.wikipedia.org/wiki/Krylov_subspace
A limited dependent variable is a variable whose range of possible values is "restricted in some important way." [1] In econometrics, the term is often used when estimation of the relationship between the limited dependent variable of interest and other variables requires methods that take this restriction into account. For example, this may arise when the variable of interest is constrained to lie between zero and one, as in the case of a probability, or is constrained to be positive, as in the case of wages or hours worked. Limited dependent variable models include: [2] This econometrics-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Limited_dependent_variable
A personal information manager (often referred to as a PIM tool or, more simply, a PIM) is a type of application software that functions as a personal organizer. The acronym PIM is now more commonly used in reference to personal information management as a field of study. [1] As an information management tool, a PIM tool's purpose is to facilitate the recording, tracking, and management of certain types of "personal information". Personal information can include any of the following: [2] Some PIM/PDM software products are capable of synchronizing data over a computer network, including mobile ad hoc networks (MANETs). This feature typically stores the personal data on cloud drives, allowing for continuous concurrent data updates and access on the user's computers, including desktop computers, laptop computers, and mobile devices such as personal digital assistants or smartphones. [3] Prior to the introduction of the term "personal digital assistant" ("PDA") by Apple in 1992, handheld personal organizers such as the Psion Organiser and the Sharp Wizard were also referred to as "PIMs". [4][5] The time management and communications functions of PIMs largely migrated from PDAs to smartphones, with Apple, RIM (Research In Motion, now BlackBerry), and others all manufacturing smartphones that offer most of the functions of earlier PDAs.
https://en.wikipedia.org/wiki/Personal_information_manager
In deep learning, pruning is the practice of removing parameters from an existing artificial neural network. [1] The goal of this process is to reduce the size (parameter count) of the neural network, and therefore the computational resources required to run it, whilst maintaining accuracy. This can be compared to the biological process of synaptic pruning which takes place in mammalian brains during development. [2] A basic algorithm for pruning is as follows: [3][4] Most work on neural network pruning focuses on removing weights, namely, setting their values to zero. Early work suggested also changing the values of non-pruned weights. [5] This artificial intelligence-related article is a stub. You can help Wikipedia by expanding it.
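The source's step list is elided here, so the sketch below shows one common variant of weight pruning, global magnitude pruning: rank the weights by absolute value and set the smallest fraction to zero. This is a minimal NumPy illustration, not any particular framework's pruning API.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Return a copy of `weights` with the `fraction` of entries having
    the smallest absolute values set to zero."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4))
W_pruned = magnitude_prune(W, 0.5)
print(np.count_nonzero(W_pruned))  # 8 of the 16 entries remain
```

In practice pruning is usually followed by fine-tuning, and the prune/fine-tune cycle may be repeated to reach higher sparsity without losing accuracy.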
https://en.wikipedia.org/wiki/Pruning_(artificial_neural_network)
LinOTP is Linux-based software to manage authentication devices for two-factor authentication with one-time passwords. It is implemented as a web service based on the Python framework Pylons, and thus requires a web server to run in. LinOTP is mainly developed by the German company KeyIdentity GmbH. Its core components are licensed under the Affero General Public License. It is an open-source authentication server certified [2] by the OATH Initiative for Open Authentication for its 2.4 version. As a web service, LinOTP provides a REST-like web API. [3] All functions can be accessed via Pylons controllers. Responses are returned as a JSON object. LinOTP is designed in a modular way, enabling user store modules and token modules. Thus, it is capable of supporting a wide range of different tokens. [4] LinOTP
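As an illustration of calling such a REST-like, JSON-returning API, a client could look like the sketch below. The endpoint path (`/validate/check`), the parameter names and the response layout (`result.value`) are assumptions made for this example, not a statement of LinOTP's documented interface.

```python
# Hypothetical client sketch; endpoint and field names are assumptions.
import json
import urllib.parse
import urllib.request

def parse_result(body: dict) -> bool:
    """Extract a boolean authentication result from a JSON response body."""
    return bool(body.get("result", {}).get("value", False))

def check_otp(base_url: str, user: str, otp: str) -> bool:
    params = urllib.parse.urlencode({"user": user, "pass": otp})
    with urllib.request.urlopen(f"{base_url}/validate/check?{params}") as resp:
        return parse_result(json.load(resp))  # responses are JSON objects
```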
https://en.wikipedia.org/wiki/LinOTP
In probability theory and statistics, the Zipf–Mandelbrot law is a discrete probability distribution. Also known as the Pareto–Zipf law, it is a power-law distribution on ranked data, named after the linguist George Kingsley Zipf, who suggested a simpler distribution called Zipf's law, and the mathematician Benoit Mandelbrot, who subsequently generalized it. The probability mass function is given by

f(k; N, q, s) = \frac{1/(k+q)^s}{H_{N,q,s}},

where H_{N,q,s} is given by

H_{N,q,s} = \sum_{i=1}^{N} \frac{1}{(i+q)^s},

which may be thought of as a generalization of a harmonic number. In the formula, k is the rank of the data, and q and s are parameters of the distribution. In the limit as N approaches infinity, this becomes the Hurwitz zeta function \zeta(s, q). For finite N and q = 0 the Zipf–Mandelbrot law becomes Zipf's law. For infinite N and q = 0 it becomes a zeta distribution.

The distribution of words ranked by their frequency in a random text corpus is approximated by a power-law distribution, known as Zipf's law. If one plots the frequency rank of words contained in a moderately sized corpus of text data versus the number of occurrences or actual frequencies, one obtains a power-law distribution, with exponent close to one (but see Powers, 1998 and Gelbukh & Sidorov, 2001). Zipf's law implicitly assumes a fixed vocabulary size, but the harmonic series with s = 1 does not converge, while the Zipf–Mandelbrot generalization with s > 1 does. Furthermore, there is evidence that the closed class of functional words that define a language obeys a Zipf–Mandelbrot distribution with different parameters from the open classes of contentive words that vary by topic, field and register. [1] In ecological field studies, the relative abundance distribution (i.e.
the graph of the number of species observed as a function of their abundance) is often found to conform to a Zipf–Mandelbrot law. [2] Within music, many metrics of measuring "pleasing" music conform to Zipf–Mandelbrot distributions. [3]
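The probability mass function above is straightforward to evaluate directly; a minimal Python sketch, computing the normalizer H_{N,q,s} by summation and checking that the probabilities sum to one:

```python
def zipf_mandelbrot_pmf(k, N, q, s):
    """f(k; N, q, s) = (1 / (k + q)**s) / H_{N,q,s}."""
    H = sum(1.0 / (i + q) ** s for i in range(1, N + 1))
    return (1.0 / (k + q) ** s) / H

# With q = 0 this reduces to Zipf's law over N ranks.
pmf = [zipf_mandelbrot_pmf(k, N=10, q=0.0, s=1.0) for k in range(1, 11)]
print(round(sum(pmf), 10))  # 1.0 (the PMF is normalized)
```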
https://en.wikipedia.org/wiki/Zipf%E2%80%93Mandelbrot_law
In mathematics, Sendov's conjecture, sometimes also called Ilieff's conjecture, concerns the relationship between the locations of roots and critical points of a polynomial function of a complex variable. It is named after Blagovest Sendov. The conjecture states that for a polynomial with all roots r_1, ..., r_n inside the closed unit disk |z| ≤ 1, each of the n roots is at a distance no more than 1 from at least one critical point. The Gauss–Lucas theorem says that all of the critical points lie within the convex hull of the roots. It follows that the critical points must be within the unit disk, since the roots are. The conjecture has been proven for n < 9 by Brown and Xiang, and for n sufficiently large by Tao. [1][2] The conjecture was first proposed by Blagovest Sendov in 1959; he described the conjecture to his colleague Nikola Obreshkov. In 1967 the conjecture was misattributed [3] to Ljubomir Iliev by Walter Hayman. [4] In 1969 Meir and Sharma proved the conjecture for polynomials with n < 6. In 1991 Brown proved the conjecture for n < 7. Borcea extended the proof to n < 8 in 1996. Brown and Xiang [5] proved the conjecture for n < 9 in 1999. Terence Tao proved the conjecture for sufficiently large n in 2020.
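The statement is easy to check numerically for any particular polynomial. The NumPy sketch below does so for a sample degree-4 polynomial (roots chosen here for illustration, all in the closed unit disk), computing each root's distance to its nearest critical point.

```python
import numpy as np

def sendov_gaps(roots):
    """For each root, the distance to its nearest critical point."""
    coeffs = np.poly(roots)              # polynomial with the given roots
    crit = np.roots(np.polyder(coeffs))  # critical points: roots of p'
    return [min(abs(r - c) for c in crit) for r in roots]

# Sample roots, all inside the closed unit disk |z| <= 1.
roots = [1.0, -0.5 + 0.5j, -0.5 - 0.5j, 0.2j]
gaps = sendov_gaps(roots)
print(all(g <= 1.0 for g in gaps))  # the conjecture (proven for n = 4) holds
```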
https://en.wikipedia.org/wiki/Sendov%27s_conjecture