| text | source |
|---|---|
Pull coding or client pull is a style of network communication where the initial request for data originates from the client, and then is responded to by the server. The reverse is known as push technology, where the server pushes data to clients.
Pull requests form the foundation of network computing, where many clients request data from centralized servers | https://huggingface.co/datasets/fmars/wiki_stem |
RadioVIS is a protocol for sideband signalling of images and text messages for a broadcast audio service to provide a richer visual experience. It is an application and sub-project of RadioDNS, which allows radio consumption devices to look up an IP-based service based on the parameters of the currently tuned broadcast station.
In January 2015, the functionality of RadioVIS was integrated into Visual Slideshow (ETSI TS 101 499 v3 | https://huggingface.co/datasets/fmars/wiki_stem |
Remote scripting is a technology which allows scripts and programs that are running inside a browser to exchange information with a server. The local scripts can invoke scripts on the remote side and process the returned information.
The earliest form of asynchronous remote scripting was developed before XMLHttpRequest existed, and made use of a very simple process: a static web page opens a dynamic web page (e | https://huggingface.co/datasets/fmars/wiki_stem |
Server-Sent Events (SSE) is a server push technology enabling a client to receive automatic updates from a server via an HTTP connection; it describes how servers can initiate data transmission towards clients once an initial client connection has been established. SSE is commonly used to send message updates or continuous data streams to a browser client and is designed to enhance native, cross-browser streaming through a JavaScript API called EventSource, through which a client requests a particular URL in order to receive an event stream. The EventSource API is standardized as part of HTML5 by the WHATWG | https://huggingface.co/datasets/fmars/wiki_stem |
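Although EventSource itself is a browser API, the event stream it consumes is a plain-text format (`event:` and `data:` fields, terminated by a blank line). A minimal Python helper that produces one frame of that format — the function name and signature are invented for illustration, not part of any standard library:

```python
from typing import Optional

def format_sse(data: str, event: Optional[str] = None) -> str:
    """Format one Server-Sent Events frame as sent over the wire."""
    lines = []
    if event is not None:
        lines.append(f"event: {event}")
    # Each line of the payload becomes its own "data:" field.
    for chunk in data.splitlines():
        lines.append(f"data: {chunk}")
    # A blank line terminates the event.
    return "\n".join(lines) + "\n\n"
```

A server would write such frames to a long-lived HTTP response with `Content-Type: text/event-stream`; a browser-side `EventSource` then dispatches each frame as a DOM event.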
SitePal is a speaking avatar platform for small and medium-sized businesses developed by Oddcast.
SitePal allows users to deploy "virtual employees" on websites that can welcome visitors, guide them around the site and answer questions.
The use of SitePal on commercial websites has been controversial because many visitors report finding them annoying | https://huggingface.co/datasets/fmars/wiki_stem |
In computing, a situational application is "good enough" software created for a narrow group of users with a unique set of needs. The application typically (but not always) has a short life span, and is often created within the group where it is used, sometimes by the users themselves. As the requirements of a small team using the application change, the situational application often also continues to evolve to accommodate these changes | https://huggingface.co/datasets/fmars/wiki_stem |
A static web page (sometimes called a flat page or a stationary page) is a web page that is delivered to the user's web browser exactly as stored, in contrast to dynamic web pages which are generated by a web application. Consequently, a static web page often displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so. However, a webpage's JavaScript can introduce dynamic functionality which may make the static web page dynamic | https://huggingface.co/datasets/fmars/wiki_stem |
A web style sheet is a form of separation of content and presentation for web design in which the markup (i.e., HTML or XHTML) of a webpage contains the page's semantic content and structure, but does not define its visual layout (style) | https://huggingface.co/datasets/fmars/wiki_stem |
Svelte is a free and open-source front-end component framework and language created by Rich Harris and maintained by the Svelte core team members. Svelte is not a monolithic JavaScript library imported by applications: instead, Svelte compiles HTML templates to specialized code that manipulates the DOM directly, which may reduce the size of transferred files and give better client performance. Application code is also processed by the compiler, inserting calls to automatically recompute data and re-render UI elements when the data they depend on is modified | https://huggingface.co/datasets/fmars/wiki_stem |
A virtual DOM is a lightweight JavaScript representation of the Document Object Model (DOM) used in declarative web frameworks such as React, Vue.js, and Elm. Updating the virtual DOM is comparatively faster than updating the actual DOM (via JavaScript) | https://huggingface.co/datasets/fmars/wiki_stem |
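The core idea — diff two lightweight trees and emit a minimal list of patches to apply to the real DOM — can be sketched in a few lines. The following is a toy Python illustration with an invented node shape (`tag`/`props`/`children` dicts) and patch format; real frameworks use far more sophisticated reconciliation (keys, components, batching):

```python
def diff(old, new, path=""):
    """Return a list of patch operations that would turn `old` into `new`."""
    if old == new:
        return []
    # Different node types (or text nodes): replace the whole subtree.
    if (not isinstance(old, dict) or not isinstance(new, dict)
            or old.get("tag") != new.get("tag")):
        return [("replace", path, new)]
    patches = []
    if old.get("props") != new.get("props"):
        patches.append(("set-props", path, new.get("props")))
    old_kids = old.get("children", [])
    new_kids = new.get("children", [])
    for i in range(max(len(old_kids), len(new_kids))):
        child_path = f"{path}/{i}"
        if i >= len(old_kids):
            patches.append(("insert", child_path, new_kids[i]))
        elif i >= len(new_kids):
            patches.append(("remove", child_path, None))
        else:
            patches.extend(diff(old_kids[i], new_kids[i], child_path))
    return patches
```

Applying only the resulting patches is what makes virtual-DOM updates cheaper than rebuilding the real DOM wholesale.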
The W3C Device Description Working Group (DDWG), operating as part of the World Wide Web Consortium (W3C) Mobile Web Initiative (MWI), was chartered to "foster the provision and access to device descriptions that can be used in support of Web-enabled applications that provide an appropriate user experience on mobile devices. " Mobile devices exhibit the greatest diversity of capabilities, and therefore present the greatest challenge to content adaptation technologies. The group published several documents, including a list of requirements for an interface to a Device Description Repository (DDR) and a standard interface meeting those requirements | https://huggingface.co/datasets/fmars/wiki_stem |
A web API is an application programming interface (API) for either a web server or a web browser.
As a web development concept, it can be related to a web application's client side (including any web frameworks being used).
A server-side web API consists of one or more publicly exposed endpoints to a defined request–response message system, typically expressed in JSON or XML by means of an HTTP-based web server | https://huggingface.co/datasets/fmars/wiki_stem |
A web application (or web app) is application software that is accessed using a web browser. Web applications are delivered on the World Wide Web to users with an active network connection.
History
In earlier computing models like client-server, the processing load for the application was shared between code on the server and code installed on each client locally | https://huggingface.co/datasets/fmars/wiki_stem |
Web content development is the process of researching, writing, gathering, organizing, and editing information for publication on websites. Website content may consist of prose, graphics, pictures, recordings, movies, or other digital assets that could be distributed by a hypertext transfer protocol server, and viewed by a web browser.
Content developers and web developers
When the World Wide Web began, web developers either developed online content themselves, or modified existing documents and coded them into hypertext markup language (HTML) | https://huggingface.co/datasets/fmars/wiki_stem |
A web developer is a programmer who develops World Wide Web applications using a client–server model. The applications typically use HTML, CSS, and JavaScript in the client, and any general-purpose programming language in the server. HTTP is used for communications between client and server | https://huggingface.co/datasets/fmars/wiki_stem |
Web development tools (often called devtools or inspect element) allow web developers to test and debug their source code. They are different from website builders and integrated development environments (IDEs) in that they do not assist in the direct creation of a webpage; rather, they are tools used for testing the user interface of a website or web application.
Web development tools come as browser add-ons or built-in features in modern web browsers | https://huggingface.co/datasets/fmars/wiki_stem |
The World Wide Web has become a major delivery platform for a variety of complex and sophisticated enterprise applications in several domains. In addition to their inherent multifaceted functionality, these Web applications exhibit complex behaviour and place some unique demands on their usability, performance, security, and ability to grow and evolve. However, a vast majority of these applications continue to be developed in an ad hoc way, contributing to problems of usability, maintainability, quality and reliability | https://huggingface.co/datasets/fmars/wiki_stem |
Web performance refers to the speed with which web pages are downloaded and displayed on the user's web browser. Web performance optimization (WPO), or website optimization, is the field of knowledge about increasing web performance.
Faster website download speeds have been shown to increase visitor retention and loyalty and user satisfaction, especially for users with slow internet connections and those on mobile devices | https://huggingface.co/datasets/fmars/wiki_stem |
Web syndication is making content available from one website to other sites. Most commonly, websites are made available to provide either summaries or full renditions of a website's recently added content. The term may also describe other kinds of content licensing for reuse | https://huggingface.co/datasets/fmars/wiki_stem |
A web worker, as defined by the World Wide Web Consortium (W3C) and the Web Hypertext Application Technology Working Group (WHATWG), is a JavaScript script executed from an HTML page that runs in the background, independently of scripts that may also have been executed from the same HTML page. Web workers are often able to utilize multi-core CPUs more effectively. The W3C and WHATWG envision web workers as long-running scripts that are not interrupted by scripts that respond to clicks or other user interactions | https://huggingface.co/datasets/fmars/wiki_stem |
Web3D, also called 3D Web, is a group of technologies to display and navigate websites using 3D computer graphics.
Pre-WebGL era
The emergence of Web3D dates back to 1994, with the advent of VRML, a file format designed to store and display 3D graphical data on the World Wide Web. In October 1995, at Internet World, Template Graphics Software demonstrated a 3D/VRML plug-in for the beta release of Netscape 2 | https://huggingface.co/datasets/fmars/wiki_stem |
WebAR, previously known as the Augmented Web, is a web technology that allows for augmented reality functionality within a web browser. It is a combination of HTML5, Web Audio, WebGL, and WebRTC. Since the 2020s it has been better known as web-based augmented reality, or WebAR, which refers to the use of augmented reality elements in browsers | https://huggingface.co/datasets/fmars/wiki_stem |
WebCL (Web Computing Language) is a JavaScript binding to OpenCL for heterogeneous parallel computing within any compatible web browser without the use of plug-ins, first announced in March 2011. It is developed on similar grounds as OpenCL and is considered a browser version of the latter. Primarily, WebCL allows web applications to exploit the parallelism of multi-core CPUs and GPUs | https://huggingface.co/datasets/fmars/wiki_stem |
A direct function (dfn, pronounced "dee fun") is an alternative way to define a function and operator (a higher-order function) in the programming language APL. A direct operator can also be called a dop (pronounced "dee op"). They were invented by John Scholes in 1996 | https://huggingface.co/datasets/fmars/wiki_stem |
Dynamic timing verification refers to verifying that an ASIC design is fast enough to run without errors at the targeted clock rate. This is accomplished by simulating the design files used to synthesize the integrated circuit (IC) design. This is in contrast to static timing analysis, which has a similar goal as dynamic timing verification except it does not require simulating the real functionality of the IC | https://huggingface.co/datasets/fmars/wiki_stem |
Extended static checking (ESC) is a collective name in computer science for a range of techniques for statically checking the correctness of various program constraints. ESC can be thought of as an extended form of type checking. As with type checking, ESC is performed automatically at compile time (i | https://huggingface.co/datasets/fmars/wiki_stem |
Formal equivalence checking process is a part of electronic design automation (EDA), commonly used during the development of digital integrated circuits, to formally prove that two representations of a circuit design exhibit exactly the same behavior.
Equivalence checking and levels of abstraction
In general, there is a wide range of possible definitions of functional equivalence covering comparisons between different levels of abstraction and varying granularity of timing details.
The most common approach is to consider the problem of machine equivalence which defines two synchronous design specifications functionally equivalent if, clock by clock, they produce exactly the same sequence of output signals for any valid sequence of input signals | https://huggingface.co/datasets/fmars/wiki_stem |
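For purely combinational circuits the clock-by-clock definition collapses to a simpler question: do the two circuits agree on every input vector? The following brute-force sketch models circuits as Boolean functions; it is feasible only for a handful of inputs, whereas real equivalence checkers use BDDs or SAT to avoid the exponential enumeration:

```python
from itertools import product

def equivalent(f, g, n_inputs):
    """Exhaustively check that two combinational circuits (modelled as
    Boolean functions) produce the same output on every input vector."""
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n_inputs))

# Two structurally different implementations of XOR:
xor_a = lambda x, y: (x or y) and not (x and y)
xor_b = lambda x, y: x != y
```

Here `equivalent(xor_a, xor_b, 2)` holds, formally confirming that the two representations exhibit the same behavior.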
In computer science, formal specifications are mathematically based techniques whose purpose is to help with the implementation of systems and software. They are used to describe a system, to analyze its behavior, and to aid in its design by verifying key properties of interest through rigorous and effective reasoning tools. These specifications are formal in the sense that they have a syntax, their semantics fall within one domain, and they are able to be used to infer useful information | https://huggingface.co/datasets/fmars/wiki_stem |
In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics. Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code.
The verification of these systems is done by providing a formal proof on an abstract mathematical model of the system, the correspondence between the mathematical model and the nature of the system being otherwise known by construction | https://huggingface.co/datasets/fmars/wiki_stem |
High-level and low-level, as technical terms, are used to classify, describe and point to specific goals of a systematic operation; they are applied in a wide range of contexts, in domains as varied as computer science and business administration.
High-level describes those operations that are more abstract and general in nature, wherein the overall goals and systemic features are typically more concerned with the wider, macro system as a whole.
Low-level describes more specific individual components of a systematic operation, focusing on the details of rudimentary micro functions rather than macro, complex processes | https://huggingface.co/datasets/fmars/wiki_stem |
In mathematical logic and computer science, homotopy type theory (HoTT) refers to various lines of development of intuitionistic type theory, based on the interpretation of types as objects to which the intuition of (abstract) homotopy theory applies.
This includes, among other lines of work, the construction of homotopical and higher-categorical models for such type theories; the use of type theory as a logic (or internal language) for abstract homotopy theory and higher category theory; the development of mathematics within a type-theoretic foundation (including both previously existing mathematics and new mathematics that homotopical types make possible); and the formalization of each of these in computer proof assistants.
There is a large overlap between the work referred to as homotopy type theory, and as the univalent foundations project | https://huggingface.co/datasets/fmars/wiki_stem |
In computer science, interference freedom is a technique for proving partial correctness of concurrent programs with shared variables. Hoare logic had been introduced earlier to prove correctness of sequential programs. In her PhD thesis (and papers arising from it) under advisor David Gries, Susan Owicki extended this work to apply to concurrent programs | https://huggingface.co/datasets/fmars/wiki_stem |
The International Conference on Software Engineering and Formal Methods (SEFM) is an international academic conference in the field of software engineering.
History
Until 2002, SEFM was a workshop; it then became a full international conference. It is sponsored by the IEEE Computer Society | https://huggingface.co/datasets/fmars/wiki_stem |
In mathematics, an invariant is a property of a mathematical object (or a class of mathematical objects) which remains unchanged after operations or transformations of a certain type are applied to the objects. The particular class of objects and type of transformations are usually indicated by the context in which the term is used. For example, the area of a triangle is an invariant with respect to isometries of the Euclidean plane | https://huggingface.co/datasets/fmars/wiki_stem |
Invariant-based programming is a programming methodology where specifications and invariants are written before the actual program statements. Writing down the invariants during the programming process has a number of advantages: it requires the programmer to make their intentions about the program behavior explicit before actually implementing it, and invariants can be evaluated dynamically during execution to catch common programming errors. Furthermore, if strong enough, invariants can be used to prove the correctness of the program based on the formal semantics of program statements | https://huggingface.co/datasets/fmars/wiki_stem |
Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It is a universal model of computation that can be used to simulate any Turing machine. It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics | https://huggingface.co/datasets/fmars/wiki_stem |
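Because the lambda calculus builds everything from function abstraction and application alone, even numbers can be encoded as functions. The classic Church encoding — a numeral n is the function that applies f n times — can be transcribed directly into Python lambdas (the `to_int` helper is added only to observe results):

```python
# Church numerals: n = λf.λx. f(f(...f(x)...))  (n applications of f)
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))
add  = lambda m: (lambda n: (lambda f: lambda x: m(f)(n(f)(x))))

def to_int(n):
    """Convert a Church numeral to a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
```

This illustrates why the calculus is a universal model of computation: data, arithmetic, and control flow all reduce to substitution of functions into functions.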
The Liskov substitution principle (LSP) is a particular definition of a subtyping relation, called strong behavioral subtyping, that was initially introduced by Barbara Liskov in a 1987 conference keynote address titled Data abstraction and hierarchy. It is based on the concept of "substitutability" – a principle in object-oriented programming stating that an object (such as a class) may be replaced by a sub-object (such as a class that extends the first class) without breaking the program. It is a semantic rather than merely syntactic relation, because it intends to guarantee semantic interoperability of types in a hierarchy, object types in particular | https://huggingface.co/datasets/fmars/wiki_stem |
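The classic illustration of a violation is the square/rectangle example: `Square` is a rectangle mathematically, but as a subtype it strengthens the superclass contract. A Python sketch (class and method names chosen for illustration):

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def set_width(self, w):
        self.w = w
    def area(self):
        return self.w * self.h

class Square(Rectangle):
    """Keeping the sides equal changes set_width's observable behavior:
    it now also modifies the height, breaking behavioral subtyping."""
    def __init__(self, side):
        super().__init__(side, side)
    def set_width(self, w):
        self.w = self.h = w

def stretch(r):
    # Client code relies on Rectangle's contract:
    # widening leaves the height alone.
    r.set_width(10)
    return r.area()
```

`stretch(Rectangle(2, 5))` yields 50 as the client expects, but substituting a `Square(5)` yields 100 — the program's observable behavior changes, so `Square` is not a behavioral subtype of `Rectangle` despite the syntactically valid inheritance.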
Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas:
Theoretical foundations and analysis
Use of computer technology to aid logicians
Use of concepts from logic for computer applications
Theoretical foundations and analysis
Logic plays a fundamental role in computer science. Some of the key areas of logic that are particularly significant are computability theory (formerly called recursion theory), modal logic and category theory | https://huggingface.co/datasets/fmars/wiki_stem |
In computer science, a loop invariant is a property of a program loop that is true before (and after) each iteration. It is a logical assertion, sometimes checked with a code assertion. Knowing its invariant(s) is essential in understanding the effect of a loop | https://huggingface.co/datasets/fmars/wiki_stem |
In computer science, a loop variant is a mathematical function defined on the state space of a computer program whose value is monotonically decreased with respect to a (strict) well-founded relation by the iteration of a while loop under some invariant conditions, thereby ensuring its termination. A loop variant whose range is restricted to the non-negative integers is also known as a bound function, because in this case it provides a trivial upper bound on the number of iterations of a loop before it terminates. However, a loop variant may be transfinite, and thus is not necessarily restricted to integer values | https://huggingface.co/datasets/fmars/wiki_stem |
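Both notions can be demonstrated on division by repeated subtraction, with the invariant checked by assertions on each iteration and the variant (here, the remainder `r`) strictly decreasing to guarantee termination:

```python
def divmod_slow(a, n):
    """Compute (a // n, a % n) by repeated subtraction.
    Invariant: a == q * n + r with r >= 0, true before and after
    every iteration.  Variant: r, which decreases by n > 0 each
    iteration, bounding the number of iterations."""
    assert a >= 0 and n > 0
    q, r = 0, a
    while r >= n:
        assert a == q * n + r and r >= 0      # loop invariant
        old_r = r
        q, r = q + 1, r - n
        assert r < old_r                      # variant strictly decreases
    assert a == q * n + r and 0 <= r < n      # invariant + exit condition
    return q, r
```

At loop exit, the invariant combined with the negated guard (`r < n`) is exactly the specification of quotient and remainder — the standard pattern for proving a loop correct.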
Model-based specification is an approach to formal specification where the system specification is expressed as a system state model. This state model is constructed using well-understood mathematical entities such as sets and functions. System operations are specified by defining how they affect the state of the system model | https://huggingface.co/datasets/fmars/wiki_stem |
In programming language theory, the POPLmark challenge (from "Principles of Programming Languages benchmark", formerly Mechanized Metatheory for the Masses!) (Aydemir, 2005) is a set of benchmarks designed to evaluate the state of automated reasoning (or mechanization) in the metatheory of programming languages, and to stimulate discussion and collaboration among a diverse cross section of the formal methods community. Very loosely speaking, the challenge concerns measuring how well programs may be proven to match a specification of how they are intended to behave (and the many complex issues that this involves). The challenge was initially proposed by the members of the PL club at the University of Pennsylvania, in association with collaborators around the world | https://huggingface.co/datasets/fmars/wiki_stem |
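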
In computer programming, a postcondition is a condition or predicate that must always be true just after the execution of some section of code or after an operation in a formal specification. Postconditions are sometimes tested using assertions within the code itself. Often, postconditions are simply included in the documentation of the affected section of code | https://huggingface.co/datasets/fmars/wiki_stem |
In computer programming, a precondition is a condition or predicate that must always be true just prior to the execution of some section of code or before an operation in a formal specification.
If a precondition is violated, the effect of the section of code becomes undefined and thus may or may not carry out its intended work. Security problems can arise due to incorrect preconditions | https://huggingface.co/datasets/fmars/wiki_stem |
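Pre- and postconditions can be enforced at runtime with assertions. The decorator below is a hypothetical helper (not a standard library facility) that checks a precondition on the arguments and a postcondition on the result:

```python
import functools

def contract(pre=None, post=None):
    """Wrap a function with runtime pre-/postcondition checks."""
    def wrap(f):
        @functools.wraps(f)
        def checked(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), "precondition violated"
            result = f(*args, **kwargs)
            if post is not None:
                assert post(result), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def integer_sqrt(x):
    return int(x ** 0.5)
```

Calling `integer_sqrt(-1)` fails the precondition check before the body runs, making the violation explicit instead of producing undefined behavior downstream.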
Predicate transformer semantics were introduced by Edsger Dijkstra in his seminal paper "Guarded commands, nondeterminacy and formal derivation of programs". They define the semantics of an imperative programming paradigm by assigning to each statement in this language a corresponding predicate transformer: a total function between two predicates on the state space of the statement. In this sense, predicate transformer semantics are a kind of denotational semantics | https://huggingface.co/datasets/fmars/wiki_stem |
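Dijkstra's rule for assignment is wp(x := E, Q) = Q[x := E]: the weakest precondition is the postcondition with E substituted for x. Representing predicates as Python functions over a state dictionary makes this substitution executable (a sketch for illustration; real tools manipulate predicates symbolically):

```python
def wp_assign(var, expr, post):
    """Weakest precondition of `var := expr(state)` w.r.t. `post`:
    evaluate the postcondition in the updated state, which is
    operationally the same as substituting expr for var in post."""
    return lambda s: post({**s, var: expr(s)})

# wp(x := x + 1, x > 10) is equivalent to x + 1 > 10, i.e. x > 9:
pre = wp_assign("x", lambda s: s["x"] + 1, lambda s: s["x"] > 10)
```

Evaluating `pre` on a state with x = 10 yields true (since 11 > 10), while x = 9 yields false — matching the symbolic answer x > 9.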
Predicative programming is the original name of a formal method for program specification and refinement, more recently called a Practical Theory of Programming, invented by Eric Hehner. The central idea is that each specification is a binary (boolean) expression that is true of acceptable computer behaviors and false of unacceptable behaviors. It follows that refinement is just implication | https://huggingface.co/datasets/fmars/wiki_stem |
Process performance qualification protocol is a component of process validation: process qualification. This step is vital in maintaining ongoing production quality by recording and having available for review essential conditions, controls, testing, and expected manufacturing outcome of a production process. The Food and Drug Administration recommends the following criteria be included in a PPQ protocol:
Manufacturing conditions: Operating parameters, equipment limits, and component inputs
What data should be recorded and analyzed
What tests should be performed to ensure quality at each production step
A sampling plan to outline sampling methods both during and between production batches
Analysis methodology that allows for scientific, risk-oriented decision making based on statistical data | https://huggingface.co/datasets/fmars/wiki_stem |
Process qualification is the qualification of manufacturing and production processes to confirm they are able to operate at a certain standard during sustained commercial manufacturing. Data covering critical process parameters must be recorded and analyzed to ensure critical quality attributes can be guaranteed throughout production. This may include testing equipment at maximum operating capacity to show quantity demands can be met | https://huggingface.co/datasets/fmars/wiki_stem |
Promise Theory is a method of analysis suitable for studying any system of interacting components. In the context of information science, Promise Theory offers a methodology for organising and understanding complex systems by modelling voluntary cooperation between individual actors or agents, which make public their 'intentions' to one another in the form of promises. Promise Theory has a philosophical grounding and a mathematical formulation rooted in graph theory and set theory | https://huggingface.co/datasets/fmars/wiki_stem |
Proof-carrying code (PCC) is a software mechanism that allows a host system to verify properties about an application via a formal proof that accompanies the application's executable code. The host system can quickly verify the validity of the proof, and it can compare the conclusions of the proof to its own security policy to determine whether the application is safe to execute. This can be particularly useful in ensuring memory safety (i | https://huggingface.co/datasets/fmars/wiki_stem |
The QED manifesto was a proposal for a computer-based database of all mathematical knowledge, strictly formalized and with all proofs having been checked automatically. (Q. E | https://huggingface.co/datasets/fmars/wiki_stem |
rCOS stands for refinement of object and component systems. It is a formal method providing component-based model-driven software development.
Overview
rCOS was originally developed by He Jifeng, Zhiming Liu and Xiaoshan Li at UNU-IIST in Macau, and consists of a unified multi-view modeling notation with a theory of relational semantics and graph-based operational semantics, a refinement calculus and tool support for model construction, model analysis and verification, and model transformations | https://huggingface.co/datasets/fmars/wiki_stem |
The refinement calculus is a formalized approach to stepwise refinement for program construction. The required behaviour of the final executable program is specified as an abstract and perhaps non-executable "program", which is then refined by a series of correctness-preserving transformations into an efficiently executable program. Proponents include Ralph-Johan Back, who originated the approach in his 1978 PhD thesis On the Correctness of Refinement Steps in Program Development, and Carroll Morgan, especially with his book Programming from Specifications (Prentice Hall, 2nd edition, 1994, ISBN 0-13-123274-6) | https://huggingface.co/datasets/fmars/wiki_stem |
Regulated rewriting is a specific area of formal languages studying grammatical systems which are able to take some kind of control over the production applied in a derivation step. For this reason, the grammatical systems studied in Regulated Rewriting theory are also called "Grammars with Controlled Derivations". Among such grammars are:
Matrix Grammars
Basic concepts
Definition
A Matrix Grammar, MG, is a four-tuple G = (N, T, M, S) where
1 | https://huggingface.co/datasets/fmars/wiki_stem |
Retiming is the technique of moving the structural location of latches or registers in a digital circuit to improve its performance, area, and/or power characteristics in such a way that preserves its functional behavior at its outputs. Retiming was first described by Charles E. Leiserson and James B | https://huggingface.co/datasets/fmars/wiki_stem |
Runtime verification is a computing system analysis and execution approach based on extracting information from a running system and using it to detect and possibly react to observed behaviors satisfying or violating certain properties. Some very particular properties, such as datarace and deadlock freedom, are typically desired to be satisfied by all systems and may be best implemented algorithmically. Other properties can be more conveniently captured as formal specifications | https://huggingface.co/datasets/fmars/wiki_stem |
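A runtime monitor for a simple resource-usage property — "no read after close, and every opened file is eventually closed" — can be written by hand as a small state machine over the observed event trace. This is an illustrative sketch; real runtime-verification tools typically synthesize such monitors automatically from temporal-logic specifications:

```python
class FileProtocolMonitor:
    """Observe (event, name) pairs from a running system and record
    violations of the file-usage property."""
    def __init__(self):
        self.open_files = set()
        self.violations = []

    def observe(self, event, name):
        if event == "open":
            self.open_files.add(name)
        elif event == "read" and name not in self.open_files:
            self.violations.append(f"read of closed file {name}")
        elif event == "close":
            self.open_files.discard(name)

    def finish(self):
        # At end of trace, any still-open file was never closed.
        for name in sorted(self.open_files):
            self.violations.append(f"{name} never closed")
        return self.violations
```

Feeding the monitor the trace `open a, read a, close a, read a` flags the final read, showing how observed behavior is checked against the property as the system runs.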
In computer science and formal methods, a SAT solver is a computer program which aims to solve the Boolean satisfiability problem. On input a formula over Boolean variables, such as "(x or y) and (x or not y)", a SAT solver outputs whether the formula is satisfiable, meaning that there are possible values of x and y which make the formula true, or unsatisfiable, meaning that there are no such values of x and y. In this case, the formula is satisfiable when x is true, so the solver should return "satisfiable" | https://huggingface.co/datasets/fmars/wiki_stem |
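The example formula can be checked by the most naive possible solver: enumerate all assignments. The sketch below uses DIMACS-style clause encoding (a positive integer i for variable i, negative for its negation); production solvers use DPLL/CDCL search rather than enumerating all 2^n assignments:

```python
from itertools import product

def solve_sat(clauses, n_vars):
    """Brute-force SAT solver for CNF formulas.  Returns a satisfying
    assignment as a dict {var: bool}, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A CNF formula holds iff some literal in every clause is true.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x or y) and (x or not y), with x = variable 1, y = variable 2:
model = solve_sat([[1, 2], [1, -2]], 2)
```

As the text notes, the formula is satisfiable when x is true, and indeed every model found has `model[1]` true; by contrast `solve_sat([[1], [-1]], 1)` returns None, i.e. "unsatisfiable".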
In computer science and mathematical logic, satisfiability modulo theories (SMT) is the problem of determining whether a mathematical formula is satisfiable. It generalizes the Boolean satisfiability problem (SAT) to more complex formulas involving real numbers, integers, and/or various data structures such as lists, arrays, bit vectors, and strings. The name is derived from the fact that these expressions are interpreted within ("modulo") a certain formal theory in first-order logic with equality (often disallowing quantifiers) | https://huggingface.co/datasets/fmars/wiki_stem |
Semantic spacetime is a theoretical framework for agent-based modelling of spacetime, based on Promise Theory. It is relevant both as a model of computer science and as an alternative network-based formulation of physics in some areas.
Semantic Spacetime was introduced by physicist and computer scientist Mark Burgess, in a series of papers called Spacetimes with Semantics, as a practical alternative for describing space and time, initially for computer science | https://huggingface.co/datasets/fmars/wiki_stem |
In programming language theory, semantics is the rigorous mathematical study of the meaning of programming languages. Semantics assigns computational meaning to valid strings in a programming language syntax. It is closely related to, and often crosses over with, the semantics of mathematical proofs | https://huggingface.co/datasets/fmars/wiki_stem |
SIGNAL is a programming language based on synchronized data-flow (flows + synchronization): a process is a set of equations on elementary flows describing both data and control. The SIGNAL formal model provides the capability to describe systems with several clocks (polychronous systems) as relational specifications. Relations are useful as partial specifications and as specifications of non-deterministic devices (for instance a non-deterministic bus) or external processes (for instance an unsafe car driver) | https://huggingface.co/datasets/fmars/wiki_stem |
In computer science and computer programming, a function f is said to be strict if, when applied to a non-terminating expression, it also fails to terminate. A strict function in the denotational semantics of programming languages is a function f where f(⊥) = ⊥. The entity ⊥, called bottom, denotes an expression that does not return a normal value, either because it loops endlessly or because it aborts due to an error such as division by zero | https://huggingface.co/datasets/fmars/wiki_stem |
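The distinction can be made concrete in Python, which evaluates function arguments eagerly (strictly). Passing thunks — zero-argument lambdas — simulates a non-strict function, so that a bottom-like argument in an unused branch does no harm (here ⊥ is modelled as a raising function, standing in for non-termination):

```python
def strict_if(cond, then_val, else_val):
    """Python evaluates arguments before the call, so this 'if' is
    strict in all three arguments: bottom anywhere dooms the call,
    even in the branch that is not selected."""
    return then_val if cond else else_val

def lazy_if(cond, then_thunk, else_thunk):
    """Simulating a non-strict function: arguments are thunks, and
    only the selected branch is ever evaluated."""
    return then_thunk() if cond else else_thunk()

def bottom():
    raise RuntimeError("⊥: no normal value")
```

`lazy_if(True, lambda: 1, bottom)` returns 1, while `strict_if(True, 1, bottom())` fails before `strict_if` even runs — exactly the f(⊥) = ⊥ behavior of a strict function.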
In computer science, a simulation is a computation of the execution of some appropriately modelled state-transition system. Typically this process models the complete state of the system at individual points in a discrete linear time frame, computing each state sequentially from its predecessor. Models for computer programs or VLSI logic designs can be very easily simulated, as they often have an operational semantics which can be used directly for simulation | https://huggingface.co/datasets/fmars/wiki_stem |
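The idea of computing each state sequentially from its predecessor can be sketched in a few lines of Python (the function name `simulate` is our own):

```python
def simulate(step, state, n):
    """Discrete-time simulation sketch: compute each state of a
    state-transition system sequentially from its predecessor."""
    trace = [state]
    for _ in range(n):
        state = step(state)   # the transition function plays the role of
        trace.append(state)   # an operational semantics used directly
    return trace

# Toy state-transition system: a counter modulo 3.
trace = simulate(lambda s: (s + 1) % 3, 0, 5)
assert trace == [0, 1, 2, 0, 1, 2]
```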
In software engineering, syntactic methods are techniques for developing correct software programs. The techniques attempt to detect, and thus prevent, certain kinds of defects (bugs) by examining the structure of the code being produced at its syntactic rather than semantic level.
Usage
Syntactic methods are often used when formal methods are not an option, and are often a simpler and, more importantly, cheaper alternative | https://huggingface.co/datasets/fmars/wiki_stem |
UML state machine, also known as UML statechart, is an extension of the mathematical concept of a finite automaton in computer science applications as expressed in the Unified Modeling Language (UML) notation.
The concepts behind it are about organizing the way a device, computer program, or other (often technical) process works such that an entity or each of its sub-entities is always in exactly one of a number of possible states and where there are well-defined conditional transitions between these states.
UML state machine is an object-based variant of Harel statechart, adapted and extended by UML | https://huggingface.co/datasets/fmars/wiki_stem |
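A hypothetical sketch of the underlying idea — an entity is always in exactly one of a number of states, with well-defined transitions between them; a real UML state machine adds hierarchy, guards, entry/exit actions, and more on top of this flat transition table:

```python
# Flat finite-state machine sketch; state and event names are invented.
transitions = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def fire(state, event):
    # Events with no defined transition leave the state unchanged.
    return transitions.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = fire(state, event)
assert state == "idle"   # the machine is back in its initial state
```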
In software project management, software testing, and software engineering, verification and validation (V&V) is the process of checking that a software system meets specifications and requirements so that it fulfills its intended purpose. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development lifecycle | https://huggingface.co/datasets/fmars/wiki_stem |
Verification and validation of computer simulation models is conducted during the development of a simulation model with the ultimate goal of producing an accurate and credible model. "Simulation models are increasingly being used to solve problems and to aid in decision-making. The developers and users of these models, the decision makers using information obtained from the results of these models, and the individuals affected by decisions based on such models are all rightly concerned with whether a model and its results are "correct" | https://huggingface.co/datasets/fmars/wiki_stem |
A verification condition generator is a common sub-component of an automated program verifier that synthesizes formal verification conditions by analyzing a program's source code using a method based upon Hoare logic. VC generators may require that the source code contains logical annotations provided by the programmer or the compiler such as pre/post-conditions and loop invariants (a form of proof-carrying code). VC generators are often coupled with SMT solvers in the backend of a program verifier | https://huggingface.co/datasets/fmars/wiki_stem |
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries | https://huggingface.co/datasets/fmars/wiki_stem |
In mathematical optimization, the Ackley function is a non-convex function used as a performance test problem for optimization algorithms. It was proposed by David Ackley in his 1987 PhD dissertation. On a 2-dimensional domain it is defined by: f(x, y) = −20 exp[−0.2 √(0.5(x² + y²))] − exp[0.5(cos 2πx + cos 2πy)] + e + 20 | https://huggingface.co/datasets/fmars/wiki_stem |
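The standard 2-D Ackley function can be checked with a short Python sketch (the function name `ackley` is our own); its global minimum is f(0, 0) = 0:

```python
import math

def ackley(x, y):
    # f(x, y) = -20 exp(-0.2 sqrt(0.5 (x^2 + y^2)))
    #           - exp(0.5 (cos 2*pi*x + cos 2*pi*y)) + e + 20
    term1 = -20.0 * math.exp(-0.2 * math.sqrt(0.5 * (x * x + y * y)))
    term2 = -math.exp(0.5 * (math.cos(2 * math.pi * x)
                             + math.cos(2 * math.pi * y)))
    return term1 + term2 + math.e + 20.0

# The global minimum sits at the origin; nearby points score worse,
# which is what makes the function a useful optimizer stress test.
assert abs(ackley(0.0, 0.0)) < 1e-9
assert ackley(1.0, 1.0) > ackley(0.0, 0.0)
```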
In mathematics and theoretical computer science, analysis of Boolean functions is the study of real-valued functions on {0, 1}^n or {−1, 1}^n (such functions are sometimes known as pseudo-Boolean functions) from a spectral perspective. The functions studied are often, but not always, Boolean-valued, making them Boolean functions. The area has found many applications in combinatorics, social choice theory, random graphs, and theoretical computer science, especially in hardness of approximation, property testing, and PAC learning | https://huggingface.co/datasets/fmars/wiki_stem |
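A small illustrative sketch of the spectral view (all names are our own): the Fourier coefficient of f on a coordinate subset S is the average of f(x)·∏_{i∈S} x_i over {−1, 1}^n, computed here by direct enumeration.

```python
from itertools import product

def fourier_coefficients(f, n):
    """Fourier expansion of f : {-1,1}^n -> R, one coefficient per
    subset S of coordinates, by brute-force averaging."""
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for mask in product([0, 1], repeat=n):
        S = tuple(i for i in range(n) if mask[i])
        total = 0.0
        for x in points:
            chi = 1
            for i in S:
                chi *= x[i]          # the character (parity) chi_S(x)
            total += f(x) * chi
        coeffs[S] = total / len(points)
    return coeffs

# Majority of 3 bits has expansion (x1 + x2 + x3)/2 - (x1 x2 x3)/2.
maj = lambda x: 1 if sum(x) > 0 else -1
c = fourier_coefficients(maj, 3)
assert abs(c[(0,)] - 0.5) < 1e-12
assert abs(c[(0, 1, 2)] + 0.5) < 1e-12
```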
Applied mathematics is the application of mathematical methods by different fields such as physics, engineering, medicine, biology, finance, business, computer science, and industry. Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term "applied mathematics" also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models | https://huggingface.co/datasets/fmars/wiki_stem |
In (unconstrained) mathematical optimization, a backtracking line search is a line search method to determine the amount to move along a given search direction. Its use requires that the objective function is differentiable and that its gradient is known.
The method involves starting with a relatively large estimate of the step size for movement along the line search direction, and iteratively shrinking the step size (i | https://huggingface.co/datasets/fmars/wiki_stem |
Basis pursuit is the mathematical optimization problem of the form min_x ‖x‖₁ subject to y = Ax,
where x is an N-dimensional solution vector (signal), y is an M-dimensional vector of observations (measurements), A is an M × N transform matrix (usually a measurement matrix) and M < N.
It is usually applied in cases where there is an underdetermined system of linear equations y = Ax that must be exactly satisfied, and the sparsest solution in the L1 sense is desired.
When it is desirable to trade off exact equality of Ax and y in exchange for a sparser x, basis pursuit denoising is preferred | https://huggingface.co/datasets/fmars/wiki_stem |
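A toy Python sketch of the problem statement (our own construction — real basis pursuit solvers use linear programming, not grid search): among candidate vectors that satisfy y = Ax exactly, pick the one with smallest L1 norm.

```python
from itertools import product

def basis_pursuit_grid(A, y, grid, tol=1e-9):
    """Brute-force toy: search grid candidates x with y = A x exactly
    and return the one of minimum L1 norm."""
    best = None
    for x in product(grid, repeat=len(A[0])):
        residual = [sum(a * xi for a, xi in zip(row, x)) - yi
                    for row, yi in zip(A, y)]
        if all(abs(r) < tol for r in residual):
            l1 = sum(abs(xi) for xi in x)
            if best is None or l1 < best[0]:
                best = (l1, x)
    return best[1] if best else None

# Underdetermined 1x2 system x1 + 2*x2 = 2: the minimum-L1 solution
# is the sparse one, (0, 1), rather than e.g. (2, 0).
A, y = [[1.0, 2.0]], [2.0]
grid = [i / 2 for i in range(-8, 9)]    # -4.0 .. 4.0 in steps of 0.5
assert basis_pursuit_grid(A, y, grid) == (0.0, 1.0)
```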
Bayesian efficiency is an analog of Pareto efficiency for situations in which there is incomplete information. Under Pareto efficiency, an allocation of a resource is Pareto efficient if there is no other allocation of that resource that makes no one worse off while making some agents strictly better off. A limitation with the concept of Pareto efficiency is that it assumes that knowledge about other market participants is available to all participants, in that every player knows the payoffs and strategies available to other players so as to have complete information | https://huggingface.co/datasets/fmars/wiki_stem |
Bilevel optimization is a special kind of optimization where one problem is embedded (nested) within another. The outer optimization task is commonly referred to as the upper-level optimization task, and the inner optimization task is commonly referred to as the lower-level optimization task. These problems involve two kinds of variables, referred to as the upper-level variables and the lower-level variables | https://huggingface.co/datasets/fmars/wiki_stem |
A binary constraint, in mathematical optimization, is a constraint that involves exactly two variables.
For example, consider the n-queens problem, where the goal is to place n chess queens on an n-by-n chessboard such that none of the queens can attack each other (horizontally, vertically, or diagonally). The formal set of constraints is therefore "Queen 1 can't attack Queen 2", "Queen 1 can't attack Queen 3", and so on between all pairs of queens | https://huggingface.co/datasets/fmars/wiki_stem |
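A short Python sketch of this binary-constraint formulation (names are our own): each constraint mentions exactly two queens, and a brute-force search just checks every pair.

```python
from itertools import combinations, permutations

def attacks(col_i, row_i, col_j, row_j):
    """The binary constraint between two queens: they clash if they
    share a row, a column, or a diagonal."""
    return (row_i == row_j or col_i == col_j
            or abs(row_i - row_j) == abs(col_i - col_j))

def solve_n_queens(n):
    """Brute force: queen i sits in column i; choose one row per queen
    so that every pairwise (binary) no-attack constraint holds."""
    for rows in permutations(range(n)):   # distinct rows come for free
        if all(not attacks(i, rows[i], j, rows[j])
               for i, j in combinations(range(n), 2)):
            return rows
    return None

assert solve_n_queens(4) is not None   # solvable, e.g. rows (1, 3, 0, 2)
assert solve_n_queens(3) is None       # no solution on a 3x3 board
```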
In mathematics, the cake number, denoted by Cn, is the maximum of the number of regions into which a 3-dimensional cube can be partitioned by exactly n planes. The cake number is so-called because one may imagine each partition of the cube by a plane as a slice made by a knife through a cube-shaped cake. It is the 3D analogue of the lazy caterer's sequence | https://huggingface.co/datasets/fmars/wiki_stem |
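The cake numbers have the closed form Cₙ = C(n,3) + C(n,2) + C(n,1) + C(n,0) = (n³ + 5n + 6)/6, which a few lines of Python can verify (the function name is our own):

```python
from math import comb

def cake_number(n):
    # C_n = C(n,3) + C(n,2) + C(n,1) + C(n,0) = (n^3 + 5n + 6) / 6:
    # the maximum number of 3D regions produced by n planar cuts.
    return comb(n, 3) + comb(n, 2) + comb(n, 1) + comb(n, 0)

# First few cake numbers: 1, 2, 4, 8, 15, 26, 42, ...
assert [cake_number(n) for n in range(7)] == [1, 2, 4, 8, 15, 26, 42]
assert all(cake_number(n) == (n**3 + 5*n + 6) // 6 for n in range(50))
```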
In statistics, a central composite design is an experimental design, useful in response surface methodology, for building a second order (quadratic) model for the response variable without needing to use a complete three-level factorial experiment.
After the designed experiment is performed, linear regression is used, sometimes iteratively, to obtain results. Coded variables are often used when constructing this design | https://huggingface.co/datasets/fmars/wiki_stem |
In mathematics, Clarke's generalized Jacobian is a generalization of the Jacobian matrix of a smooth function to non-smooth functions. It was introduced by Clarke (1983).
References
Clarke, Frank H | https://huggingface.co/datasets/fmars/wiki_stem |
A complementarity problem is a type of mathematical optimization problem. It is the problem of optimizing (minimizing or maximizing) a function of two vector variables subject to certain requirements (constraints) which include: that the inner product of the two vectors must equal zero, i. e | https://huggingface.co/datasets/fmars/wiki_stem |
Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible | https://huggingface.co/datasets/fmars/wiki_stem |
In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied | https://huggingface.co/datasets/fmars/wiki_stem |
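One common way to handle a soft constraint — sketched here in Python with invented names, over a crude grid of candidates — is to fold the constraint violation into the objective as a penalty term:

```python
def penalized_min(f, violation, xs, mu=1e3):
    """Soft-constraint sketch: minimize f(x) plus a quadratic penalty
    on the constraint violation, over candidate points xs."""
    return min(xs, key=lambda x: f(x) + mu * violation(x) ** 2)

f = lambda x: (x - 2.0) ** 2             # objective to minimize
violation = lambda x: max(0.0, x - 1.0)  # constraint x <= 1, softened
xs = [i / 100 for i in range(-300, 301)]
x_star = penalized_min(f, violation, xs)
# The unconstrained minimum is x = 2, but the penalty pushes the
# solution onto the boundary of the feasible set, x = 1.
assert abs(x_star - 1.0) < 0.05
```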
In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints—primarily equality constraints, inequality constraints, and integer constraints. The set of candidate solutions that satisfy all constraints is called the feasible set | https://huggingface.co/datasets/fmars/wiki_stem |
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard. Convex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, finance, statistics (optimal experimental design), and structural optimization, where the approximation concept has proven to be efficient | https://huggingface.co/datasets/fmars/wiki_stem |
In mathematics and economics, a corner solution is a special solution to an agent's maximization problem in which the quantity of one of the arguments in the maximized function is zero. In non-technical terms, a corner solution is when the chooser is either unwilling or unable to make a trade-off between goods.
In economics
In the context of economics, the corner solution is best characterised by the case where the highest attainable indifference curve is not tangential to the budget line; in this scenario the consumer puts their entire budget into purchasing as much of one of the goods as possible and none of any other | https://huggingface.co/datasets/fmars/wiki_stem |
The dead-end elimination algorithm (DEE) is a method for minimizing a function over a discrete set of independent variables. The basic idea is to identify "dead ends", i. e | https://huggingface.co/datasets/fmars/wiki_stem |
Deterministic global optimization is a branch of numerical optimization which focuses on finding the global solutions of an optimization problem whilst providing theoretical guarantees that the reported solution is indeed the global one, within some predefined tolerance. The term "deterministic global optimization" typically refers to complete or rigorous (see below) optimization methods. Rigorous methods converge to the global optimum in finite time | https://huggingface.co/datasets/fmars/wiki_stem |
Discrete optimization is a branch of optimization in applied mathematics and computer science.
Scope
As opposed to continuous optimization, some or all of the variables used in a discrete mathematical program are restricted to be discrete variables—that is, to assume only a discrete set of values, such as the integers.
Branches
Three notable branches of discrete optimization are:
combinatorial optimization, which refers to problems on graphs, matroids and other discrete structures
integer programming
constraint programming
These branches are all closely intertwined, however, since many combinatorial optimization problems
can be modeled as integer programs (e | https://huggingface.co/datasets/fmars/wiki_stem |
Dispersive flies optimisation (DFO) is a bare-bones swarm intelligence algorithm which is inspired by the swarming behaviour of flies hovering over food sources. DFO is a simple optimiser which works by iteratively trying to improve a candidate solution with regard to a numerical measure that is calculated by a fitness function. Each member of the population, a fly or an agent, holds a candidate solution whose suitability can be evaluated by their fitness value | https://huggingface.co/datasets/fmars/wiki_stem |
Distributed constraint optimization (DCOP or DisCOP) is the distributed analogue to constraint optimization. A DCOP is a problem in which a group of agents must distributedly choose values for a set of variables such that the cost of a set of constraints over the variables is minimized.
Distributed Constraint Satisfaction is a framework for describing a problem in terms of constraints that are known and enforced by distinct participants (agents) | https://huggingface.co/datasets/fmars/wiki_stem |
In mathematics, a knee of a curve (or elbow of a curve) is a point where the curve visibly bends, specifically from high slope to low slope (flat or close to flat), or in the other direction. This is particularly used in optimization, where a knee point is the optimum point for some decision, for example when there is an increasing function and a trade-off between the benefit (vertical y axis) and the cost (horizontal x axis): the knee is where the benefit is no longer increasing rapidly, and is no longer worth the cost of further increases – a cutoff point of diminishing returns.
In heuristic use, the term may be used informally, and a knee point identified visually, but in more formal use an explicit objective function is used, and depends on the particular optimization problem | https://huggingface.co/datasets/fmars/wiki_stem |
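One simple explicit formulation (our own sketch, in the spirit of distance-based knee detectors such as Kneedle): take the knee to be the point farthest from the chord joining the curve's endpoints.

```python
def knee_point(xs, ys):
    """Heuristic: index of the point farthest from the straight line
    joining the curve's endpoints."""
    x0, y0, x1, y1 = xs[0], ys[0], xs[-1], ys[-1]
    def dist(x, y):
        # proportional to the distance from (x, y) to the chord
        return abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0))
    return max(range(len(xs)), key=lambda i: dist(xs[i], ys[i]))

# Diminishing-returns curve y = sqrt(x): benefit flattens as cost grows,
# so the knee lies strictly between the endpoints.
xs = [i / 10 for i in range(11)]
ys = [x ** 0.5 for x in xs]
i = knee_point(xs, ys)
assert 0 < xs[i] < 1
```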
Predictive genomics is at the intersection of multiple disciplines: predictive medicine, personal genomics and translational bioinformatics. Specifically, predictive genomics deals with the future phenotypic outcomes via prediction in areas such as complex multifactorial diseases in humans. To date, the success of predictive genomics has been dependent on the genetic framework underlying these applications, typically explored in genome-wide association (GWA) studies | https://huggingface.co/datasets/fmars/wiki_stem |
The Protein Information Resource (PIR), located at Georgetown University Medical Center, is an integrated public bioinformatics resource to support genomic and proteomic research and scientific studies. It contains protein sequence databases.
History
PIR was established in 1984 by the National Biomedical Research Foundation as a resource to assist researchers and customers in the identification and interpretation of protein sequence information. Prior to that, the foundation compiled the first comprehensive collection of macromolecular sequences in the Atlas of Protein Sequence and Structure, published from 1964 to 1974 under the editorship of Margaret Dayhoff | https://huggingface.co/datasets/fmars/wiki_stem |
Proteogenomics is a field of biological research that utilizes a combination of proteomics, genomics, and transcriptomics to aid in the discovery and identification of peptides. Proteogenomics is used to identify new peptides by comparing MS/MS spectra against a protein database that has been derived from genomic and transcriptomic information. Proteogenomics often refers to studies that use proteomic information, often derived from mass spectrometry, to improve gene annotations | https://huggingface.co/datasets/fmars/wiki_stem |
Proteomics is the large-scale study of proteins. Proteins are vital parts of living organisms, with many functions such as the formation of structural fibers of muscle tissue, enzymatic digestion of food, or synthesis and replication of DNA. In addition, other kinds of proteins include antibodies that protect an organism from infection, and hormones that send important signals throughout the body | https://huggingface.co/datasets/fmars/wiki_stem |
Psychosocial genomics (PG) is a field of research first proposed by Ernest L. Rossi in 2002. PG examines the modulation of gene expression in response to psychological, social and cultural experiences | https://huggingface.co/datasets/fmars/wiki_stem |
The term radiogenomics is used in two contexts: either to refer to the study of genetic variation associated with response to radiation (radiation genomics) or to refer to the correlation between cancer imaging features and gene expression (imaging genomics).
Radiation genomics
In radiation genomics, radiogenomics is used to refer to the study of genetic variation associated with response to radiation therapy. Genetic variation, such as single nucleotide polymorphisms, is studied in relation to a cancer patient's risk of developing toxicity following radiation therapy | https://huggingface.co/datasets/fmars/wiki_stem |
Reduced representation bisulfite sequencing (RRBS) is an efficient and high-throughput technique for analyzing the genome-wide methylation profiles on a single nucleotide level. It combines restriction enzymes and bisulfite sequencing to enrich for areas of the genome with a high CpG content. Due to the high cost and depth of sequencing to analyze methylation status in the entire genome, Meissner et al | https://huggingface.co/datasets/fmars/wiki_stem |
A reference genome (also known as a reference assembly) is a digital nucleic acid sequence database, assembled by scientists as a representative example of the set of genes in one idealized individual organism of a species. As they are assembled from the sequencing of DNA from a number of individual donors, reference genomes do not accurately represent the set of genes of any single individual organism. Instead a reference provides a haploid mosaic of different DNA sequences from each donor | https://huggingface.co/datasets/fmars/wiki_stem |