A specification language is a formal language in computer science used during systems analysis, requirements analysis, and systems design to describe a system at a much higher level than a programming language, which is used to produce the executable code for a system.

Overview
Specification languages are generally not directly executed. They are meant to describe the what, not the how.
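The gap between the what and the how can be made concrete with a small sketch (illustrative Python, not tied to any particular specification language): a sortedness property serves as an executable specification, and any implementation satisfying it is acceptable.

```python
# Executable "what": a property any correct sorting routine must satisfy.
# (Illustrative sketch; real specification languages state such properties
# declaratively rather than as runnable checks.)

def satisfies_sort_spec(inp, out):
    # the output is ordered ...
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    # ... and is a permutation of the input
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

def my_sort(xs):
    # one possible "how"; any implementation meeting the spec is acceptable
    return sorted(xs)
```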
https://huggingface.co/datasets/fmars/wiki_stem
Algebraic modeling languages (AML) are high-level computer programming languages for describing and solving high-complexity problems in large-scale mathematical computation (i.e., large-scale optimization problems).
The ANSI/ISO C Specification Language (ACSL) is a specification language for C programs, using Hoare-style pre- and postconditions and invariants, that follows the design-by-contract paradigm. Specifications are written as annotation comments in the C program, which can therefore be compiled with any C compiler. The current verification tool for ACSL is Frama-C.
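The underlying design-by-contract idea can be sketched with runtime assertions in Python (a hypothetical analogy only, not ACSL syntax; in ACSL the contract lives in `/*@ requires ... ensures ... */` comments checked by tools such as Frama-C):

```python
# Design-by-contract analogy in plain Python (hypothetical example, NOT ACSL).

def abs_value(x: int) -> int:
    # precondition: the argument must be an int
    assert isinstance(x, int)
    result = -x if x < 0 else x
    # postcondition: the result is non-negative
    assert result >= 0
    return result
```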
Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a business process model. Originally developed by the Business Process Management Initiative (BPMI), BPMN has been maintained by the Object Management Group (OMG) since the two organizations merged in 2005. Version 2
CMS Pipelines is a feature of the VM/CMS operating system that allows the user to create and use a pipeline. The programs in a pipeline operate on a sequential stream of records. A program writes records that are read by the next program in the pipeline
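The record-stream idea can be sketched with Python generators, where each stage reads the stream produced by the previous one (an illustrative analogy only; CMS Pipelines has its own stage syntax and device stages):

```python
# Pipeline sketch: stages composed so records flow from one to the next.

def literal(records):
    # source stage: emit a fixed stream of records
    yield from records

def locate(records, needle):
    # selection stage: keep only records containing the needle
    for rec in records:
        if needle in rec:
            yield rec

def upper(records):
    # transformation stage: rewrite each record
    for rec in records:
        yield rec.upper()

# stages composed right-to-left; records flow through left-to-right
pipeline = upper(locate(literal(["alpha", "beta", "gamma"]), "et"))
result = list(pipeline)
```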
The CO-OPN (Concurrent Object-Oriented Petri Nets) specification language is based on both the algebraic specification and algebraic Petri net formalisms. The former formalism represents the data-structure aspects, while the latter captures the behavioral and concurrent aspects of systems. In order to deal with large specifications, some structuring capabilities have been introduced.
ERIL (Entity-Relationship and Inheritance Language) is a visual language for representing the data structure of a computer system. As its name suggests, ERIL is based on entity-relationship diagrams and class diagrams. ERIL combines the relational and object-oriented approaches to data modeling
A framework-specific modeling language (FSML) is a kind of domain-specific modeling language which is designed for an object-oriented application framework. FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices
Franca Interface Definition Language (Franca IDL) is a formally defined, text-based interface description language. It is part of the Franca framework, a framework for the definition and transformation of software interfaces. Franca applies model transformation techniques to interoperate with various other interface description languages.
Fundamental modeling concepts (FMC) provides a framework to describe software-intensive systems. It strongly emphasizes communication about software-intensive systems by using a semi-formal graphical notation that can easily be understood.

Introduction
FMC distinguishes three perspectives from which to look at a software system: the structure of the system, the processes in the system, and the value domains of the system. FMC defines a dedicated diagram type for each perspective.
HOOD (Hierarchic Object-Oriented Design) is a detailed software design method. It is based on hierarchical decomposition of a software problem. It comprises textual and graphical representations of the design
i* (pronounced "i star") or the i* framework is a modeling language suitable for an early phase of system modeling, used to understand the problem domain. The i* modeling language allows modeling of both as-is and to-be situations. The name i* refers to the notion of distributed intentionality, which underlies the framework.
The Interaction Flow Modeling Language (IFML) is a standardized modeling language in the field of software engineering. IFML includes a set of graphic notations to create visual models of user interactions and front-end behavior in software systems. The Interaction Flow Modeling Language was developed in 2012 and 2013 under the lead of WebRatio and was inspired by the WebML notation, as well as by a few other experiences in the Web modeling field
An interface description language or interface definition language (IDL), is a generic term for a language that lets a program or object written in one language communicate with another program written in an unknown language. IDLs describe an interface in a language-independent way, enabling communication between software components that do not share one language, for example, between those written in C++ and those written in Java. IDLs are commonly used in remote procedure call software
LISA (Language for Instruction Set Architectures) is a language to describe the instruction set architecture of a processor. LISA captures the information required to generate software tools (compiler, assembler, instruction set simulator, etc.).
Little b is a domain-specific programming language, more specifically, a modeling language, designed to build modular mathematical models of biological systems. It was designed and authored by Aneil Mallavarapu. Little b is being developed in the Virtual Cell Program at Harvard Medical School, headed by mathematician Jeremy Gunawardena
A man–machine language (MML) is a specification language. MMLs are typically defined to standardize the interfaces for managing a telecommunications or network device from a console. ITU-T Z
A model transformation language in systems and software engineering is a language intended specifically for model transformation.

Overview
The notion of model transformation is central to model-driven development. A model transformation, which is essentially a program which operates on models, can be written in a general-purpose programming language, such as Java.
A modeling language is any artificial language that can be used to express data, information, knowledge, or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure.

Overview
A modeling language can be graphical or textual.
Topology and Orchestration Specification for Cloud Applications (TOSCA) is an OASIS standard language to describe a topology of cloud-based web services, their components, relationships, and the processes that manage them. The TOSCA standard includes the specification of a file archive format called CSAR.

History
On 16 January 2014, the OASIS TOSCA Technical Committee approved TOSCA 1
Object process methodology (OPM) is a conceptual modeling language and methodology for capturing knowledge and designing systems, specified as ISO/PAS 19450. Based on a minimal universal ontology of stateful objects and processes that transform them, OPM can be used to formally specify the function, structure, and behavior of artificial and natural systems in a large variety of domains. OPM was conceived and developed by Dov Dori
Ontology Grounded Metalanguage (OGML) is a metalanguage like MOF. The goal of OGML is to tackle the difficulties of MOF: linear modeling architecture, ambiguous constructs and incomprehensible/unclear architecture. OGML provides a nested modeling architecture with three fixed layers (models, languages and metalanguage)
PetriScript is a modeling language for Petri nets, designed by Alexandre Hamez and Xavier Renault. The CPN-AMI platform provides many tools to work on Petri nets, such as verification and model-checking tools. Originally, simple Petri nets were created through graphic design, but research conducted internally at LIP6 revealed a need to automate such tasks.
Tefkat is a model transformation language and a model transformation engine. The language is based on F-logic and the theory of stratified logic programs. The engine is an Eclipse plug-in for the Eclipse Modeling Framework (EMF)
TLA+ is a formal specification language developed by Leslie Lamport. It is used for designing, modelling, documentation, and verification of programs, especially concurrent systems and distributed systems. TLA+ is considered to be exhaustively-testable pseudocode, and its use likened to drawing blueprints for software systems; TLA is an acronym for Temporal Logic of Actions
In mathematical logic, an uninterpreted function or function symbol is one that has no other property than its name and n-ary form. Function symbols are used, together with constants and variables, to form terms. The theory of uninterpreted functions is also sometimes called the free theory, because it is freely generated, and thus a free object, or the empty theory, being the theory having an empty set of sentences (in analogy to an initial algebra)
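The only fact available about an uninterpreted function symbol is congruence: equal arguments yield equal results. A minimal derivation in the free theory illustrates this:

```latex
% Congruence: x = y \implies f(x) = f(y)
\[
  a = b,\quad b = c \;\vdash\; f(f(a)) = f(f(c))
\]
% since a = c by transitivity, f(a) = f(c) by congruence,
% and hence f(f(a)) = f(f(c)) by congruence again.
```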
Web IDL is an interface description language (IDL) format for describing application programming interfaces (APIs) that are intended to be implemented in web browsers. Its adoption was motivated by the desire to improve the interoperability of web programming interfaces by specifying how languages such as ECMAScript should bind these interfaces.

Description
Web IDL is an IDL variant with a number of features that allow one to more easily describe the behavior of common script objects in a web context.
WebML (Web Modeling Language) is a visual notation and a methodology for designing complex data-intensive Web applications. It provides graphical, yet formal, specifications, embodied in a complete design process, which can be assisted by visual design tools. In 2013, WebML was extended to cover a wider spectrum of front-end interfaces, resulting in the Interaction Flow Modeling Language (IFML), adopted as a standard by the Object Management Group (OMG).
In library and information science, cataloging (US) or cataloguing (UK) is the process of creating metadata representing information resources, such as books, sound recordings, moving images, etc. Cataloging provides information such as author's names, titles, and subject terms that describe resources, typically through the creation of bibliographic records. The records serve as surrogates for the stored information resources
The Bibliotheca Hagiographica Latina (BHL) is a catalogue of Latin hagiographic materials, including ancient literary works on the saints' lives, the translations of their relics, and their miracles, arranged alphabetically by saint. The listings include manuscripts, incipits, and printed editions. The first edition (1898-1901) and supplement (1911) were edited by the Bollandists, which included the Jesuit scholar Hippolyte Delehaye
The Bibliotheca Hagiographica Orientalis is a catalogue of Arabic, Coptic, Syriac, Armenian, and Ethiopian hagiographic materials, including ancient literary works on the saints' lives, the translations of their relics, and their miracles, arranged alphabetically by saint. It is usually abbreviated as BHO in scholarly literature. The listings include MSS, incipits, and printed editions
Catalogue of Works in Refutation of Methodism: from its Origin in 1729, to the Present Time (often referred to as Catalogue of Works in Refutation of Methodism) is the title of an antiquarian bibliography or catalogue first published in America in 1846 by the 19th century author Curtis H. Cavender, who compiled the work under the anagrammatic pen name of H. C
EPSG Geodetic Parameter Dataset (also EPSG registry) is a public registry of geodetic datums, spatial reference systems, Earth ellipsoids, coordinate transformations and related units of measurement, originated by a member of the European Petroleum Survey Group (EPSG) in 1985. Each entity is assigned an EPSG code between 1024 and 32767, along with a standard machine-readable well-known text (WKT) representation. The dataset is maintained by the IOGP Geomatics Committee
The Official Marvel Index is a series of comic books released by Marvel Comics which featured synopses of several Marvel series. The books were largely compiled by George Olshevsky (who was for fourteen years the sole owner of a complete collection of Marvel superhero comics dating from Marvel Comics #1, published in 1939), and featured detailed information on each issue in a particular series, including writer and artist credits, characters who appeared in the issue, and a story synopsis. A similar series of indices was published for DC Comics
The Shaker Quarterly was a periodical published by the Sabbathday Lake Shaker Village from 1961 to 1996. It served as a journal and newsletter about the Shakers, and at times also doubled as a mail order catalog advertising products created by the Shaker community at Sabbathday Lake. It was the first regular Shaker publication since the Manifesto ceased publication in 1899
The Short Title Catalogus Flanders (STCV) is an online retrospective bibliography of books that were printed prior to 1801 within the current boundaries of Flanders (including Brussels). The project is executed by the Flanders Heritage Library network. Given the large scope, the bibliography is created step by step
A taxonomic database is a database created to hold information on biological taxa – for example groups of organisms organized by species name or other taxonomic identifier – for efficient data management and information retrieval. Taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas, both for print publication and online; to underpin the operation of web-based species information systems; as a part of biological collection management (for example in museums and herbaria); as well as providing, in some cases, the taxon management component of broader science or biology information systems. They are also a fundamental contribution to the discipline of biodiversity informatics
Trade catalogs, originating primarily in Europe in the late sixteenth and early seventeenth centuries, are printed pages that, prior to the 1800s, advertised products and ideas through words, illustrations, or both. They contained several types of items, ranging from decor and ironwork to furniture and kitchenware. If a trade catalog included illustrations, the items were commonly engraved or hand-drawn, then replicated.
An index (PL: usually indexes, more rarely indices; see below) is a list of words or phrases ('headings') and associated pointers ('locators') to where useful material relating to that heading can be found in a document or collection of documents. Examples are an index in the back matter of a book and an index that serves as a library catalog. An index differs from a word index, or concordance, in focusing on the subject of the text rather than the exact words in a text, and it differs from a table of contents because the index is ordered by subject, regardless of whether a subject appears early or late in the book, while the listed items in a table of contents are placed in the same order as in the book.
In libraries, art galleries, museums and archives, an accession number is a unique identifier assigned to, and achieving initial control of, each acquisition. Assignment of accession numbers typically occurs at the point of accessioning or cataloging. The term is something of a misnomer, because the form accession numbers take is often alpha-numeric
An Archival Resource Key (ARK) is a multi-purpose URL suited to being a persistent identifier for information objects of any type. It is widely used by libraries, data centers, archives, museums, publishers, and government agencies to provide reliable references to scholarly, scientific, and cultural objects. In 2019 it was registered as a Uniform Resource Identifier (URI)
Automatic indexing is the computerized process of scanning large volumes of documents against a controlled vocabulary, taxonomy, thesaurus or ontology and using those controlled terms to quickly and effectively index large electronic document depositories. These keywords or language are applied by training a system on the rules that determine what words to match. There are additional parts to this such as syntax, usage, proximity, and other algorithms based on the system and what is required for indexing
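The matching step can be sketched as follows (the vocabulary here is hypothetical; real systems add stemming, phrase matching, and the syntax and proximity rules the passage mentions):

```python
# Sketch of controlled-vocabulary indexing: scan a document for terms drawn
# from the vocabulary and record the matches.

CONTROLLED_VOCAB = {"taxonomy", "thesaurus", "ontology"}

def index_document(text, vocab=CONTROLLED_VOCAB):
    # naive whole-word matching against the controlled vocabulary
    words = set(text.lower().split())
    return {term for term in vocab if term in words}

terms = index_document("A thesaurus and an ontology were compared")
```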
Concept-based image indexing, also variously called "description-based" or "text-based" image indexing/retrieval, refers to retrieval based on text indexing of images, which may employ keywords, subject headings, captions, or natural-language text (Chen & Rasmussen, 1999). It stands in contrast to content-based image retrieval (CBIR). Indexing is a technique used in CBIR as well.
A digital object identifier (DOI) is a persistent identifier or handle used to uniquely identify various objects, standardized by the International Organization for Standardization (ISO). DOIs are an implementation of the Handle System; they also fit within the URI system (Uniform Resource Identifier). They are widely used to identify academic, professional, and government information, such as journal articles, research reports, data sets, and official publications
Index maps are a type of finding aid that enables users to find a set of maps covering their regions of interest along with the name or number of the relevant map sheet. An index map provides geospatial data on either a sheet of paper or a computer screen. In this way, a map acts as a kind of gazetteer, with the location (such as a call number) represented within a grid overlaying the map's surface
The International Protein Index (IPI) is a defunct protein database launched in 2001 by the European Bioinformatics Institute (EBI) and closed in 2011. Its purpose was to provide the proteomics community with a resource enabling accession numbers from a variety of bioinformatics databases to be mapped to a complete set of proteins for a species.
Key Word In Context (KWIC) is the most common format for concordance lines. The term KWIC was first coined by Hans Peter Luhn. The system was based on a concept called keyword in titles which was first proposed for Manchester libraries in 1864 by Andrea Crestadoro
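The KWIC format can be sketched in a few lines (simplified; real concordancers align the keyword column and handle punctuation and stop words):

```python
# Simplified KWIC: show every occurrence of a keyword between its
# left and right context words.

def kwic(line, keyword, width=2):
    words = line.split()
    for i, w in enumerate(words):
        if w.lower() == keyword.lower():
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            yield f"{left} [{w}] {right}".strip()

lines = list(kwic("the quick brown fox jumps over the lazy dog", "the"))
```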
Life Science Identifiers are a way to name and locate pieces of information on the web. Essentially, an LSID is a unique identifier for some data, and the LSID protocol specifies a standard way to locate the data (as well as a standard way of describing that data). They are a little like DOIs used by many publishers
MakeIndex is a computer program which produces a sorted index from unsorted raw data. MakeIndex can process raw data output by various programs; however, it is generally used with LaTeX and troff. MakeIndex was written around 1986 by Pehong Chen in the C programming language and is free software.
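The core of what such an index processor does — collect raw (term, locator) pairs, sort the terms, and merge locators — can be sketched as follows (illustrative only; MakeIndex additionally handles sub-entries, page ranges, and formatting commands):

```python
# Index-processing sketch: raw (term, page) pairs in, sorted merged index out.

def make_index(raw_entries):
    index = {}
    for term, page in raw_entries:
        # merge duplicate pages per term
        index.setdefault(term, set()).add(page)
    # sort terms alphabetically and page numbers numerically
    return [(term, sorted(pages)) for term, pages in sorted(index.items())]

entries = make_index([("pipeline", 12), ("index", 3), ("pipeline", 7), ("index", 3)])
```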
The Navigational Aids for the History of Science, Technology, and the Environment Project (NAHSTE) was a research archives/manuscripts cataloguing project based at the University of Edinburgh. Following a proposal led by Arnott Wilson in 1999, the project received £261,755 funding from the Research Support Libraries Programme (RSLP) from 2000 until 2002. The project was designed to access a variety of outstanding collections of archives and manuscripts held at the three partner Higher Education Institutions (HEIs); the University of Edinburgh, University of Glasgow and Heriot-Watt University and to make them accessible on the Internet
The Registry of Research Data Repositories (re3data.org) is an open science tool that offers researchers, funding organizations, libraries, and publishers an overview of existing international repositories for research data.
SciCrunch is a collaboratively edited knowledge base about scientific resources. It is a community portal for researchers and a content management system for data and databases. It is intended to provide a common source of data to the research community and the data about Research Resource Identifiers (RRIDs), which can be used in scientific publications
Search engine indexing is the collecting, parsing, and storing of data to facilitate fast and accurate information retrieval. Index design incorporates interdisciplinary concepts from linguistics, cognitive psychology, mathematics, informatics, and computer science. An alternate name for the process, in the context of search engines designed to find web pages on the Internet, is web indexing
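The central structure in most index designs is the inverted index, which maps each term to the documents containing it (a minimal sketch; production indexes add term positions, ranking data, and compression):

```python
# Minimal inverted index: map each term to the set of documents containing it.

def build_inverted_index(docs):
    index = {}
    for doc_id, text in docs.items():
        # deduplicate terms within a document before posting
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(doc_id)
    return index

inv = build_inverted_index({1: "web page indexing", 2: "fast web retrieval"})
```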
The Society of Indexers (SI) is a professional society of indexers based in the UK, with its offices in Sheffield, England, and members worldwide. The society was established in 1957, and its quarterly journal, The Indexer, has been published since 1958.

History
The Society of Indexers was formally constituted at the premises of the National Book League in the UK on 30 March 1957.
Subject indexing is the act of describing or classifying a document by index terms, keywords, or other symbols in order to indicate what different documents are about, to summarize their contents or to increase findability. In other words, it is about identifying and describing the subject of documents. Indexes are constructed, separately, on three distinct levels: terms in a document such as a book; objects in a collection such as a library; and documents (such as books and articles) within a field of knowledge
A table of contents, usually headed simply Contents and abbreviated informally as TOC, is a list, usually found on a page before the start of a written work, of its chapter or section titles or brief descriptions with their commencing page numbers.

History
Pliny the Elder credits Quintus Valerius Soranus (d. 82 BC) as the first author to provide a table of contents.
The tag URI scheme is a uniform resource identifier (URI) scheme for unique identifiers called tags, defined by RFC 4151 in October 2005. The RFC identifies four requirements for tags: Identifiers are likely to be unique across space and time, and come from a practically inexhaustible supply. Identifiers are relatively convenient for humans to mint (create), read, type, remember etc
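A tag takes the form `tag:<authority>,<date>:<specific>`, which makes minting one straightforward (sketch; per RFC 4151 the minter must have controlled the authority name, a domain or email address, on the given date, which is what makes tags unique):

```python
# Minting a tag URI following the RFC 4151 pattern.

def mint_tag(authority, date, specific):
    # authority: domain or email; date: YYYY[-MM[-DD]]; specific: free-form part
    return f"tag:{authority},{date}:{specific}"

uri = mint_tag("example.com", "2005-10", "schemas/page-1")
```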
Web indexing, or internet indexing, comprises methods for indexing the contents of a website or of the Internet as a whole. Individual websites or intranets may use a back-of-the-book index, while search engines usually use keywords and metadata to provide a more useful vocabulary for Internet or onsite searching. With the increase in the number of periodicals that have articles online, web indexing is also becoming important for periodical websites
The International Standard Book Number (ISBN) is a numeric commercial book identifier that is intended to be unique. Publishers purchase or receive ISBNs from an affiliate of the International ISBN Agency. An ISBN is assigned to each separate edition and variation (except reprintings) of a publication
"Bookland" is the informal name for the Unique Country Code (UCC) prefix allocated in the 1980s for European Article Number (EAN) identifiers of published books, regardless of country of origin, so that the EAN namespace can catalogue books by ISBN rather than maintaining a redundant parallel numbering system. In other words, Bookland is a fictitious country that exists solely in EAN for the purposes of non-geographically cataloguing books in the otherwise geographically keyed EAN coding system.

History
Until January 1, 2007, all ISBNs were allocated as 9-digit numbers followed by a modulo 11 checksum character that was either a decimal digit or the letter "X".
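The modulo-11 check described here can be computed directly: the nine digits are weighted 10 down to 2, and the check character (a digit, or "X" for ten) completes the weighted sum to a multiple of 11. A minimal sketch:

```python
# ISBN-10 check character: weights 10..2 over the first nine digits;
# the check character (weight 1) makes the total divisible by 11.

def isbn10_check_char(nine_digits):
    total = sum((10 - i) * int(d) for i, d in enumerate(nine_digits))
    check = (-total) % 11
    return "X" if check == 10 else str(check)
```

For example, the first nine digits of ISBN 0-306-40615-2 yield check character "2".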
ISBNdb.com is a large online database of book information available both via web interface and API. The database includes title, author, ISBN, ISBN13, publisher, publishing date, binding, pages, list price, and more.
The five laws of library science is a theory that S. R. Ranganathan proposed in 1931, detailing the principles of operating a library system
The Millennium Development Goals (MDGs) were eight international development goals for the year 2015 that had been established following the Millennium Summit of the United Nations in 2000, following the adoption of the United Nations Millennium Declaration. These were based on the OECD DAC International Development Goals agreed by Development Ministers in the "Shaping the 21st Century Strategy". The Sustainable Development Goals (SDGs) succeeded the MDGs in 2016
Library 2.0 is a proposed concept for library services that facilitate user contributions and other features of Web 2.0, which includes online services such as OPAC systems.
A bibliographic database is a database of bibliographic records. This is an organised online collection of references to published written works like journal and newspaper articles, conference proceedings, reports, government and legal publications, patents and books. In contrast to library catalogue entries, a majority of the records in bibliographic databases describe articles and conference papers rather than complete monographs, and they generally contain very rich subject descriptions in the form of keywords, subject classification terms, or abstracts
Capital Collections is Edinburgh Libraries' online image library. The project was initiated to provide greater access to some of the 100,000 images within its collections. The website was launched in February 2008 with an accompanying exhibition, entitled “Edinburgh Past and Present”, featuring images chosen by personalities connected with the city
ChemXSeer project, funded by the National Science Foundation, is a public integrated digital library, database, and search engine for scientific papers in chemistry. It is being developed by a multidisciplinary team of researchers at the Pennsylvania State University. ChemXSeer was conceived by Dr
The Citation Style Language (CSL) is an open XML-based language to describe the formatting of citations and bibliographies. Reference management programs using CSL include Zotero, Mendeley and Papers. The Pandoc lightweight document conversion system also supports citations in CSL, YAML, and JSON formats and can render these using any of the CSL styles listed in the Zotero Style Repository
CiteSeerX (formerly called CiteSeer) is a public search engine and digital library for scientific and academic papers, primarily in the fields of computer and information science. CiteSeer's goal is to improve the dissemination and access of academic and scientific literature. As a non-profit service that can be freely used by anyone, it has been considered as part of the open access movement that is attempting to change academic and scientific publishing to allow greater access to scientific literature
ContextObjects in Spans (COinS) is a method to embed bibliographic metadata in the HTML code of web pages. This allows bibliographic software to publish machine-readable bibliographic items and client reference management software to retrieve bibliographic metadata. The metadata can also be sent to an OpenURL resolver
Connotea was a free online reference management service for scientists, researchers, and clinicians, created in December 2004 by Nature Publishing Group and discontinued in March 2013. It was one of a breed of social bookmarking tools, similar to CiteULike and del.icio.us.
DOCAM (Documentation and Conservation of the Media Arts Heritage) was an international alliance of researchers from various institutions and disciplines dedicated to the documentation and conservation of media arts. The project was the result of a five-year mandate lasting from 2005 until 2010. Outcomes of the project include a cataloguing guide incorporating case studies, a conservation guide explaining preservation issues specific to time-based media, a technological timeline, a documentation model for digital curation and preservation of time-based media, and a glossary and thesaurus for media arts
ERAMS (e-resource access and management services) are a way of thinking about library management to help libraries optimize the access, usage, data, and workflows of electronic library collections in the physical and digital library. Background Electronic resources, particularly electronic journals and ebooks, can be viewed as an integral part of library collections. Recent studies have shown that not only are libraries acquiring significant amounts of digital content, but also that this content is both replacing and eclipsing traditional media
EZproxy is a web proxy server used by libraries to give access from outside the library's computer network to restricted-access websites that authenticate users by IP address. This allows library patrons at home or elsewhere to log in through their library's EZproxy server and gain access to resources to which their library subscribes, such as bibliographic databases. The software was originally written by Chris Zagar in 1999 who founded Useful Utilities LLC to support it
https://huggingface.co/datasets/fmars/wiki_stem
finc (find in catalog) is an open consortium comprising various university libraries that jointly operate and develop bibliographic search engines (cf. discovery systems) for their users. The consortium focuses on free and open source software as well as on independence from less transparent solutions, such as commercial bibliographic indexes
https://huggingface.co/datasets/fmars/wiki_stem
Functional Requirements for Authority Data (FRAD), formerly known as Functional Requirements for Authority Records (FRAR), is a conceptual entity-relationship model developed by the International Federation of Library Associations and Institutions (IFLA) for relating the data recorded in library authority records to the needs of the users of those records, and for facilitating the sharing of that data. The draft was presented in 2004 at the 70th IFLA General Conference and Council in Buenos Aires by Glenn Patton. It is an extension and expansion to the FRBR model, adding numerous entities and attributes
https://huggingface.co/datasets/fmars/wiki_stem
The IFLA Library Reference Model (IFLA LRM) is a conceptual entity–relationship model developed by the International Federation of Library Associations and Institutions (IFLA) that expresses the "logical structure of bibliographic information". It unifies the models of Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD) and Functional Requirements for Subject Authority Data (FRSAD). The IFLA LRM is intended to be used as the basis of cataloguing rules and implementing bibliographic information systems
https://huggingface.co/datasets/fmars/wiki_stem
The Internet Speculative Fiction Database (ISFDB) is a database of bibliographic information on genres considered speculative fiction, including science fiction and related genres such as fantasy, alternate history, and horror fiction. The ISFDB is a volunteer effort, with the database being open for moderated editing and user contributions, and a wiki that allows the database editors to coordinate with each other. As of April 2022, the site had catalogued 2,002,324 story titles from 232,816 authors
https://huggingface.co/datasets/fmars/wiki_stem
CADP (Construction and Analysis of Distributed Processes) is a toolbox for the design of communication protocols and distributed systems. CADP is developed by the CONVECS team (formerly by the VASY team) at INRIA Rhône-Alpes and connected to various complementary tools. CADP is maintained, regularly improved, and used in many industrial projects
https://huggingface.co/datasets/fmars/wiki_stem
In concurrent computing, deadlock is any situation in which no member of some group of entities can proceed because each waits for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock. Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, because in these contexts systems often use software or hardware locks to arbitrate shared resources and implement process synchronization. In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process
https://huggingface.co/datasets/fmars/wiki_stem
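The circular-wait scenario described above can be avoided by imposing a global order on lock acquisition. A minimal sketch in Python (the names and two-lock setup are illustrative, not from the source):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    # Both threads acquire the locks in the same global order (a before b),
    # which rules out the circular wait at the heart of deadlock. If one
    # thread took b first, each could end up holding the lock the other needs.
    with lock_a:
        with lock_b:
            results.append(name)

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # both threads complete: ['t1', 't2']
```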
In computer science, the dining philosophers problem is an example problem often used in concurrent algorithm design to illustrate synchronization issues and techniques for resolving them. It was originally formulated in 1965 by Edsger Dijkstra as a student exam exercise, presented in terms of computers competing for access to tape drive peripherals. Soon after, Tony Hoare gave the problem its present form
https://huggingface.co/datasets/fmars/wiki_stem
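One standard resolution to the dining philosophers problem is Dijkstra's resource-hierarchy rule: number the forks and always pick up the lower-numbered one first. A small runnable sketch (meal counts and thread structure are illustrative assumptions):

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    # Resource-hierarchy fix: always acquire the lower-numbered fork first,
    # so no cycle of philosophers each holding one fork can form.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(10):
        with forks[first]:
            with forks[second]:
                meals[i] += 1

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher eats 10 times without deadlock
```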
In computer science E-LOTOS (Enhanced LOTOS) is a formal specification language designed between 1993 and 1999, and standardized by ISO in 2001. E-LOTOS was initially intended to be a revision of the LOTOS language standardized by ISO 8807 in 1989, but the revision turned out to be profound, leading to a new specification language. The starting point for the revision of LOTOS was the PhD thesis of Ed Brinksma, who had been the Rapporteur at ISO of the LOTOS standard
https://huggingface.co/datasets/fmars/wiki_stem
In computer science, Hennessy–Milner logic (HML) is a dynamic logic used to specify properties of a labeled transition system (LTS), a structure similar to an automaton. It was introduced in 1980 by Matthew Hennessy and Robin Milner in their paper "On observing nondeterminism and concurrency" (ICALP). Another variant of the HML involves the use of recursion to extend the expressibility of the logic, and is commonly referred to as 'Hennessy-Milner Logic with recursion'
https://huggingface.co/datasets/fmars/wiki_stem
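A minimal sketch of HML over an LTS, using standard notation (not quoted from the source): formulas are built from truth, conjunction, negation, and the modality for an action a, and satisfaction at a state s is defined by

```latex
\varphi ::= \mathit{tt} \;\mid\; \varphi_1 \wedge \varphi_2 \;\mid\; \neg\varphi \;\mid\; \langle a \rangle \varphi

s \models \langle a \rangle \varphi \;\iff\; \exists s'.\; s \xrightarrow{a} s' \text{ and } s' \models \varphi
\qquad
s \models [a]\varphi \;\iff\; \forall s'.\; s \xrightarrow{a} s' \text{ implies } s' \models \varphi
```

Here the box modality is definable as $[a]\varphi \equiv \neg\langle a \rangle \neg\varphi$.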
In mathematics and computer science, a history monoid is a way of representing the histories of concurrently running computer processes as a collection of strings, each string representing the individual history of a process. The history monoid provides a set of synchronization primitives (such as locks, mutexes or thread joins) for providing rendezvous points between a set of independently executing processes or threads. History monoids occur in the theory of concurrent computation, and provide a low-level mathematical foundation for process calculi, such as CSP the language of communicating sequential processes, or CCS, the calculus of communicating systems
https://huggingface.co/datasets/fmars/wiki_stem
In computer science Language Of Temporal Ordering Specification (LOTOS) is a formal specification language based on temporal ordering of events. LOTOS is used for communications protocol specification in International Organization for Standardization (ISO) Open Systems Interconnection model (OSI) standards. LOTOS is an algebraic language that consists of two parts: a part for the description of data and operations, based on abstract data types, and a part for the description of concurrent processes, based on process calculus
https://huggingface.co/datasets/fmars/wiki_stem
In computing, a memory model describes the interactions of threads through memory and their shared use of the data. History and significance A memory model allows a compiler to perform many important optimizations. Compiler optimizations like loop fusion move statements in the program, which can influence the order of read and write operations of potentially shared variables
https://huggingface.co/datasets/fmars/wiki_stem
Memory ordering describes the order of accesses to computer memory by a CPU. The term can refer either to the memory ordering generated by the compiler during compile time, or to the memory ordering generated by a CPU during runtime. In modern microprocessors, memory ordering characterizes the CPU's ability to reorder memory operations – it is a type of out-of-order execution
https://huggingface.co/datasets/fmars/wiki_stem
Nets within Nets is a modelling method belonging to the family of Petri nets. This method is distinguished from other sorts of Petri nets by the possibility to provide their tokens with a proper structure, which is based on Petri net modelling again. Hence, a net can contain further net items, being able to move around and fire themselves
https://huggingface.co/datasets/fmars/wiki_stem
A Petri net, also known as a place/transition (PT) net, is one of several mathematical modeling languages for the description of distributed systems. It is a class of discrete event dynamic system. A Petri net is a directed bipartite graph that has two types of elements: places and transitions
https://huggingface.co/datasets/fmars/wiki_stem
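The token game of a place/transition net can be sketched in a few lines: a transition is enabled when every input place holds at least one token, and firing consumes one token per input place and produces one per output place. The two-place net below is an illustrative assumption, not from the source:

```python
# Minimal place/transition net: markings are dicts from place name to
# token count; a transition is a pair (input places, output places).
def enabled(marking, transition):
    inputs, _ = transition
    return all(marking[p] >= 1 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1          # consume one token from each input place
    for p in outputs:
        m[p] += 1          # produce one token in each output place
    return m

# A two-place net modelling a token moving from p1 to p2.
marking = {"p1": 1, "p2": 0}
t = (["p1"], ["p2"])
marking = fire(marking, t)
print(marking)  # {'p1': 0, 'p2': 1}; t is no longer enabled
```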
In computing, the producer-consumer problem (also known as the bounded-buffer problem) is a family of problems described by Edsger W. Dijkstra since 1965. Dijkstra found the solution for the producer-consumer problem as he worked as a consultant for the Electrologica X1 and X8 computers: "The first use of producer-consumer was partly software, partly hardware: The component taking care of the information transport between store and peripheral was called 'a channel'
https://huggingface.co/datasets/fmars/wiki_stem
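The bounded-buffer problem maps directly onto a blocking queue: the producer blocks when the buffer is full, the consumer when it is empty. A sketch using Python's `queue.Queue` (the buffer size, item count, and `None` sentinel are illustrative choices):

```python
import queue
import threading

buffer = queue.Queue(maxsize=4)   # bounded buffer of capacity 4
consumed = []

def producer():
    for i in range(20):
        buffer.put(i)             # blocks while the buffer is full
    buffer.put(None)              # sentinel: no more items

def consumer():
    while True:
        item = buffer.get()       # blocks while the buffer is empty
        if item is None:
            break
        consumed.append(item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(consumed == list(range(20)))  # True: all items arrive, in order
```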
A race condition or race hazard is the condition of an electronics, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events. It becomes a bug when one or more of the possible behaviors is undesirable. The term race condition was already in use by 1954, for example in David A
https://huggingface.co/datasets/fmars/wiki_stem
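The classic software example is the lost-update race on a shared counter: `x += 1` is a read, an add, and a write back, and two threads can interleave those steps. A sketch of the lock-protected fix (thread and iteration counts are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, two threads can both read the same old value,
        # each add 1, and write back — losing one of the two updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 400000 — deterministic only because the lock serializes updates
```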
A racetrack problem is a specific instance of a type of race condition. A racetrack problem is a flaw in a system or process whereby the output and/or result of the process is unexpectedly and critically dependent on the sequence or timing of other events that run in a circular pattern. This problem is semantically different from a race condition because of the circular nature of the problem
https://huggingface.co/datasets/fmars/wiki_stem
In computer science, the readers–writers problems are examples of a common computing problem in concurrency. There are at least three variations of the problems, which deal with situations in which many concurrent threads of execution try to access the same shared resource at one time. Some threads may read and some may write, with the constraint that no thread may access the shared resource for either reading or writing while another thread is in the act of writing to it
https://huggingface.co/datasets/fmars/wiki_stem
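The first (readers-preference) solution can be sketched with two locks: a mutex protecting a reader count, and a write lock held by any writer or by the reader group as a whole. This is a sketch of that textbook scheme, not of any particular library's API:

```python
import threading

class RWLock:
    """First readers-writers solution: readers share, writers exclude."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()   # protects the reader count
        self._write = threading.Lock()   # held by a writer, or by the reader group

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:       # first reader blocks out writers
                self._write.acquire()

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:       # last reader lets writers in
                self._write.release()

    def acquire_write(self):
        self._write.acquire()

    def release_write(self):
        self._write.release()

rw = RWLock()
rw.acquire_read(); rw.acquire_read()     # two readers share the resource
rw.release_read(); rw.release_read()
rw.acquire_write()                       # a writer now gets exclusive access
rw.release_write()
print("ok")
```

Note that this scheme can starve writers when readers keep arriving, which is exactly why the second and third variants of the problem exist.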
In computing, a computer program or subroutine is called reentrant if multiple invocations can safely run concurrently on multiple processors, or on a single-processor system, where a reentrant procedure can be interrupted in the middle of its execution and then safely be called again ("re-entered") before its previous invocations complete execution. The interruption could be caused by an internal action such as a jump or call, or by an external action such as an interrupt or signal, unlike recursion, where new invocations can only be caused by internal call. This definition originates from multiprogramming environments, where multiple processes may be active concurrently and where the flow of control could be interrupted by an interrupt and transferred to an interrupt service routine (ISR) or "handler" subroutine
https://huggingface.co/datasets/fmars/wiki_stem
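The practical rule of thumb is that a reentrant routine keeps all of its state in arguments and locals, never in shared mutable storage. A contrived illustration (both functions and the shared buffer are assumptions for the example, not from the source):

```python
# Non-reentrant: relies on a shared module-level buffer, so an invocation
# that interrupts this one mid-swap can clobber the buffer it depends on.
_buffer = []

def swap_non_reentrant(a, b):
    _buffer.clear()
    _buffer.append(a)   # if a second invocation runs between this line and
    a = b               # the read below, it overwrites _buffer and corrupts
    b = _buffer[0]      # the result of the interrupted call
    return a, b

# Reentrant: all state lives in arguments and locals, so any number of
# interleaved or nested invocations are independent of one another.
def swap_reentrant(a, b):
    return b, a

print(swap_reentrant(1, 2))  # (2, 1)
```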
Reo is a domain-specific language for programming and analyzing coordination protocols that compose individual processes into full systems, broadly construed. Examples of classes of systems that can be composed with Reo include component-based systems, service-oriented systems, multithreading systems, biological systems, and cryptographic protocols. Reo has a graphical syntax in which every Reo program, called a connector or circuit, is a labeled directed hypergraph
https://huggingface.co/datasets/fmars/wiki_stem
In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system. Semaphores are a type of synchronization primitive. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions
https://huggingface.co/datasets/fmars/wiki_stem
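A counting semaphore initialized to n admits at most n threads into a critical section at once. A sketch using Python's `threading.Semaphore` (the pool size and peak-tracking bookkeeping are illustrative):

```python
import threading

pool = threading.Semaphore(3)   # at most 3 threads inside at any moment
active = 0
peak = 0
guard = threading.Lock()        # protects the bookkeeping counters

def worker():
    global active, peak
    with pool:                  # decrements the count; blocks at zero
        with guard:
            active += 1
            peak = max(peak, active)
        with guard:
            active -= 1
                                # leaving the with-block increments the count

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= 3)  # True: the semaphore caps concurrency at 3
```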
In type theory, session types are used to ensure correctness in concurrent programs. They guarantee that messages sent and received between concurrent programs are in the expected order and of the expected type. Session type systems have been adapted for both channel and actor systems
https://huggingface.co/datasets/fmars/wiki_stem
In computer science, the sleeping barber problem is a classic inter-process communication and synchronization problem that illustrates the complexities that arise when there are multiple operating system processes. The problem was originally proposed in 1965 by computer science pioneer Edsger Dijkstra, who used it to make the point that general semaphores are often superfluous. Problem statement Imagine a hypothetical barbershop with one barber, one barber chair, and a waiting room with n chairs (n may be 0) for waiting customers
https://huggingface.co/datasets/fmars/wiki_stem
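The problem statement above has a standard semaphore solution: the barber sleeps on a semaphore counting waiting customers, and each customer waits on a second semaphore until the haircut is done. The sketch below simplifies by assuming an unbounded waiting room and a fixed customer count:

```python
import threading

waiting = threading.Semaphore(0)   # count of waiting customers; barber sleeps on it
done = threading.Semaphore(0)      # released once per finished haircut
served = []
N_CUSTOMERS = 5

def barber():
    for _ in range(N_CUSTOMERS):
        waiting.acquire()          # sleep until a customer arrives
        served.append("cut")
        done.release()             # haircut finished; a customer may leave

def customer():
    waiting.release()              # wake the barber (or join the queue)
    done.acquire()                 # wait until a haircut completes

b = threading.Thread(target=barber)
cs = [threading.Thread(target=customer) for _ in range(N_CUSTOMERS)]
b.start()
for c in cs: c.start()
for c in cs: c.join()
b.join()
print(len(served))  # 5: every customer is served, the barber never spins
```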
Speculative execution is an optimization technique where a computer system performs some task that may not be needed. Work is done before it is known whether it is actually needed, so as to prevent a delay that would have to be incurred by doing the work after it is known that it is needed. If it turns out the work was not needed after all, most changes made by the work are reverted and the results are ignored
https://huggingface.co/datasets/fmars/wiki_stem
In computer science, resource starvation is a problem encountered in concurrent computing where a process is perpetually denied necessary resources to process its work. Starvation may be caused by errors in a scheduling or mutual exclusion algorithm, but can also be caused by resource leaks, and can be intentionally caused via a denial-of-service attack such as a fork bomb. When starvation is impossible in a concurrent algorithm, the algorithm is called starvation-free or lockout-free, or said to have finite bypass
https://huggingface.co/datasets/fmars/wiki_stem