Jewels of Stringology: Text Algorithms is a book on algorithms for pattern matching in strings and related problems. It was written by Maxime Crochemore and Wojciech Rytter, and published by World Scientific in 2003. The book opens with two basic string-searching algorithms for finding exactly matching substrings: the Knuth–Morris–Pratt algorithm and the Boyer–Moore string-search algorithm
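The first of these two algorithms can be sketched compactly. Below is a minimal Python version of Knuth–Morris–Pratt, an illustrative sketch rather than the book's own presentation:

```python
def kmp_search(text: str, pattern: str) -> int:
    """Knuth-Morris-Pratt: index of the first occurrence of pattern
    in text, or -1 if absent."""
    if not pattern:
        return 0
    # Failure function: fail[i] = length of the longest proper border
    # of pattern[:i+1] (a prefix that is also a suffix).
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text; on a mismatch, fall back via the failure function
    # instead of re-examining text characters.
    k = 0
    for j, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            return j - k + 1
    return -1
```

Because the text pointer never moves backwards, the whole search runs in time linear in the combined length of text and pattern.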
https://huggingface.co/datasets/fmars/wiki_stem
"A Mathematical Theory of Communication" is an article by mathematician Claude E. Shannon published in Bell System Technical Journal in 1948. It was renamed The Mathematical Theory of Communication in the 1949 book of the same name, a small but significant title change made after the generality of the work became apparent
The Open-Source Lab: How to Build Your Own Hardware and Reduce Research Costs by Joshua M. Pearce was published in 2014 by Elsevier. The academic book is a guide detailing the development of free and open-source hardware, primarily for scientists and university faculty
Paradigms of AI Programming: Case Studies in Common Lisp (ISBN 1-55860-191-0) is a well-known programming book by Peter Norvig about artificial intelligence programming using Common Lisp. Lisp has served since 1958 as a primary language for artificial intelligence research; this text was published in 1992, as the Common Lisp standard was becoming widely adopted
Perceptrons: An Introduction to Computational Geometry is a book written by Marvin Minsky and Seymour Papert and published in 1969. An edition with handwritten corrections and additions was released in the early 1970s. An expanded edition was published in 1987, containing a chapter countering the criticisms made of the book in the 1980s
Prentice Hall International Series in Computer Science was a series of books on computer science published by Prentice Hall. The series' founding editor was Tony Hoare. Richard Bird subsequently took over editing the series
Principles of Compiler Design, by Alfred Aho and Jeffrey Ullman, is a classic textbook on compilers for computer programming languages. Both authors won the 2020 Turing Award for their work on compilers. It is often called the "green dragon book": its cover depicts a knight and a dragon in battle; the dragon is green and labeled "Complexity of Compiler Design", while the knight wields a lance labeled "LALR parser generator", carries a shield labeled "Syntax Directed Translation", and rides a horse labeled "Data Flow Analysis"
Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos is a 2006 popular science book by Seth Lloyd, professor of mechanical engineering at the Massachusetts Institute of Technology. The book proposes that the Universe is a quantum computer (supercomputer), and advances in the understanding of physics may come from viewing entropy as a phenomenon of information, rather than simply thermodynamics. Lloyd also postulates that the Universe can be fully simulated using a quantum computer; however, in the absence of a theory of quantum gravity, such a simulation is not yet possible
Structure and Interpretation of Computer Programs (SICP) is a computer science textbook by Massachusetts Institute of Technology professors Harold Abelson and Gerald Jay Sussman with Julie Sussman. It is known as the "Wizard Book" in hacker culture. It teaches fundamental principles of computer programming, including recursion, abstraction, modularity, and programming language design and implementation
Structure and Interpretation of Computer Programs, JavaScript Edition (SICP JS) is an adaptation of the computer science textbook Structure and Interpretation of Computer Programs (SICP). It teaches fundamental principles of computer programming, including recursion, abstraction, modularity, and programming language design and implementation. While the original version of SICP uses the programming language Scheme, this edition uses the programming language JavaScript
Unifying Theories of Programming (UTP) in computer science deals with program semantics. It shows how denotational semantics, operational semantics and algebraic semantics can be combined in a unified framework for the formal specification, design and implementation of programs and computer systems. The book of this title, by C. A. R. Hoare and He Jifeng, was first published by Prentice Hall in 1998
The Visualization Handbook is a textbook by Charles D. Hansen and Christopher R. Johnson that serves as a survey of the field of scientific visualization by presenting the basic concepts and algorithms in addition to a current review of visualization research topics and tools
Formal Aspects of Computing (FAOC) is a peer-reviewed scientific journal published by Springer Science+Business Media, covering the area of formal methods and associated topics in computer science. The editor-in-chief is Jim Woodcock. According to the Journal Citation Reports, the journal has a 2010 impact factor of 1
The Journal of Automated Reasoning was established in 1983 by Larry Wos, who was its editor in chief until 1992. It covers research and advances in automated reasoning, mechanical verification of theorems, and other deductions in classical and non-classical logic. The journal is published by Springer Science+Business Media
Logical Methods in Computer Science (LMCS) is a peer-reviewed open access scientific journal covering theoretical computer science and applied logic. It opened to submissions on September 1, 2004. The editor-in-chief is Stefan Milius (Friedrich-Alexander Universität Erlangen-Nürnberg)
ACM Transactions on Applied Perception (ACM TAP) is a quarterly peer-reviewed scientific journal covering interdisciplinary computer science topics relevant to psychology and perception. It was established in 2004 by Erik Reinhard and Heinrich Buelthoff and is published by the Association for Computing Machinery. In 2016, the ACM Publications Board agreed to offer journal publication to the strongest submissions to the ACM Symposium on Applied Perception
The ACM Transactions on Database Systems (ACM TODS) is one of the journals produced by the Association for Computing Machinery. TODS publishes one volume yearly. Each volume has four issues, which appear in March, June, September and December
ACM Transactions on Graphics (TOG) is a bimonthly peer-reviewed scientific journal that covers the field of computer graphics. The editor-in-chief is Carol O'Sullivan (Trinity College Dublin). According to the Journal Citation Reports, the journal had a 2020 impact factor of 5
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) is a quarterly scientific journal that aims to disseminate the latest findings of note in the field of multimedia computing. It is published by the Association for Computing Machinery. In May 2014, the acronym was changed from TOMCCAP to TOMM
The ACM Transactions on Programming Languages and Systems (TOPLAS) is a bimonthly, open access, peer-reviewed scientific journal on the topic of programming languages published by the Association for Computing Machinery. Published since 1979, its scope includes programming language design, implementation and semantics, compilers and interpreters, run-time systems, storage allocation and garbage collection, and formal specification, testing, and verification of software. It is indexed in Scopus and SCImago
Adaptive Behavior is a bimonthly peer-reviewed scientific journal that covers the field of adaptive behavior in living organisms and autonomous artificial systems. It was established in 1992 and is the official journal of the International Society of Adaptive Behavior. It is published by SAGE Publications
AI & Society is a quarterly peer-reviewed scientific journal published by Springer. The editor-in-chief is Karamjit S. Gill, Brighton University
Algorithmica is a monthly peer-reviewed scientific journal focusing on research and the application of computer science algorithms. The journal was established in 1986 and is published by Springer Science+Business Media. The editor in chief is Mohammad Hajiaghayi
Algorithms is a monthly peer-reviewed open-access scientific journal of mathematics, covering design, analysis, and experiments on algorithms. The journal is published by MDPI and was established in 2008. The founding editor-in-chief was Kazuo Iwama (Kyoto University)
Applied and Computational Harmonic Analysis is a bimonthly peer-reviewed scientific journal published by Elsevier. The journal covers studies on the applied and computational aspects of harmonic analysis. Its editors-in-chief are Ronald Coifman (Yale University) and David Donoho (Stanford University)
Archives of Computational Methods in Engineering is a scholarly journal that provides a forum for spreading the results of research and advanced industrial practice in computational engineering, with particular emphasis on mechanics and its related areas. It publishes reviews presenting developments in computational engineering. Areas of research published in the journal include modeling; solution techniques and applications of computational methods in areas such as liquid and gas dynamics, solid and structural mechanics, and biomechanics; variational formulations and numerical algorithms related to implementation of the finite and boundary element methods; and finite difference, finite volume and other computational methods
Artificial Intelligence is a scientific journal on artificial intelligence research. It was established in 1970 and is published by Elsevier. The journal is abstracted and indexed in Scopus and Science Citation Index
Artificial Life is a peer-reviewed scientific journal that covers the study of man-made systems that exhibit the behavioral characteristics of natural living systems. Its articles cover system synthesis in software, hardware, and wetware. Artificial Life was established in 1993 and is the official journal of the International Society of Artificial Life
Autonomous Agents and Multi-Agent Systems is a peer-reviewed scientific journal covering the study of autonomous agents and multi-agent systems. It is published bimonthly by Springer Science+Business Media and is the official journal of the International Foundation for Autonomous Agents and Multiagent Systems. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1
BMC Medical Informatics and Decision Making is an open-access scientific journal covering all areas of medical informatics, biostatistics, and computer science. According to the Journal Citation Reports, the journal had a 2020 impact factor of 2.796
Cognitive Systems Research is a scientific journal that covers all topics in the study of cognitive science, both natural and artificial cognitive systems. Its founding editors-in-chief were Ron Sun, Vasant Honavar, and Gregg Oden (from 1999 to 2014). It is published by Elsevier
Combinatorica is an international journal of mathematics, publishing papers in the fields of combinatorics and computer science. It started in 1981, with László Babai and László Lovász as editors-in-chief and Paul Erdős as honorary editor-in-chief. The current editors-in-chief are Imre Bárány and József Solymosi
Computational and Mathematical Organization Theory is a quarterly double-blind peer-reviewed scientific journal covering the field of organization theory. The journal is published by Springer Science+Business Media. It was established in 1995 and initially published by Kluwer
Computational Mechanics is a monthly scientific journal focused on computational mechanics. It is published by Springer and was founded in 1986. The journal reports original research in computational mechanics
The Computer Journal is a peer-reviewed scientific journal covering computer science and information systems. Established in 1958, it is one of the oldest computer science research journals. It is published by Oxford University Press on behalf of BCS, The Chartered Institute for IT
The Computer Law & Security Review is a journal accessible to a wide range of professional legal and IT practitioners, businesses, academics, researchers, libraries and organisations in both the public and private sectors. It regularly covers: a CLSR Briefing with special emphasis on UK/US developments; a European Union update; national news from 10 European jurisdictions; a Pacific Rim news column; and refereed practitioner and academic papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection and freedom of information, among many other topics. The journal's correspondent panel includes more than 40 specialists in IT law and security
Computers & Chemical Engineering is an international, peer-reviewed scientific journal in the field of process systems engineering. The journal accepts general papers on process systems engineering, as well as emerging new areas and topics for new developments in the application of computing and systems technology to chemical engineering problems. The journal was founded in 1977 and is published 12 times a year
Computers & Graphics is a peer-reviewed scientific journal that covers computer graphics and related subjects such as data visualization, human-computer interaction, virtual reality, and augmented reality. It was established in 1975 and originally published by Pergamon Press. It is now published by Elsevier, which acquired Pergamon Press in 1991
Data Mining and Knowledge Discovery is a bimonthly peer-reviewed scientific journal focusing on data mining, published by Springer Science+Business Media. It was started in 1996 and launched in 1997 by Kluwer Academic Publishers (later part of Springer), with Usama Fayyad as founding editor-in-chief. The first editorial summarizes why the journal was started
Data Technologies and Applications (DTA) is a peer-reviewed academic, interdisciplinary journal concerning any topic related to web science, data analytics and digital information management. It is published quarterly by Emerald Group Publishing Limited. The journal was previously called Program: Electronic Library and Information Systems but in 2018 the name changed to Data Technologies and Applications
Discrete Mathematics & Theoretical Computer Science is a peer-reviewed open access scientific journal covering discrete mathematics and theoretical computer science. It was established in 1997 by Daniel Krob (Paris Diderot University). Since 2001, the editor-in-chief has been Jens Gustedt (Institut National de Recherche en Informatique et en Automatique)
Electronic Proceedings in Theoretical Computer Science is an international, peer-reviewed, open access series published by the Open Publishing Association, reporting research results in theoretical computer science, especially in the form of proceedings and post-proceedings of conferences and workshops. As of December 2009, the editor-in-chief of the series is Rob van Glabbeek. The series is indexed by the Digital Bibliography & Library Project (DBLP)
Empirical Software Engineering is a peer-reviewed scientific journal published by Springer Nature. It was established in 1996 and covers the area of empirical software engineering. The editors-in-chief are Robert Feldt and Thomas Zimmermann
Ethics and Information Technology is a quarterly peer-reviewed scientific journal covering the intersection between moral philosophy and the field of information and communications technology. It was established in 1999 by Jeroen van den Hoven (Delft University of Technology), who has been its editor-in-chief ever since. It is published by Springer Science+Business Media
Evolutionary Computation is a peer-reviewed academic journal published four times a year by the MIT Press. The journal serves as an international forum for researchers exchanging information in the field which deals with computational systems drawing their inspiration from nature. According to the Journal Citation Reports, the journal has a 2016 impact factor of 3
Graphical Models is an academic journal in computer graphics and geometry processing published by Elsevier. As of 2021, its editor-in-chief is Bedrich Benes of Purdue University. The journal has gone through multiple names over its history
Higher-Order and Symbolic Computation (formerly LISP and Symbolic Computation) was a computer science journal published by Springer Science+Business Media. It focuses on programming concepts and abstractions and programming language theory. The final issue appeared in 2013
The ICGA Journal is a quarterly academic journal published by the International Computer Games Association. It was renamed in 2000; its previous name was the ICCA Journal, after the International Computer Chess Association, which was founded in 1977
Proceedings of the Institution of Electrical Engineers was a series of journals which published the proceedings of the Institution of Electrical Engineers. It was originally established as the Journal of the Society of Telegraph Engineers in 1872, and was known under several titles over the years, such as Journal of the Institution of Electrical Engineers, Proceedings of the IEE and IEE Proceedings. The journal underwent a series of name changes: Journal of the Society of Telegraph Engineers (1872–1880); Journal of the Society of Telegraph Engineers and of Electricians (1881–1882); Journal of the Society of Telegraph-Engineers and Electricians (1883–1888); and finally, from 1889, Journal of the Institution of Electrical Engineers (1889–1940), a name it retained for over 50 years
IEEE/ACM Transactions on Networking is a bimonthly peer-reviewed scientific journal covering communication networks. It is published by the IEEE Communications Society, the IEEE Computer Society, and the ACM Special Interest Group on Data Communications. The current editor-in-chief is Sanjay Shakkottai (University of Texas)
Multimedia search enables information search using queries in multiple data types, including text and other multimedia formats. Multimedia search can be implemented through multimodal search interfaces, i.e. interfaces that accept several types of query input
Multimodal search is a type of search that uses different methods to obtain relevant results. It can use any kind of search: search by keyword, search by concept, search by example, etc. A multimodal search engine is designed to imitate the flexibility and agility with which the human mind creates and processes ideas and discards irrelevant ones
Noisy text analytics is a process of information extraction whose goal is to automatically extract structured or semi-structured information from noisy unstructured text data. While text analytics is a mature and growing field of great value because of the huge amounts of data being produced, processing of noisy text is gaining in importance because many common applications produce noisy text data. Noisy unstructured text data is found in informal settings such as online chat, text messages, e-mails, message boards, newsgroups, blogs, wikis and web pages
Online search is the process of interactively searching for and retrieving requested information via a computer from databases that are online. Interactive searches became possible in the 1980s with the advent of faster databases and smart terminals. In contrast, computerized batch searching was prevalent in the 1960s and 1970s
Question answering (QA) is a computer science discipline within the fields of information retrieval and natural language processing (NLP) concerned with building systems that automatically answer questions posed by humans in a natural language. A question-answering implementation, usually a computer program, may construct its answers by querying a structured database of knowledge or information, usually a knowledge base. More commonly, question-answering systems pull answers from an unstructured collection of natural language documents
Semantic search denotes search with meaning, as distinguished from lexical search where the search engine looks for literal matches of the query words or variants of them, without understanding the overall meaning of the query. Semantic search seeks to improve search accuracy by understanding the searcher's intent and the contextual meaning of terms as they appear in the searchable dataspace, whether on the Web or within a closed system, to generate more relevant results. Content that ranks well in semantic search is well-written in a natural voice, focuses on the user's intent, and considers related topics that the user may look for in the future
Reverse image search is a content-based image retrieval (CBIR) query technique in which the user provides the CBIR system with a sample image that it then bases its search upon; in information retrieval terms, the sample image itself serves as the query. In particular, reverse image search is characterized by a lack of search terms, which removes the need for a user to guess at keywords or terms that may or may not return a correct result
XML retrieval, or XML information retrieval, is the content-based retrieval of documents structured with XML (eXtensible Markup Language). As such it is used for computing the relevance of XML documents. Most XML retrieval approaches build on techniques from the information retrieval (IR) area
The Cranfield experiments were a series of experimental studies in information retrieval conducted by Cyril W. Cleverdon at the College of Aeronautics, today known as Cranfield University, in the 1960s to evaluate the efficiency of indexing systems. The experiments were broken into two main phases, neither of which was computerized
Discounted cumulative gain (DCG) is a measure of ranking quality for a given query; Normalized DCG (nDCG or NDCG) is a measure of ranking quality independent of the particular query. In information retrieval, they are often used to measure effectiveness of web search engine algorithms or related applications. Using a graded relevance scale of documents in a search-engine result set, DCG measures the usefulness, or gain, of a document based on its position in the result list
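In the standard formulation, the gain of the document at rank i is discounted by log2(i + 1), and nDCG divides by the DCG of the ideal (descending) ordering so that scores are comparable across queries. A minimal Python sketch:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of graded relevance
    scores, using the standard log2(rank + 1) discount."""
    return sum(rel / math.log2(i + 2)  # i is 0-based, so rank = i + 1
               for i, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalized DCG: divide by the DCG of the ideal ordering, so a
    perfect ranking scores 1.0 regardless of the query."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

For example, the ranking [3, 2, 0] is already in ideal order and gets nDCG 1.0, while [0, 2, 3] places the most relevant document last and scores lower.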
In information retrieval, dwell time denotes the time which a user spends viewing a document after clicking a link on a search engine results page (SERP). Dwell time is the duration between when a user clicks on a search engine result, and when the user returns from that result, or is otherwise seen to have left the result. It is a relevance indicator of the search result correctly satisfying the intent of the user
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability
In information science and information retrieval, relevance denotes how well a retrieved document or set of documents meets the information need of the user. Relevance may include concerns such as timeliness, authority or novelty of the result. History The concern with the problem of finding relevant information dates back at least to the first publication of scientific journals in the 17th century
Relevance feedback is a feature of some information retrieval systems. The idea behind relevance feedback is to take the results that are initially returned from a given query, to gather user feedback, and to use information about whether or not those results are relevant to perform a new query. We can usefully distinguish between three types of feedback: explicit feedback, implicit feedback, and blind or "pseudo" feedback
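The classic realization of explicit relevance feedback is Rocchio's algorithm (not named in the excerpt): the query vector is moved toward the centroid of documents judged relevant and away from the centroid of those judged non-relevant. A minimal sketch over plain term-weight vectors, with conventional (but not mandated) weights:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback:
    new_q = alpha*q + beta*centroid(relevant) - gamma*centroid(nonrelevant).
    Negative term weights are clipped to zero, as is common in practice."""
    def centroid(docs):
        if not docs:
            return [0.0] * len(query)
        return [sum(d[i] for d in docs) / len(docs) for i in range(len(query))]

    rel_c, non_c = centroid(relevant), centroid(nonrelevant)
    return [max(0.0, alpha * q + beta * r - gamma * n)
            for q, r, n in zip(query, rel_c, non_c)]
```

With implicit or pseudo feedback the same update applies; only the source of the "relevant" set changes (inferred behaviour, or simply the top-ranked results of the initial query).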
The Sørensen–Dice coefficient (see below for other names) is a statistic used to gauge the similarity of two samples. It was independently developed by the botanists Thorvald Sørensen and Lee Raymond Dice, who published in 1948 and 1945 respectively. Name The index is known by several other names, especially Sørensen–Dice index, Sørensen index and Dice's coefficient
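For finite sets X and Y, the coefficient is twice the size of the intersection divided by the sum of the sizes, 2|X ∩ Y| / (|X| + |Y|). A minimal Python sketch:

```python
def dice(x, y):
    """Sørensen-Dice coefficient of two samples, treated as sets:
    twice the overlap divided by the total size of both samples."""
    x, y = set(x), set(y)
    if not x and not y:
        return 1.0  # convention: two empty samples are identical
    return 2 * len(x & y) / (len(x) + len(y))
```

For example, comparing "night" and "nacht" as character sets gives an overlap of {n, h, t}, so the coefficient is 2*3 / (5+5) = 0.6. (Applied to strings, the coefficient is more often computed over bigram sets; the set-based version above is the simplest form.)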
In computer science, Universal IR Evaluation (information retrieval evaluation) aims to develop measures of database retrieval performance that shall be comparable across all information retrieval tasks. Measures of "relevance" IR (information retrieval) evaluation begins whenever a user submits a query (search term) to a database. If the user is able to determine the relevance of each document in the database (relevant or not relevant), then for each query, the complete set of documents is naturally divided into four distinct (mutually exclusive) subsets: relevant documents that are retrieved, not relevant documents that are retrieved, relevant documents that are not retrieved, and not relevant documents that are not retrieved
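The four subsets above are exactly the cells of a 2x2 contingency table, from which the standard measures precision and recall follow directly. A minimal sketch:

```python
def ir_counts(retrieved, relevant, collection):
    """Split a document collection into the four mutually exclusive
    subsets of IR evaluation, then derive precision and recall."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = retrieved & relevant                    # relevant, retrieved
    fp = retrieved - relevant                    # not relevant, retrieved
    fn = relevant - retrieved                    # relevant, not retrieved
    tn = set(collection) - retrieved - relevant  # neither
    precision = len(tp) / len(retrieved) if retrieved else 0.0
    recall = len(tp) / len(relevant) if relevant else 0.0
    return tp, fp, fn, tn, precision, recall
```

For instance, retrieving documents {1, 2, 3, 4} from a ten-document collection whose relevant set is {3, 4, 5} gives precision 2/4 and recall 2/3.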
The Clearinghouse for Networked Information Discovery and Retrieval or CNIDR was an organization funded by the U.S. National Science Foundation from 1993 to 1997 and based at the Microelectronics Center of North Carolina (MCNC) in Research Triangle Park
The Conference and Labs of the Evaluation Forum (formerly Cross-Language Evaluation Forum), or CLEF, is an organization promoting research in multilingual information access (currently focusing on European languages). Its specific functions are to maintain an underlying framework for testing information retrieval systems and to create repositories of data for researchers to use in developing comparable standards. The organization has held a conference in Europe every September since a first constituting workshop in 2000
DataNet, or Sustainable Digital Data Preservation and Access Network Partner, was a research program of the U.S. National Science Foundation Office of Cyberinfrastructure
The European Summer School in Information Retrieval (ESSIR) is a scientific event founded in 1990, which starts off a series of Summer Schools to provide high-quality teaching of information retrieval on advanced topics. ESSIR is typically a week-long event consisting of guest lectures and seminars from invited lecturers who are recognized experts in the field. The aim of ESSIR is to give to its participants a common ground in different aspects of Information Retrieval (IR)
IFACnet, the KnowledgeNet for Professional Accountants, was the global, multilingual search engine developed by the International Federation of Accountants (IFAC) and its members to provide professional accountants worldwide with one-stop access to good practice guidance, articles, management tools and other resources. This enterprise search engine was launched on October 2, 2006, by INDEZ. Originally marketed to professional accountants in business, IFACnet was expanded in March 2007 to provide resources and information relevant to small and medium accounting practices
The International Society for Music Information Retrieval (ISMIR) is an international forum for research on the organization of music-related data. It started as an informal group steered by an ad hoc committee in 2000 which established a yearly symposium - whence "ISMIR", which meant International Symposium on Music Information Retrieval. It was turned into a conference in 2002 while retaining the acronym
SIGIR is the Association for Computing Machinery's Special Interest Group on Information Retrieval. The scope of the group's specialty is the theory and application of computers to the acquisition, organization, storage, retrieval and distribution of information; emphasis is placed on working with non-numeric information, ranging from natural language to highly structured databases. The annual international SIGIR conference, which began in 1978, is considered the most important in the field of information retrieval
The Text REtrieval Conference (TREC) is an ongoing series of workshops focusing on a list of different information retrieval (IR) research areas, or tracks. It is co-sponsored by the National Institute of Standards and Technology (NIST) and the Intelligence Advanced Research Projects Activity (part of the office of the Director of National Intelligence), and began in 1992 as part of the TIPSTER Text program. Its purpose is to support and encourage research within the information retrieval community by providing the infrastructure necessary for large-scale evaluation of text retrieval methodologies and to increase the speed of lab-to-product transfer of technology
https://huggingface.co/datasets/fmars/wiki_stem
agrep (approximate grep) is an open-source approximate string matching program, developed by Udi Manber and Sun Wu between 1988 and 1991, for use with the Unix operating system. It was later ported to OS/2, DOS, and Windows. It selects the best-suited algorithm for each query from among several of the fastest known built-in string-searching algorithms, including Manber and Wu's bitap algorithm, which is based on Levenshtein distance
https://huggingface.co/datasets/fmars/wiki_stem
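A minimal sketch of the Levenshtein (edit) distance that agrep's approximate matching is based on, using the classic dynamic-programming recurrence; this illustrates the distance measure only, not agrep's bit-parallel bitap implementation:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance: minimum number of insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]

levenshtein("kitten", "sitting")  # 3
```

An approximate matcher like agrep reports a hit when this distance between the pattern and some substring of the text falls within the user's error bound.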
BRS/Search is a full-text database and information retrieval system. BRS/Search uses a fully inverted indexing system to store, locate, and retrieve unstructured data. It was the search engine that, in 1977, powered the commercial operations of Bibliographic Retrieval Services (BRS) with 20 databases (including the first national commercial availability of MEDLINE); it has changed ownership several times during its development and is currently sold as Livelink ECM Discovery Server by Open Text Corporation
https://huggingface.co/datasets/fmars/wiki_stem
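The "fully inverted index" at the heart of systems like BRS/Search maps every term to the documents containing it, so a query never scans the raw text. A toy sketch (the naive whitespace tokenizer is an illustrative simplification):

```python
from collections import defaultdict

def build_inverted_index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercased term to the set of document IDs containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {
    "d1": "full text retrieval",
    "d2": "text database search",
}
index = build_inverted_index(docs)
index["text"]  # {'d1', 'd2'}
```

A conjunctive query is then just a set intersection over the posting sets of its terms.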
The EXtensible Cross-Linguistic Automatic Information Machine (EXCLAIM) was an integrated tool for cross-language information retrieval (CLIR), created at the University of California, Santa Cruz in early 2006, with some support for more than a dozen languages. The lead developers were Justin Nuger and Jesse Saba Kirchner. Early work on CLIR depended on manually constructed parallel corpora for each pair of languages
https://huggingface.co/datasets/fmars/wiki_stem
In Unix-like and some other operating systems, find is a command-line utility that locates files based on some user-specified criteria and either prints the pathname of each matched object or, if another action is requested, performs that action on each matched object. It initiates a search from a desired starting location and then recursively traverses the nodes (directories) of a hierarchical structure (typically a tree). find can traverse and search through different file systems on partitions belonging to one or more storage devices mounted under the starting directory
https://huggingface.co/datasets/fmars/wiki_stem
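The recursive traversal that find performs can be sketched with Python's standard library; this loosely mimics `find root -name pattern` and is an illustration, not a reimplementation of the utility:

```python
import fnmatch
import os

def find(root: str, pattern: str = "*"):
    """Recursively yield paths under `root` whose basename matches
    the shell-style glob `pattern`."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            if fnmatch.fnmatch(name, pattern):
                yield os.path.join(dirpath, name)
```

os.walk performs the depth-wise traversal of the directory tree; fnmatch supplies the same glob matching that `-name` uses.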
Indexing Service (originally called Index Server) was a Windows service that maintained an index of most of the files on a computer to improve searching performance on PCs and corporate computer networks. It updated indexes without user intervention. In Windows Vista it was replaced by the newer Windows Search Indexer
https://huggingface.co/datasets/fmars/wiki_stem
The MAtrixware REsearch Collection (MAREC) is a standardised patent data corpus available for research purposes. MAREC seeks to represent patent documents of several languages in order to answer specific research questions. It consists of 19 million patent documents in different languages, normalised to a highly specific XML schema
https://huggingface.co/datasets/fmars/wiki_stem
The following outline is provided as an overview of and topical guide to search engines. Search engine – information retrieval system designed to help find information stored on a computer system. The search results are usually presented as a list, and are commonly called hits
https://huggingface.co/datasets/fmars/wiki_stem
RetrievalWare is an enterprise search engine emphasizing natural language processing and semantic networks which was commercially available from 1992 to 2007 and is especially known for its use by government intelligence agencies. History RetrievalWare was initially created by Paul Nelson, Kenneth Clark, and Edwin Addison as part of ConQuest Software. Development began in 1989, but the software was not commercially available on a wide scale until 1992
https://huggingface.co/datasets/fmars/wiki_stem
In computer networks, a reverse DNS lookup or reverse DNS resolution (rDNS) is the querying technique of the Domain Name System (DNS) to determine the domain name associated with an IP address – the reverse of the usual "forward" DNS lookup of an IP address from a domain name. The process of reverse resolving of an IP address uses PTR records. rDNS involves searching domain name registry and registrar tables
https://huggingface.co/datasets/fmars/wiki_stem
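The PTR query name for an IPv4 address is built by reversing its octets under the in-addr.arpa zone; a small sketch of that construction:

```python
def ptr_name(ipv4: str) -> str:
    """Build the PTR query name for an IPv4 address: the four octets
    reversed, suffixed with the in-addr.arpa zone."""
    return ".".join(reversed(ipv4.split("."))) + ".in-addr.arpa"

ptr_name("192.0.2.1")  # '1.2.0.192.in-addr.arpa'

# The lookup itself is done by a resolver; in Python's standard library,
# socket.gethostbyaddr("192.0.2.1") performs a reverse lookup
# (it raises an error if no PTR record exists).
```

Only the name construction is shown executed here; the actual resolution requires network access to a DNS server.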
A reverse telephone directory (also known as a gray pages directory, criss-cross directory or reverse phone lookup) is a collection of telephone numbers and associated customer details. However, unlike a standard telephone directory, where the user looks up a customer's details (such as name and address) in order to retrieve the telephone number of that person or business, a reverse telephone directory allows users to search by a telephone service number in order to retrieve the customer details for that service. Reverse telephone directories are used by law enforcement and other emergency services in order to determine the origin of any request for assistance; however, these systems include both publicly accessible (listed) and private (unlisted) services
https://huggingface.co/datasets/fmars/wiki_stem
A search engine is an information retrieval system designed to help find information stored on a computer system. It is an information retrieval software program that discovers, crawls, transforms, and stores information for retrieval and presentation in response to user queries. The search results are usually presented in a list and are commonly called hits
https://huggingface.co/datasets/fmars/wiki_stem
A database search engine is a search engine that operates on material stored in a digital database. Search engines Categories of search engine software include: Web search or full-text search (e.g.
https://huggingface.co/datasets/fmars/wiki_stem
TeLQAS (Telecommunication Literature Question Answering System) is an experimental question answering system developed for answering English questions in the telecommunications domain. Architecture TeLQAS includes three main subsystems: an online subsystem, an offline subsystem, and an ontology. The online subsystem answers questions submitted by users in real time
https://huggingface.co/datasets/fmars/wiki_stem
Trip is a free clinical search engine. Its primary function is to help clinicians identify the best available evidence with which to answer clinical questions. Its roots are firmly in the world of evidence-based medicine
https://huggingface.co/datasets/fmars/wiki_stem
Instant indexing is a feature offered by Internet search engines that enables users to submit content for immediate inclusion into the index. Delayed inclusion Certain search engine services may require an extended period of time for inclusion, which is seen as a delay and causes frustration to website administrators who wish to have their websites appear in search engine results. Delayed inclusion may be due to the size of the index that the service must maintain or due to corporate, political or social policies
https://huggingface.co/datasets/fmars/wiki_stem
Prospective search, or persistent search, is a method of searching which determines which of a set of queries matches content in a corpus. Other names include document routing and percolate queries. It is sometimes called reverse search, but that can also refer to finding documents similar to a given document
https://huggingface.co/datasets/fmars/wiki_stem
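Prospective search inverts the usual retrieval flow: queries are stored, and each incoming document is matched against them. A minimal percolate-style sketch, where a stored query matches when all of its terms appear in the document (subscriber names and queries are illustrative):

```python
def matches(query_terms: set[str], document: str) -> bool:
    """A stored query matches if every one of its terms appears in the document."""
    return query_terms <= set(document.lower().split())

# Persistent queries, keyed by subscriber.
stored_queries = {
    "alice": {"mars", "rover"},
    "bob": {"asteroid"},
}

def route(document: str) -> list[str]:
    """Return the subscribers whose stored query matches the new document."""
    return [who for who, q in stored_queries.items() if matches(q, document)]

route("NASA lands a new Mars rover")  # ['alice']
```

Production systems index the stored queries themselves so a document is only tested against the few queries that could possibly match.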
The real-time web is a network web using technologies and practices that enable users to receive information as soon as it is published by its authors, rather than requiring that they or their software check a source periodically for updates. Difference from real-time computing The real-time web is different from real-time computing in that there is no knowing when, or if, a response will be received. The information types transmitted this way are often short messages, status updates, news alerts, or links to longer documents
https://huggingface.co/datasets/fmars/wiki_stem
Search/Retrieve via URL (SRU) is a standard search protocol for Internet search queries, utilizing Contextual Query Language (CQL), a standard query syntax for representing queries. SRU and the related Search/Retrieve via Web (SRW) service were created as part of the ZING (Z39.50 International: Next Generation) initiative as successors to the Z39.50 protocol
https://huggingface.co/datasets/fmars/wiki_stem
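An SRU searchRetrieve request is an ordinary URL whose parameters carry the CQL query. A sketch of constructing one (the endpoint is hypothetical; the parameter names `operation`, `version`, `query`, and `maximumRecords` come from the SRU standard):

```python
from urllib.parse import urlencode

base = "https://example.org/sru"  # hypothetical SRU endpoint
params = {
    "operation": "searchRetrieve",
    "version": "1.2",
    "query": 'dc.title = "information retrieval"',  # a CQL query
    "maximumRecords": "10",
}
url = base + "?" + urlencode(params)
```

The server answers with an XML response listing matching records, which any HTTP client can then fetch and parse.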
A spider trap (or crawler trap) is a set of web pages that may intentionally or unintentionally be used to cause a web crawler or search bot to make an infinite number of requests or cause a poorly constructed crawler to crash. Web crawlers are also called web spiders, from which the name is derived. Spider traps may be created to "catch" spambots or other crawlers that waste a website's bandwidth
https://huggingface.co/datasets/fmars/wiki_stem
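The standard defences against spider traps are a visited set, a depth cap, and a page budget, any of which bounds a crawl even when pages generate fresh URLs forever. A sketch, with `get_links` standing in for fetching and parsing a page:

```python
def crawl(start: str, get_links, max_depth: int = 5, max_pages: int = 100):
    """Breadth-first crawl bounded by a visited set, a depth cap,
    and a total page budget."""
    seen = {start}
    frontier = [(start, 0)]
    visited_order = []
    while frontier and len(visited_order) < max_pages:
        url, depth = frontier.pop(0)
        visited_order.append(url)
        if depth >= max_depth:
            continue  # do not expand links beyond the depth cap
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                frontier.append((link, depth + 1))
    return visited_order

# A synthetic trap: every page links to a fresh, ever-deeper URL.
trap = lambda url: [url + "/next"]
crawl("http://example.org", trap, max_depth=3)  # terminates despite the trap
```

Without the caps, the same crawler would follow the trap's generated links indefinitely.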
A sponsored search auction (SSA), also known as a keyword auction, is an indispensable part of the business model of modern web hosts. It refers to results from a search engine that are not output by the main search algorithm, but rather clearly separate advertisements paid for by third parties. These advertisements are typically related to the terms for which the user searched
https://huggingface.co/datasets/fmars/wiki_stem
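Sponsored search slots are commonly allocated by a generalized second-price (GSP) auction: bidders are ranked by bid, and each winner pays the next-highest bid. A simplified sketch that ignores the quality/click-through weighting real ad platforms apply:

```python
def gsp(bids: dict[str, float], slots: int) -> list[tuple[str, float]]:
    """Generalized second-price auction: rank bidders by bid; the winner of
    each slot pays the bid of the bidder ranked just below (or 0 if none)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i, (bidder, _) in enumerate(ranked[:slots]):
        pays = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((bidder, pays))
    return results

gsp({"a": 4.0, "b": 2.5, "c": 1.0}, slots=2)  # [('a', 2.5), ('b', 1.0)]
```

Note that each winner's price is set by a rival's bid, not their own, which is what distinguishes second-price from first-price mechanisms.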
NASA's Long Duration Exposure Facility, or LDEF (pronounced "eldef"), was a school bus-sized cylindrical facility designed to provide long-term experimental data on the outer space environment and its effects on space systems, materials, operations and selected spores' survival. It was placed in low Earth orbit by Space Shuttle Challenger in April 1984. The original plan called for the LDEF to be retrieved in March 1985, but after a series of delays it was eventually returned to Earth by Columbia in January 1990
https://huggingface.co/datasets/fmars/wiki_stem
Mars 2020 is a Mars rover mission that includes the rover Perseverance, the small robotic helicopter Ingenuity, and associated delivery systems, as part of NASA's Mars Exploration Program. Mars 2020 was launched from Earth on an Atlas V launch vehicle at 11:50:01 UTC on 30 July 2020, and confirmation of touch down in the Martian crater Jezero was received at 20:55 UTC on 18 February 2021. On 5 March 2021, NASA named the landing site of the rover "Octavia E. Butler Landing"
https://huggingface.co/datasets/fmars/wiki_stem
Mars Science Laboratory (MSL) is a robotic space probe mission to Mars launched by NASA on November 26, 2011, which successfully landed Curiosity, a Mars rover, in Gale Crater on August 6, 2012. The overall objectives include investigating Mars' habitability, studying its climate and geology, and collecting data for a human mission to Mars. The rover carries a variety of scientific instruments designed by an international team
https://huggingface.co/datasets/fmars/wiki_stem
OREOcube (ORganics Exposure in Orbit cube) is an experiment designed by the European Space Agency (ESA) with NASA that will investigate the effects of solar and cosmic radiation on selected organic compounds. It will consist of a 12-month orbital study of the effects of the outer space environment on astrobiologically relevant materials in an external exposure facility on the International Space Station (ISS). The project, which will be launched sometime in 2016, will examine the evolution of complex organic molecules in outer space, as well as the forms in which prebiotic organic compounds have been preserved
https://huggingface.co/datasets/fmars/wiki_stem
OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer) is a NASA asteroid-study and sample-return mission. The mission's primary goal is to obtain a sample of at least 60 g (2.1 oz) from 101955 Bennu, a carbonaceous near-Earth asteroid, and return the sample to Earth for a detailed analysis
https://huggingface.co/datasets/fmars/wiki_stem
Phoenix was an uncrewed space probe that landed on the surface of Mars on May 25, 2008, and operated until November 2, 2008. Phoenix was operational on Mars for 157 sols (161 days). Its instruments were used to assess the local habitability and to research the history of water on Mars
https://huggingface.co/datasets/fmars/wiki_stem