PCVC Speech Dataset : The Kaggle page of the PCVC speech dataset; the PCVC paper on ResearchGate
Persian Speech Corpus : The Persian Speech Corpus is a Modern Persian speech corpus for speech synthesis. The corpus contains phonetic and orthographic transcriptions of about 2.5 hours of Persian speech aligned with recorded speech on the phoneme level, including annotations of word boundaries. Previous spoken corpora...
Persian Speech Corpus : The corpus is downloadable from its website and contains the following: 396 .wav files containing spoken utterances; 396 .lab files containing text utterances; 396 .TextGrid files containing the phoneme labels, with time stamps of the boundaries where these occur in the .wav files; phonetic-transc...
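Persian Speech Corpus : As a rough illustration (not part of the corpus documentation), the phoneme-level .TextGrid annotations could be read in Python with the third-party textgrid package; the file name below is hypothetical, and the snippet assumes the phoneme labels live on interval tiers:
    import textgrid  # pip install textgrid (third-party package, an assumption here)

    # Parse one annotation file and print each phoneme with its time boundaries.
    tg = textgrid.TextGrid.fromFile("utterance_001.TextGrid")
    for tier in tg.tiers:
        for interval in tier:
            if interval.mark:  # skip unlabeled gaps
                print(f"{interval.mark}: {interval.minTime:.3f}-{interval.maxTime:.3f} s")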
Persian Speech Corpus : Comparison of datasets in machine learning
Persian Speech Corpus : The Persian Speech Corpus official website The Arabic Speech Corpus official website The Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
TIMIT : TIMIT is a corpus of phonemically and lexically transcribed speech of American English speakers of different sexes and dialects. Each transcribed element has been delineated in time. TIMIT was designed to further acoustic-phonetic knowledge and automatic speech recognition systems. It was commissioned by DARPA ...
TIMIT : TIMIT contains about five hours of speech: 10 sentences spoken by each of 630 speakers. The sentences were randomly sampled from a corpus of 2342 sentences. The speakers were native speakers of American English, classified under 8 major dialect regions: New England, Northern, North Midland, South Midland, Southern, ...
TIMIT : The TIMIT corpus was an early attempt to create a database of speech samples. It was published in 1988 on CD-ROM and consists of only 10 sentences per speaker. Two 'dialect' sentences were read by each speaker, as well as another 8 sentences selected from a larger set. Each sentence averages...
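TIMIT : A minimal sketch of reading one of TIMIT's time-aligned phonetic transcription (.PHN) files, in which each line carries a start sample, an end sample, and a phone label; the file layout comes from the corpus documentation rather than the text above, and the path shown is illustrative:
    def read_phn(path):
        """Return (start_sample, end_sample, phone) triples from a TIMIT .PHN file."""
        segments = []
        with open(path) as f:
            for line in f:
                start, end, phone = line.split()
                segments.append((int(start), int(end), phone))
        return segments

    # Print the time-aligned phones of one utterance.
    for start, end, phone in read_phn("TRAIN/DR1/FCJF0/SA1.PHN"):
        print(phone, start, end)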
TIMIT : Comparison of datasets in machine learning
TIMIT : TIMIT Acoustic-Phonetic Continuous Speech Corpus
Training, validation, and test data sets : In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions, through building a mathematical model from input data. These input data use...
Training, validation, and test data sets : A training data set is a data set of examples used during the learning process and is used to fit the parameters (e.g., weights) of, for example, a classifier. For classification tasks, a supervised learning algorithm looks at the training data set to determine, or learn, the ...
Training, validation, and test data sets : A validation data set is a data set of examples used to tune the hyperparameters (i.e. the architecture) of a model. It is sometimes also called the development set or the "dev set". An example of a hyperparameter for artificial neural networks includes the number of hidden un...
Training, validation, and test data sets : A test data set is a data set that is independent of the training data set, but that follows the same probability distribution as the training data set. If a model fit to the training data set also fits the test data set well, minimal overfitting has taken place (see figure be...
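Training, validation, and test data sets : A minimal sketch of such a three-way split in Python with scikit-learn (the 60/20/20 ratios and the synthetic data are illustrative assumptions, not something prescribed above):
    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.random.rand(1000, 10)       # illustrative feature matrix
    y = np.random.randint(0, 2, 1000)  # illustrative binary labels

    # Hold out 20% as the test set, then carve a validation set out of the rest.
    X_trainval, X_test, y_trainval, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X_trainval, y_trainval, test_size=0.25, random_state=0)  # 0.25 of 80% = 20%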
Training, validation, and test data sets : Testing is trying something to find out about it ("To put to the proof; to prove the truth, genuineness, or quality of by experiment" according to the Collaborative International Dictionary of English) and to validate is to prove that something is valid ("To confirm; to render...
Training, validation, and test data sets : In order to get more stable results and use all valuable data for training, a data set can be repeatedly split into different training and validation data sets. This is known as cross-validation. To confirm the model's performance, an additional test data set held out from cro...
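Training, validation, and test data sets : A minimal sketch of k-fold cross-validation with scikit-learn (the data set and model are illustrative assumptions); each example serves in a validation fold exactly once, and a held-out test set would still be needed to confirm final performance:
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)

    # 5-fold cross-validation: train on 4 folds, validate on the 5th, rotate.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(f"mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")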
Training, validation, and test data sets : Omissions in the training of algorithms are a major cause of erroneous outputs. Types of such omissions include: particular circumstances or variations that were not included; obsolete data; ambiguous input information; inability to change to new environments; inability to request hel...
Training, validation, and test data sets : Statistical classification List of datasets for machine learning research Hierarchical classification == References ==
Amazon Rekognition : Amazon Rekognition is a cloud-based software as a service (SaaS) computer vision platform that was launched in 2016. It has been sold to, and used by, a number of United States government agencies, including U.S. Immigration and Customs Enforcement (ICE) and Orlando, Florida police, as well as priv...
Amazon Rekognition : Rekognition provides a number of computer vision capabilities, which can be divided into two categories: Algorithms that are pre-trained on data collected by Amazon or its partners, and algorithms that a user can train on a custom dataset. As of July 2019, Rekognition provides the following compute...
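Amazon Rekognition : A minimal sketch of calling one of the pre-trained capabilities (label detection) through the boto3 SDK; the bucket name, object key, and thresholds are hypothetical, and configured AWS credentials are assumed:
    import boto3

    client = boto3.client("rekognition", region_name="us-east-1")

    # Detect up to 10 labels in an image already stored in S3.
    response = client.detect_labels(
        Image={"S3Object": {"Bucket": "example-bucket", "Name": "photo.jpg"}},
        MaxLabels=10,
        MinConfidence=80,
    )
    for label in response["Labels"]:
        print(label["Name"], round(label["Confidence"], 1))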
Amazon Rekognition : Amazon Lex Amazon Mechanical Turk Amazon Polly Amazon SageMaker Amazon Web Services Facial recognition system Timeline of Amazon Web Services == References ==
Angoss : Angoss Software Corporation was a provider of predictive analytics systems through software licensing and services. Headquartered in Toronto, Ontario, Canada, with offices in the United States and the UK, it was acquired by Datawatch and is now owned by Altair. Angoss' customers represent industries including finance, insu...
Angoss : KnowledgeREADER is an integrated customer intelligence product combining visual text discovery and predictive analytics for customer experience management. KnowledgeSEEKER is a data mining product. Its features include data profiling, data visualization and decision tree analysis. It was first released in 1990...
Angoss : FundGUARD is software as a service for marketing, sales targeting and predictive leads for mutual funds and wealth management companies. ClaimGUARD is a fraud and abuse detection service. Cloud on-demand software is offered for KnowledgeSEEKER, KnowledgeSTUDIO and its text analytics module. KnowledgeSCORE for ...
Angoss : List of statistical packages Predictive analytics
Angoss : Official website
Anne O'Tate : Anne O'Tate is a free, web-based application that analyses sets of records identified on PubMed, the bibliographic database of articles from over 5,500 biomedical journals worldwide. While PubMed has its own wide range of search options to identify sets of records relevant to a researcher's query, it lacks ...
Anne O'Tate : Once a set of articles has been identified using Anne O’Tate with its PubMed-like interface and search syntax, the set can be analysed and words and concepts mentioned in specific 'fields' (sections) of PubMed records can be displayed in order of frequency. ‘Fields’ which Anne O’Tate can display in this m...
Anne O'Tate : Anne O'Tate (a pun on the word 'annotate') was developed by Neil R Smalheiser and a team of researchers from the University of Illinois at Chicago. It is part of the Arrowsmith Project, which developed tools such as "Arrowsmith" proper, a text-comparison application, "Adam", a database of medical abbreviations, and '...
Anne O'Tate : A wide range of text-mining applications for PubMed have been developed, each with its own interface, such as GoPubMed, ClusterMed, or PubReMiner. Only Anne O'Tate uses PubMed's standard interface, search syntax, and some of its functionality.
Anne O'Tate : Anne O'Tate PubMed Home Page Medical Subject Headings Fact Sheet "The Arrowsmith Project Homepage". University of Illinois at Chicago, Department of Psychiatry. December 20, 2007. Retrieved July 4, 2011.
Aphelion (software) : The Aphelion Imaging Software Suite is a software suite that includes three base products - Aphelion Lab, Aphelion Dev, and Aphelion SDK for addressing image processing and image analysis applications. The suite also includes a set of extension programs to implement specific vertical applications ...
Aphelion (software) : The development of Aphelion started in 1995 as a joint project of a French company, ADCIS S.A., and an American company, Amerinex Applied Imaging, Inc. (AAI). Aphelion's image processing and analysis functions were built from operators available in the KBVision software developed and sold by Ameri...
Aphelion (software) : Aphelion is a software suite to be used for image processing and image analysis. It supports 2D and 3D, monochrome, color, and multi-band images. It is developed by ADCIS, a French software house located in Saint-Contest, Calvados, Normandy. Aphelion is widely used in the scientific/industry commu...
Aphelion (software) : The Aphelion Imaging Software Suite is used by students, researchers, engineers, and software developers in many application domains involving image processing and computer vision, such as: security (surveillance, object tracking) remote sensing quality control for the industry and inspection appl...
Aphelion (software) : All products of the Aphelion Imaging Software Suite can be run on a PC with a 32- or 64-bit version of Windows (Vista, 7, 8, 8.1, or 10). Online help and video tutorials are available to the user.
Aphelion (software) : Below is a list of Aphelion optional extensions: 3D Image Processing and 3D Image Display: A set of extensions to display and process 3D images. The 3D display extension is based on the VTK software product. 3D Skeletonization: Extension to compute the 3D skeleton. Image Registration: Image regist...
Aphelion (software) : Official website Version history
BigDL : BigDL is a distributed deep learning framework for Apache Spark, created by Jason Dai at Intel. BigDL has its source code hosted on GitHub.
BigDL : Comparison of deep learning software == References ==
CellCognition : CellCognition is a free open-source computational framework for quantitative analysis of high-throughput fluorescence microscopy (time-lapse) images in the field of bioimage informatics and systems microscopy. The CellCognition framework uses image processing, computer vision and machine learning techni...
CellCognition : CellCognition uses a computational pipeline which includes image segmentation, object detection, feature extraction, statistical classification, tracking of individual cells over time, detection of class-transition motifs (e.g. cells entering mitosis), and HMM correction of classification errors on clas...
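CellCognition : The HMM correction step can be pictured as a Viterbi decode over the per-frame class probabilities; the sketch below is a generic illustration of that idea (not CellCognition's own code), penalizing improbable class transitions to smooth spurious flips in a cell's predicted state sequence:
    import numpy as np

    def viterbi_smooth(frame_probs, trans, init):
        """frame_probs: (T, K) classifier probabilities per frame;
        trans: (K, K) class-transition matrix; init: (K,) initial distribution.
        Returns the most probable corrected class label per frame."""
        T, K = frame_probs.shape
        log_p = np.log(frame_probs + 1e-12)
        log_t = np.log(trans + 1e-12)
        score = np.log(init + 1e-12) + log_p[0]
        back = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            cand = score[:, None] + log_t   # score of prev state -> current state
            back[t] = cand.argmax(axis=0)
            score = cand.max(axis=0) + log_p[t]
        path = np.zeros(T, dtype=int)
        path[-1] = score.argmax()
        for t in range(T - 2, -1, -1):      # backtrack the best path
            path[t] = back[t + 1, path[t + 1]]
        return path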
CellCognition : CellCognition (Version 1.0.1) was first released in December 2009 by scientists from the Gerlich Lab and the Buhmann group at the Swiss Federal Institute of Technology Zürich and the Ellenberg Lab at the European Molecular Biology Laboratory Heidelberg. The latest release is 1.6.1 and the software is de...
CellCognition : CellCognition has been used in RNAi-based screening, applied in basic cell cycle study, and extended to unsupervised modeling.
CellCognition : Official website CellCognition on GitHub
DADiSP : DADiSP (Data Analysis and Display, pronounced day-disp) is a numerical computing environment developed by DSP Development Corporation which allows one to display and manipulate data series, matrices and images with an interface similar to a spreadsheet. DADiSP is used in the study of signal processing, numeric...
DADiSP : DADiSP is designed to perform technical data analysis in a spreadsheet-like environment. However, unlike a typical business spreadsheet that operates on a table of cells, each of which contains a single scalar value, a DADiSP Worksheet consists of multiple interrelated windows where each window contains an entire...
DADiSP : DADiSP includes a series based programming language called SPL (Series Processing Language) used to implement custom algorithms. SPL has a C/C++ like syntax and is incrementally compiled into intermediate bytecode, which is executed by a virtual machine. SPL supports both standard variables assigned with = and...
DADiSP : DADiSP was originally developed in the early 1980s, as part of a research project at MIT to explore the aerodynamics of Formula One racing cars. The original goal of the project was to enable researchers to quickly explore data analysis algorithms without the need for traditional programming.
DADiSP : DADiSP 6.7 B02, Jan 2017 DADiSP 6.7 B01, Oct 2015 DADiSP 6.5 B05, Dec 2012 DADiSP 6.5, May 2010 DADiSP 6.0, Sep 2002 DADiSP 5.0, Oct 2000 DADiSP 4.1, Dec 1997 DADiSP 4.0, Jul 1995 DADiSP 3.01, Feb 1993 DADiSP 2.0, Feb 1992 DADiSP 1.05, May 1989 DADiSP 1.03, Apr 1987
DADiSP : List of numerical-analysis software Comparison of numerical-analysis software
DADiSP : Allen Brown, Zhang Jun: First Course In Digital Signal Processing Using DADiSP, Abramis, ISBN 9781845495022 Charles Stephen Lessard: Signal Processing of Random Physiological Signals (Google eBook), Morgan & Claypool Publishers
DADiSP : DSP Development Corporation (DADiSP vendor) DADiSP Online Help DADiSP Tutorials Getting Started with DADiSP Introduction to DADiSP
Data Mining Extensions : Data Mining Extensions (DMX) is a query language for data mining models supported by Microsoft's SQL Server Analysis Services product. Like SQL, it supports a data definition language (DDL), data manipulation language (DML) and a data query language (DQL), all three with SQL-like syntax. Wherea...
Data Mining Extensions : DMX queries are formulated using the SELECT statement. They can extract information from existing data mining models in various ways.
Data Mining Extensions : The data definition language (DDL) part of DMX can be used to: create new data mining models and mining structures (CREATE MINING STRUCTURE, CREATE MINING MODEL); delete existing data mining models and mining structures (DROP MINING STRUCTURE, DROP MINING MODEL); export and import mining structure...
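Data Mining Extensions : For instance, a mining model might be created with a DDL statement along these lines (the model name, columns, and algorithm choice are hypothetical, used only to illustrate the SQL-like syntax):
    CREATE MINING MODEL [Loan Model] (
        [Customer Key] LONG KEY,
        [Age] LONG CONTINUOUS,
        [Gender] TEXT DISCRETE,
        [Interested In Home Loan] TEXT DISCRETE PREDICT
    ) USING Microsoft_Decision_Trees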
Data Mining Extensions : The data manipulation language (DML) part of DMX can be used to: train mining models (INSERT INTO); browse data in mining models (SELECT FROM); make predictions using a mining model (SELECT ... FROM PREDICTION JOIN).
Data Mining Extensions : This example is a singleton prediction query, which predicts for the given customer whether she will be interested in home loan products.
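Data Mining Extensions : A sketch of what such a singleton prediction query could look like, reusing the hypothetical [Loan Model] above; NATURAL PREDICTION JOIN matches the supplied input columns to the model's columns by name:
    SELECT
        [Loan Model].[Interested In Home Loan],
        PredictProbability([Interested In Home Loan]) AS [Probability]
    FROM
        [Loan Model]
    NATURAL PREDICTION JOIN
    (SELECT 35 AS [Age], 'F' AS [Gender]) AS t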
Data Mining Extensions : Data Mining Extensions (DMX) Reference, (at MSDN)
Data Version Control (software) : DVC is a free, open-source, platform-agnostic version control system for data, machine learning models, and experiments. It is designed to make ML models shareable, experiments reproducible, and to track versions of models, data, and pipelines. DVC works on top of Git repositories and cloud...
Data Version Control (software) : DVC is designed to incorporate the best practices of software development into machine learning workflows. It does this by extending the traditional software tool Git with cloud storage for datasets and machine learning models. Specifically, DVC makes machine learning operations: Codifi...
Data Version Control (software) : DVC stores large files and datasets in separate storage, outside of Git. This storage can be on the user’s computer or hosted on any major cloud storage provider, such as Amazon S3, Google Cloud Storage, and Microsoft Azure Blob Storage. DVC users may also set up a remote repository on...
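Data Version Control (software) : A minimal sketch of retrieving a tracked file through DVC's Python API (the repository URL, file path, and revision are hypothetical); the content is fetched from whatever remote storage the project is configured to use:
    import dvc.api

    # Read the version of the file recorded at Git tag v1.0; the bytes are
    # pulled from the project's configured remote (e.g. S3, GCS, Azure).
    text = dvc.api.read(
        "data/train.csv",
        repo="https://github.com/example/project",
        rev="v1.0",
    )
    print(text[:200])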
Data Version Control (software) : DVC's features can be divided into three categories: data management, pipelines, and experiment tracking.
Data Version Control (software) : In 2022, Iterative released a free extension for Visual Studio Code (VS Code), a source-code editor made by Microsoft, which provides VS Code users with the ability to use DVC in their editors with additional user interface functionality.
Data Version Control (software) : In 2017, the first beta version, DVC 0.6, was publicly released as a simple command-line tool. It allowed data scientists to keep track of their machine learning processes and file dependencies in the simple form of Git-like commands. It also allowed them to transform existing mac...
Data Version Control (software) : There are several open source projects that provide similar data version control capabilities to DVC, such as: Git LFS, Dolt, Nessie, and lakeFS. These projects vary in their fit to the different needs of data engineers and data scientists such as: scalability, supported file formats, ...
Data Version Control (software) : Official website dvc on GitHub VS Code extension
Deep Web Technologies : Deep Web Technologies is a software company that specializes in mining the Deep Web — the part of the Internet that is not directly searchable through ordinary web search engines. The company produces a proprietary software platform "Explorit" for searches. It also produces the federated search ...
Deep Web Technologies : Arnold, Stephen E. (June 10, 2008). "Deep Web Technologies: An Interview with Abe Lederman". ArnoldIT. Retrieved 2015-03-31. Mayfield, Dan (February 6, 2015). "For many companies, raising capital comes with one quirky rule". Albuquerque Business First. Retrieved 2015-03-31. Nguyen, Ivy (April 1,...
Distributed R : Distributed R is an open source, high-performance platform for the R language. It splits tasks between multiple processing nodes to reduce execution time and analyze large data sets. Distributed R enhances R by adding distributed data structures, parallelism primitives to run functions on distributed da...
Distributed R : Distributed R was started in 2011 by Indrajit Roy, Shivaram Venkataraman, Alvin AuYoung, and Robert S. Schreiber as a research project at HP Labs. It was open-sourced in 2014 under the GPLv2 license and is available on GitHub. In February 2015, Distributed R reached its first stable version 1.0, along wit...
Distributed R : Distributed R is a platform to implement and execute distributed applications in R. The goal is to extend R for distributed computing, while retaining the simplicity and look-and-feel of R. Distributed R consists of the following components: Distributed data structures: Distributed R extends R's common ...
Distributed R : HP Vertica provides tight integration between its database and the open source Distributed R platform. HP Vertica 7.1 includes features that enable fast, parallel loading from the Vertica database to Distributed R. This parallel Vertica loader can be more than five times (5x) faster than using traditional...
Distributed R : Official website
Dlib : Dlib is a general purpose cross-platform software library written in the programming language C++. Its design is heavily influenced by ideas from design by contract and component-based software engineering. Thus it is, first and foremost, a set of independent software components. It is open-source software relea...
Dlib : Comparison of deep learning software
Dlib : Official website DLib: Library for Machine Learning
ELKI : ELKI (Environment for Developing KDD-Applications Supported by Index-Structures) is a data mining (KDD, knowledge discovery in databases) software framework developed for use in research and teaching. It was originally created by the database systems research unit at the Ludwig Maximilian University of Munich, G...
ELKI : The ELKI framework is written in Java and built around a modular architecture. Most currently included algorithms address clustering, outlier detection, and database indexing. The object-oriented architecture allows the combination of arbitrary algorithms, data types, distance functions, indexes, and evaluation m...
ELKI : The university project is developed for use in teaching and research. The source code is written with extensibility and reusability in mind, but is also optimized for performance. The experimental evaluation of algorithms depends on many environmental factors and implementation details can have a large impact on...
ELKI : ELKI is modeled around a database-inspired core, which uses a vertical data layout that stores data in column groups (similar to column families in NoSQL databases). This database core provides nearest neighbor search, range/radius search, and distance query functionality with index acceleration for a wide range...
ELKI : The visualization module uses SVG for scalable graphics output, and Apache Batik for rendering of the user interface as well as lossless export into PostScript and PDF for easy inclusion in scientific publications in LaTeX. Exported files can be edited with SVG editors such as Inkscape. Since cascading style she...
ELKI : Version 0.4, presented at the "Symposium on Spatial and Temporal Databases" 2011, which included various methods for spatial outlier detection, won the conference's "best demonstration paper award".
ELKI : Selected included algorithms: Cluster analysis: K-means clustering (including fast algorithms such as Elkan, Hamerly, Annulus, and Exponion k-Means, and robust variants such as k-means--); K-medians clustering; K-medoids clustering (PAM, including FastPAM and approximations such as CLARA, CLARANS); Expectation-maxim...
ELKI : Version 0.1 (July 2008) contained several algorithms from cluster analysis and anomaly detection, as well as some index structures such as the R*-tree. The focus of the first release was on subspace clustering and correlation clustering algorithms. Version 0.2 (July 2009) added functionality for time series anal...
ELKI : scikit-learn: machine learning library in Python Weka: A similar project by the University of Waikato, with a focus on classification algorithms RapidMiner: An application available commercially (a restricted version is available as open source) KNIME: An open source platform which integrates various components ...
ELKI : Comparison of statistical packages
ELKI : Official website of ELKI with download and documentation.
Feature Selection Toolbox : Feature Selection Toolbox (FST) is software primarily for feature selection in the machine learning domain, written in C++, developed at the Institute of Information Theory and Automation (UTIA), of the Czech Academy of Sciences.
Feature Selection Toolbox : The first generation of Feature Selection Toolbox (FST1) was a Windows application with user interface allowing users to apply several sub-optimal, optimal and mixture-based feature selection methods on data stored in a trivial proprietary textual flat file format.
Feature Selection Toolbox : The third generation of Feature Selection Toolbox (FST3) was a library without user interface, written to be more efficient and versatile than the original FST1. FST3 supports several standard data mining tasks, more specifically, data preprocessing and classification, but its main focus is ...
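Feature Selection Toolbox : FST itself is a C++ library, but the greedy (sub-optimal) sequential search it is built around can be illustrated in Python with scikit-learn, an analogous tool rather than FST; the data set and feature count below are illustrative assumptions:
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # Greedy sequential forward selection of 5 features, scored by
    # cross-validated k-NN accuracy (a wrapper-based method).
    selector = SequentialFeatureSelector(KNeighborsClassifier(), n_features_to_select=5)
    selector.fit(X, y)
    print(selector.get_support())  # boolean mask of the selected features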
Feature Selection Toolbox : In 1999, development of the first Feature Selection Toolbox version started at UTIA as part of a PhD thesis. It was originally developed in the Optima++ (later renamed Power++) RAD C++ environment. In 2002, development of the first FST generation was suspended, mainly due to the end of Syba...
Feature Selection Toolbox : Feature selection Pattern recognition Machine learning Data mining OpenNN, Open neural networks library for predictive analytics Weka, comprehensive and popular Java open-source software from University of Waikato RapidMiner, formerly Yet Another Learning Environment (YALE) a commercial mach...
FICO : FICO (legal name: Fair Isaac Corporation), originally Fair, Isaac and Company, is an American data analytics company based in Bozeman, Montana, focused on credit scoring services. It was founded by Bill Fair and Earl Isaac in 1956. Its FICO score, a measure of consumer credit risk, has become a fixture of consum...
FICO : FICO was founded in 1956 as Fair, Isaac and Company by engineer William R. "Bill" Fair and mathematician Earl Judson Isaac. The two met while working at the Stanford Research Institute in Menlo Park, California. Selling its first credit scoring system two years after the company's creation, FICO pitched its syst...
FICO : DynaMark (1992); Risk Management Technologies (1997); Prevision (1997); Nykamp Consulting Group (2001); HNC Software (2002); NAREX (2003); Diversified Healthcare Services (2003); Seurat (2003); London Bridge Software (2004); Braun Consulting (2004); RulesPower (2005); Dash Optimization (2008); Entiera (2012); Adeptra (2012); CR Software (2012); Infoglide...
FICO : In March 2020, the US Department of Justice (DOJ) opened an antitrust investigation into FICO, which was reported to be closed in December 2020. In March 2024, US Senator Josh Hawley sent a letter to the DOJ's Antitrust Division urging them to open an investigation into FICO for anti-competitive practices, stati...
FICO : FICO is headquartered in Bozeman, Montana, and has additional U.S. locations in San Jose, California; Roseville, Minnesota; San Diego, California; San Rafael, California; Fairfax, Virginia; and Austin, Texas. The company has international locations in Australia, Brazil, Canada, China, Germany, India, Italy, Ja...
FICO : A measure of credit risk, FICO scores are available through all of the major consumer reporting agencies in the United States: Equifax, Experian, and TransUnion. FICO scores are also offered in other markets, including Mexico and Canada, as well as through the fourth U.S. credit reporting bureau, PRBC.
FICO : Official website Business data for Fair Isaac Corporation: How Does FICO Calculate a Score?