id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
Patt/ReCoRD_TH_drop | 2023-07-20T15:29:42.000Z | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | 0 | 16 | 2023-06-22T13:34:05 | ---
task_categories:
- text-classification
language:
- en
- th
dataset_info:
features:
- name: passage
dtype: string
- name: passage_TH
dtype: string
- name: query
dtype: string
- name: query_TH
dtype: string
- name: entities
sequence: string
- name: entities_TH
sequence: string
- name: entity_spans
struct:
- name: end
sequence: int64
- name: start
sequence: int64
- name: text
sequence: string
- name: answers
sequence: string
- name: answers_TH
sequence: string
- name: idx
struct:
- name: passage
dtype: int64
- name: query
dtype: int64
- name: score_passage
dtype: float64
- name: score_query
dtype: float64
- name: score_entities
dtype: float64
- name: score_answers
dtype: float64
splits:
- name: train
num_bytes: 281547282
num_examples: 57811
- name: validation
num_bytes: 32258456
num_examples: 6676
download_size: 112999233
dataset_size: 313805738
---
# Dataset Card for ReCoRD_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [ReCoRD](https://huggingface.co/datasets/super_glue/viewer/record), produced with Google Translate; the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) is used to score the quality of each Thai translation.
Every row with score_answers < 0.8 is dropped, as is every row whose score falls below 0.5 after penalty. | 1,425 | [
[
-0.0273590087890625,
-0.036956787109375,
-0.006855010986328125,
0.0176544189453125,
-0.033203125,
-0.007320404052734375,
-0.016021728515625,
-0.0030460357666015625,
0.032684326171875,
0.035552978515625,
-0.060760498046875,
-0.0611572265625,
-0.041412353515625,
... |
tianleliphoebe/DreamEditBench | 2023-06-23T05:05:09.000Z | [
"task_categories:image-to-image",
"task_categories:text-to-image",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"arxiv:2306.12624",
"region:us"
] | tianleliphoebe | null | null | 6 | 16 | 2023-06-23T00:19:24 | ---
license: cc-by-4.0
task_categories:
- image-to-image
- text-to-image
language:
- en
size_categories:
- n<1K
---
## DreamEditBench for the Subject Replacement and Subject Addition tasks
## Dataset Description
- **Homepage:** https://dreameditbenchteam.github.io
- **Repository:** https://github.com/DreamEditBenchTeam/DreamEdit
<!-- **Paper:** https://arxiv.org/abs/2306.12624 -->
The goal of subject replacement is to replace a subject in a source image with a customized subject. In contrast, the aim of the subject addition task is to add a customized subject at a desired position in the source image. To standardize the evaluation of the two proposed tasks, we curate a new benchmark, DreamEditBench, consisting of 22 subjects aligned with DreamBooth, with 20 images per subject. For the subject replacement task, we collect 10 images for each type, featuring same-typed source subjects in diverse environments. The images are retrieved from the internet with the search query “a photo of [Class name]”, and the source subject should be the main subject of the image, dominating a major part of the photo. For the subject addition task, we collect 10 reasonable backgrounds for each type of subject, and we manually designate, with a bounding box, the specific location in the background where the target subject should be placed. To collect the specific backgrounds for each subject, we first brainstorm and list the possible common environments of the subjects, then search the listed keywords on the internet to retrieve and pick the backgrounds.
## Data Structure
There are 22 subject folders in each task folder. Each subject folder contains 10 source images. For the Subject Addition task, there is an additional bbox.json file recording the manually labeled bounding box for each background.
The replacement_subset.csv and addition_subset.csv files record the easy/hard subset division for each task, respectively.
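As a sketch of how the bounding-box annotations might be consumed (the exact key layout of bbox.json is not documented in this card, so the structure below is an assumption):

```python
import json


def load_bboxes(path):
    """Load manually labeled bounding boxes keyed by background image name.

    Assumed layout (not documented in this card):
    {"background_01.jpg": [x_min, y_min, x_max, y_max], ...}
    """
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def bbox_area(bbox):
    # Area of an axis-aligned box given as [x_min, y_min, x_max, y_max].
    x_min, y_min, x_max, y_max = bbox
    return max(0, x_max - x_min) * max(0, y_max - y_min)
```

The area helper is only an illustration of working with the assumed coordinate convention; adapt it once you have inspected the actual file.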
## Citation Information
If you find this dataset useful, please consider citing our paper:
```
@misc{li2023dreamedit,
title={DreamEdit: Subject-driven Image Editing},
author={Tianle Li and Max Ku and Cong Wei and Wenhu Chen},
year={2023},
eprint={2306.12624},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` | 2,346 | [
[
-0.039337158203125,
-0.03662109375,
0.045928955078125,
0.01184844970703125,
-0.0300750732421875,
-0.0102081298828125,
-0.028594970703125,
-0.0389404296875,
0.007965087890625,
0.05133056640625,
-0.048828125,
-0.06396484375,
-0.0285186767578125,
0.002061843872... |
iceberg-nlp/climabench | 2023-09-10T22:05:20.000Z | [
"task_categories:text-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"arxiv:2301.04253",
"region:us"
] | iceberg-nlp | The topic of Climate Change (CC) has received limited attention in NLP despite its real world urgency.
Activists and policy-makers need NLP tools in order to effectively process the vast and rapidly growing textual data produced on CC.
Their utility, however, primarily depends on whether the current state-of-the-art models can generalize across various tasks in the CC domain.
In order to address this gap, we introduce Climate Change Benchmark (Climabench), a benchmark collection of existing disparate datasets for evaluating model performance across a diverse set of CC NLU tasks systematically.
Further, we enhance the benchmark by releasing two large-scale labelled text classification and question-answering datasets curated from publicly available environmental disclosures.
Lastly, we provide an analysis of several generic and CC-oriented models answering whether fine-tuning on domain text offers any improvements across these tasks. We hope this work provides a standard assessment tool for research on CC text data. | @misc{laud2023Climabench,
title={ClimaBench: A Benchmark Dataset For Climate Change Text Understanding in English},
author={Tanmay Laud and Daniel Spokoyny and Tom Corringham and Taylor Berg-Kirkpatrick},
year={2023},
eprint={2301.04253},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 16 | 2023-06-29T22:37:24 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
paperswithcode_id: climabench
pretty_name: "ClimaBench: A Benchmark Dataset For Climate Change Text Understanding in English"
config_names:
- climate_stance
- climate_eng
- climate_fever
- climatext
- clima_insurance
- clima_insurance_plus
- clima_cdp
- clima_qa
---
### Citation Information
```
@misc{spokoyny2023answering,
title={Towards Answering Climate Questionnaires from Unstructured Climate Reports},
author={Daniel Spokoyny and Tanmay Laud and Tom Corringham and Taylor Berg-Kirkpatrick},
year={2023},
eprint={2301.04253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 843 | [
[
-0.0200347900390625,
-0.035400390625,
0.049957275390625,
0.006412506103515625,
-0.0078887939453125,
-0.004638671875,
-0.0140533447265625,
-0.010009765625,
0.04388427734375,
0.013671875,
-0.045318603515625,
-0.0308990478515625,
-0.03570556640625,
0.0159759521... |
FreedomIntelligence/evol-instruct-italian | 2023-08-06T08:13:27.000Z | [
"region:us"
] | FreedomIntelligence | null | null | 1 | 16 | 2023-06-30T03:43:55 | The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | 124 | [
[
-0.0284271240234375,
-0.0214385986328125,
-0.000301361083984375,
0.01971435546875,
-0.004512786865234375,
0.004093170166015625,
-0.0194091796875,
-0.0303192138671875,
0.0289154052734375,
0.033966064453125,
-0.0643310546875,
-0.032958984375,
-0.012969970703125,
... |
Lurunchik/WikiHowNFQA | 2023-07-08T21:16:53.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"multi-document NFQA",
"non-factoid QA",
"region:us"
] | Lurunchik | null | null | 4 | 16 | 2023-07-03T03:14:31 | ---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- multi-document NFQA
- non-factoid QA
pretty_name: wikihowqa
size_categories:
- 10K<n<100K
---
# Dataset Card for WikiHowQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Instances](#data-instances)
- [Data Statistics](#data-statistics)
- [Dataset Information](#dataset-information)
- [Dataset Usage](#dataset-usage)
- [Additional Information](#additional-information)
- [Dataset Curators](#curators)
- [Licensing Information](#license)
- [Citation Information](#citation)
- [Considerations for Using the Data](#considerations)
- [Social Impact of Dataset](#social-impact)
- [Discussion of Biases](#biases)
- [Other Known Limitations](#limitations)
- [Data Loading](#data-loading)
<a name="dataset-description"></a>
## Dataset Description
- **Homepage:** [WikiHowQA Dataset](https://lurunchik.github.io/WikiHowQA/)
- **Repository:** [WikiHowQA Repository](https://github.com/lurunchik/WikiHowQA)
- **Paper:** [WikiHowQA Paper](https://lurunchik.github.io/WikiHowQA/data/ACL_MD_NFQA_dataset.pdf)
- **Leaderboard:** [WikiHowQA Leaderboard](https://lurunchik.github.io/WikiHowQA/leaderboard)
- **Point of Contact:** [Contact](mailto:s3802180@student.rmit.edu.au)
**WikiHowQA** is a unique collection of 'how-to' content from WikiHow, transformed into a rich dataset featuring 11,746 human-authored answers and 74,527 supporting documents. Designed for researchers, it presents an opportunity to tackle the challenges of creating comprehensive answers from multiple documents and of grounding those answers in the real-world context provided by the supporting documents.
<a name="dataset-structure"></a>
## Dataset Structure
### Data Fields
- `article_id`: An integer identifier for the article, corresponding to the `article_id` from the WikiHow API.
- `question`: The non-factoid instructional question.
- `answer`: The human-written answer to the question, corresponding to the human-written article summary on the [WikiHow website](https://www.wikihow.com/Main-Page).
- `related_document_urls_wayback_snapshots`: A list of URLs to web-archive snapshots of related documents, corresponding to the references in the WikiHow article.
- `split`: The split of the dataset that the instance belongs to ('train', 'validation', or 'test').
- `cluster`: An integer identifier for the cluster that the instance belongs to. <!-- The dataset is split into 'train', 'validation', and 'test' such that all instances from the same cluster belong to the same split. This is to ensure that there is no intersection of paraphrased questions across different splits. If you plan to create a new split of the dataset, it is important to maintain this clustering to avoid data leakage between splits. -->
<a name="data-instances"></a>
### Data Instances
An example instance from the WikiHowQA dataset:
```json
{
'article_id': 1353800,
'question': 'How To Cook Pork Tenderloin',
'answer': 'To cook pork tenderloin, put it in a roasting pan and cook it in the oven for 55 minutes at 400 degrees Fahrenheit, turning it over halfway through. You can also sear the pork tenderloin on both sides in a skillet before putting it in the oven, which will reduce the cooking time to 15 minutes. If you want to grill pork tenderloin, start by preheating the grill to medium-high heat. Then, cook the tenderloin on the grill for 30-40 minutes over indirect heat, flipping it occasionally.',
'related_document_urls_wayback_snapshots': ['http://web.archive.org/web/20210605161310/https://www.allrecipes.com/recipe/236114/pork-roast-with-the-worlds-best-rub/', 'http://web.archive.org/web/20210423074902/https://www.bhg.com/recipes/how-to/food-storage-safety/using-a-meat-thermometer/', ...],
'split': 'train',
'cluster': 2635
}
```
<a name="data-statistics"></a>
### Data Statistics
- Number of human-authored answers: 11,746
- Number of supporting documents: 74,527
- Average number of documents per question: 6.3
- Average number of sentences per answer: 3.9
<a name="dataset-information"></a>
### Dataset Information
The WikiHowQA dataset is divided into two parts: the QA part and the Document Content part.
The QA part of the dataset contains questions, answers, and only links to web-archive snapshots of the related HTML pages, and can be downloaded here.
The Document Content part contains the parsed HTML content and is accessible on request, after signing a Data Transfer Agreement with RMIT University.
Each dataset instance includes a question, a set of related documents, and a human-authored answer. The questions are non-factoid, requiring comprehensive, multi-sentence answers. The related documents provide the necessary information to generate an answer.
<a name="dataset-usage"></a>
## Dataset Usage
The dataset is designed for researchers and presents a unique opportunity to tackle the challenges of creating comprehensive answers from multiple documents, and grounding those answers in the real-world context provided by the supporting documents.
<a name="additional-information"></a>
## Additional Information
<a name="curators"></a>
### Dataset Curators
The WikiHowQA dataset was curated by researchers at RMIT University.
<a name="license"></a>
### Licensing Information
The QA dataset part is distributed under the Creative Commons Attribution (CC BY) license.
The Document Content part containing parsed HTML is accessible on request, after signing a Data Transfer Agreement with RMIT University that allows free use of the dataset for research purposes. The form to download and sign is available on the dataset website at the link [].
<a name="citation"></a>
### Citation Information
Please cite the following paper if you use this dataset:
```bibtex
@inproceedings{bolotova2023wikihowqa,
title={WikiHowQA: A Comprehensive Benchmark for Multi-Document Non-Factoid Question Answering},
author={Bolotova, Valeriia and Blinov, Vladislav and Filippova, Sofya and Scholer, Falk and Sanderson, Mark},
  booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics},
year={2023}
}
```
<a name="considerations"></a>
## Considerations for Using the Data
<a name="social-impact"></a>
### Social Impact of the Dataset
The WikiHowQA dataset is a rich resource for researchers interested in question answering, information retrieval, and natural language understanding tasks. It can help in developing models that provide comprehensive answers to how-to questions, which can be beneficial in various applications such as customer support, tutoring systems, and personal assistants. However, as with any dataset, the potential for misuse or unintended consequences exists. For example, a model trained on this dataset might be used to generate misleading or incorrect answers if not properly validated.
<a name="biases"></a>
### Discussion of Biases
The WikiHowQA dataset is derived from WikiHow, a community-driven platform. While WikiHow has guidelines to ensure the quality and neutrality of its content, biases could still be present due to the demographic and ideological characteristics of its contributors. Users of the dataset should be aware of this potential bias.
<a name="limitations"></a>
### Other Known Limitations
The dataset only contains 'how-to' questions and their answers. Therefore, it may not be suitable for tasks that require understanding of other types of questions (e.g., why, what, when, who, etc.). Additionally, while the dataset contains a large number of instances, there may still be topics or types of questions that are underrepresented.
<a name="data-loading"></a>
## Data Loading
There are two primary ways to load the QA dataset part:
1. Directly from the file (if you have the .jsonl file locally, you can load the dataset using the following Python code):
```python
import json

# Read the JSON Lines file: one dataset instance per line.
dataset = []
with open('wikiHowNFQA.jsonl', encoding='utf-8') as f:
    for line in f:
        dataset.append(json.loads(line))
```
This will result in a list of dictionaries, each representing a single instance in the dataset.
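For example, a minimal sketch (assuming each instance carries the `split` field described above) that groups the loaded instances into their train/validation/test buckets:

```python
from collections import defaultdict


def group_by_split(instances):
    """Group dataset instances by their 'split' field ('train', 'validation', or 'test')."""
    buckets = defaultdict(list)
    for instance in instances:
        buckets[instance['split']].append(instance)
    return dict(buckets)
```

This keeps all instances of a cluster together as long as the original split assignment is preserved, which matters for avoiding leakage between paraphrased questions.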
2. From the Hugging Face Datasets Hub:
If the dataset is hosted on the Hugging Face Datasets Hub, you can load it directly using the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset('Lurunchik/WikiHowNFQA')
```
This will return a DatasetDict object, which is a dictionary-like object that maps split names (e.g., 'train', 'validation', 'test') to Dataset objects. You can access a specific split like so: dataset['train']. | 8,647 | [
[
-0.035064697265625,
-0.05047607421875,
0.00832366943359375,
-0.0082244873046875,
-0.01078033447265625,
-0.0034542083740234375,
-0.0010509490966796875,
-0.0113067626953125,
0.0154266357421875,
0.0171356201171875,
-0.04248046875,
-0.049774169921875,
-0.02075195312... |
teleprint-me/phi-1 | 2023-07-08T04:01:52.000Z | [
"license:cc-by-nc-sa-3.0",
"arxiv:2306.11644",
"region:us"
] | teleprint-me | null | null | 31 | 16 | 2023-07-04T03:49:40 | ---
title: 'Phi-1 Model Dataset'
date: '2023-07-03'
license: cc-by-nc-sa-3.0
---
## Dataset Description
- **Homepage:** [teleprint.me](https://teleprint.me)
- **Repository:** [phi-1](https://huggingface.co/datasets/teleprint-me/phi-1)
- **Paper:** [2306.11644v1](https://arxiv.org/abs/2306.11644v1)
- **Leaderboard:** [Link to the leaderboard]
- **Point of Contact:** [aberrio@teleprint.me](aberrio@teleprint.me)
### Dataset Summary
This dataset is created for training the phi-1 model, based on the paper
"Textbooks are All You Need". It contains high-quality data derived from various
textbooks, transformed and synthesized using OpenAI's GPT-3.5 and GPT-4 models.
For optimal results, it is recommended to train models with the following
parameters and sequence lengths:
- For a model with 350M parameters, use a sequence length of 2048.
- For a model with 700M parameters, use a sequence length of 4096.
- For a model with 1.3B parameters, use a sequence length of 8096.
Please note that the dataset is currently in its initial phase of planning and
collection. The process involves preparing the data, extracting it, formatting
it, chunking it, and preparing it for synthesis. Scripts for preparing and
processing the data for the model will be developed. Once the data is generated,
it will undergo a review and revision process to ensure its quality and
relevance.
These recommendations and notes are based on the dataset creator's initial plans
and may be subject to change as the project progresses.
**NOTE**: Due to the nature of this dataset, it cannot be released without
obtaining permissions from the respective publishers and/or authors. If you are
an author or publisher and have any concerns about this repository, please feel
free to email me.
If you are an author or publisher and would like to grant permission for the use
of your work, your support would be greatly appreciated. Please note that in
order for the dataset to be released, permissions would need to be unanimous
from all involved parties.
In the absence of such permissions, I will respect the copyrights of the
copyrighted materials and exercise my right to Fair Use with my own physical
property for personal use.
**This dataset is NOT intended for commercial purposes**. Its primary purpose is
for research in machine learning and AI software development. If a model is
created using this dataset, it will be shared under the same license.
Any proceeds derived from donations will be primarily used for the development
of the dataset and the model.
### Supported Tasks and Leaderboards
- `text-generation`: The dataset can be used to train a model for chat-like text
generation, more specifically, for generating explanations and examples in the
context of arithmetic, algebra, geometry, trigonometry, calculus, algorithms
and data structures, design patterns, and the python programming language.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance consists of a dialogue between a user and an assistant,
discussing a topic in arithmetic, algebra, geometry, trigonometry, calculus,
algorithms and data structures, design patterns, or the Python programming
language. The dialogue is structured as a list of turns, each turn containing
the role ("user" or "assistant") and the content of the turn.
### Data Fields
- `role`: a string indicating the role of the speaker in the dialogue ("system",
"user", "assistant", "function").
- `content`: a string containing the content of the speaker's turn in the
dialogue.
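A single instance under this schema might look like the following (a hypothetical illustration: the turn contents are invented, and only the `role`/`content` structure comes from the field descriptions above):

```python
# Hypothetical example of one dialogue instance; contents are invented for illustration.
dialogue = [
    {"role": "system", "content": "You are a patient math tutor."},
    {"role": "user", "content": "Why does the derivative of x**2 equal 2*x?"},
    {"role": "assistant", "content": "Apply the power rule: d/dx x**n = n*x**(n-1); with n = 2 this gives 2*x."},
]

ALLOWED_ROLES = {"system", "user", "assistant", "function"}


def is_valid_dialogue(turns):
    """Check that every turn has an allowed role and string content."""
    return all(
        turn.get("role") in ALLOWED_ROLES and isinstance(turn.get("content"), str)
        for turn in turns
    )
```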
### Data Splits
The dataset is split into a training set, a validation set, and a test set. The
exact sizes and proportions of these splits will depend on the final size of the
dataset.
## Dataset Creation
### Curation Rationale
The dataset is being created to train a model capable of generating explanations
and examples in the context of various mathematical and computer science topics.
The goal is to create an AI assistant that can provide clear, accurate, and
pedagogically sound responses to user queries on these topics.
### Source Data
#### Initial Data Collection and Normalization
The data is collected from a variety of textbooks covering arithmetic, algebra,
geometry, trigonometry, calculus, algorithms and data structures, design
patterns, and the Python programming language. The textbooks used include:
- Barron's Arithmetic The Easy Way Fourth Edition
- Blitzer Introductory Algebra for College Students Fifth Edition
- McDougal Littell Geometry
- Blitzer Intermediate Algebra for College Students 5th Edition
- Trigonometry Sixth Edition
- Pearson College Algebra Fourth Edition
- Hughes-Hallet Applied Calculus 5th Edition
- CLRS Introduction to Algorithms Third Edition
In addition to the textbooks, the dataset also includes material from the
following online resources:
- [C reference](https://en.cppreference.com/w/c)
- [Cpp reference](https://en.cppreference.com/w/cpp)
- [Python Standard Library](https://docs.python.org/3/)
These resources provide up-to-date information and examples for the C, C++, and
Python programming languages. The creators of the Cppreference site also provide
[archives](https://en.cppreference.com/w/Cppreference:Archives) of their site
for offline use. Code samples synthesized by OpenAI's GPT models, curated by the
dataset creator, are also included in the dataset.
**Note:** The creator of this dataset owns physical copies of all the textbooks
listed above. The data from these sources are transformed into a dialogue format
using OpenAI's GPT-3.5 and GPT-4 models. The resulting dialogues are then used
as the training data for the phi-1 model. This dataset does not include the full
content of the source textbooks. Instead, it consists of transformations and
syntheses of the original content. Anyone who wants access to the full original
content should purchase or otherwise legally access the textbooks themselves.
#### Who are the source language producers?
The original language data was created by a variety of authors and educators,
who wrote the textbooks and other materials used as sources for this dataset.
These include:
- Barron's Arithmetic The Easy Way Fourth Edition - Edward Williams, Katie
Prindle
- Blitzer Introductory Algebra for College Students Fifth Edition - Robert
Blitzer
- McDougal Littell Geometry - Ron Larson, Laurie Boswell, Timothy D. Kanold, Lee
Stiff
- Blitzer Intermediate Algebra for College Students 5th Edition - Robert Blitzer
- Trigonometry Sixth Edition - Charles P. McKeague, Mark D. Turner
- Pearson College Algebra Fourth Edition - Robert F. Blitzer
- Hughes-Hallet Applied Calculus 5th Edition - Deborah Hughes-Hallett, Andrew M.
Gleason, Patti Frazer Lock, Daniel E. Flath, Sheldon P. Gordon, David O.
Lomen, David Lovelock, William G. McCallum, Brad G. Osgood, Andrew Pasquale,
Jeff Tecosky-Feldman, Joseph Thrash, Karen R. Rhea, Thomas W. Tucker
- CLRS Introduction to Algorithms Third Edition - Thomas H. Cormen, Charles E.
Leiserson, Ronald L. Rivest, Clifford Stein
In addition to these authors, the developers of OpenAI's GPT-3.5 and GPT-4
models also contributed to the creation of the language data, as these models
were used to transform the source material into a dialogue format.
### Annotations
#### Annotation process
The dataset does not contain any explicit annotations. However, the data is
curated and synthesized using OpenAI's GPT-3.5 and GPT-4 models. The process
involves transforming the source material into a dialogue format suitable for
training the phi-1 model. The dataset creator, an independent learner with a
strong interest in computer science, reviewed and curated the synthesized
dialogues to ensure their quality and relevance.
#### Who are the annotators?
The dataset creator, an independent learner who has studied computer science
extensively in a self-directed manner, performed the curation and review of the
synthesized dialogues.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information. All the data
is derived from publicly available textbooks and online resources. Any names or
other potential identifiers in the source material have been removed or
anonymized.
### Social Impact of Dataset
The dataset is intended to support the development of AI models capable of
providing detailed explanations and examples in the context of arithmetic,
algebra, geometry, trigonometry, calculus, algorithms and data structures,
design patterns, and the python programming language. The potential social
impact is significant, as such models could greatly enhance self-directed
learning and provide valuable educational support to students worldwide.
However, it's important to note that the quality and usefulness of the AI models
trained on this dataset will depend on the quality of the data itself. If the
data is inaccurate or biased, the models could propagate these inaccuracies and
biases, potentially leading to misinformation or unfair outcomes.
### Discussion of Biases
The dataset is based on a variety of textbooks and online resources, which may
contain their own inherent biases. For example, textbooks often reflect the
perspectives and biases of their authors, which can influence the way
information is presented. These biases could potentially be reflected in the
dataset and in any models trained on it.
### Other Known Limitations
At this stage of the dataset creation process, it's difficult to identify all
potential limitations. However, one potential limitation is that the dataset may
not cover all possible topics or perspectives within the fields it addresses.
The dataset creator will continue to monitor and assess the dataset for
limitations as the work progresses.
## Additional Information
### Dataset Curators
The dataset was curated by an independent learner with a strong interest in
computer science. The curator has studied the subject matter in a self-directed
manner, using a variety of resources including textbooks and online materials.
The curation process also involved the use of OpenAI's GPT-3.5 and GPT-4 models
to synthesize dialogues based on the source material.
### Licensing Information
This dataset is released under the Creative Commons
Attribution-NonCommercial-ShareAlike 3.0 International (CC BY-NC-SA 3.0)
license.
### Citation Information
As this dataset is a compilation of various sources synthesized and curated for
the purpose of training the phi-1 model, please ensure to cite the original
sources when using this dataset. If referencing the dataset directly, please
refer to this repository.
| 10,687 | [
[
-0.0263519287109375,
-0.06622314453125,
0.006877899169921875,
0.01287841796875,
-0.011932373046875,
-0.0287322998046875,
-0.008819580078125,
-0.0204620361328125,
-0.004913330078125,
0.032806396484375,
-0.049591064453125,
-0.033203125,
-0.028167724609375,
0.0... |
oscar-corpus/colossal-oscar-1.0 | 2023-10-26T14:58:28.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:multilingual",
"size_categories:n>1T",
"source_datasets:original",
"license:cc0-1.0",
"arxiv:2212.10440",
"arxiv:2010.14571",
"region:us"
] | oscar-corpus | null | null | 17 | 16 | 2023-07-11T15:00:56 | ---
license: cc0-1.0
size_categories:
- n>1T
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
extra_gated_prompt: "By filling the form below I understand that Colossal OSCAR 1 is just a partial annotation of the WET files of 10 Common Crawl snapshots, the original data is included here **only for convenience**, and specially for researchers looking for data in lower resource languages. **Only the annotations are distributed under a cc0-1.0 license**, for the rest of the content I have read the [Common Crawl Terms of use](https://commoncrawl.org/terms-of-use/) and I will abide by them. I understand that all uses of the textual content in Colossal OSCAR 1 are subject to the [Common Crawl Terms of use](https://commoncrawl.org/terms-of-use/). I understand that reusing the textual content in Colossal OSCAR 1 might not be legal in all countries/regions and for all use cases. I understand that Colossal OSCAR 1 is mainly targeted towards researchers and meant to be used in research. The OSCAR Project reserves the right to revoke my access to this data. The OSCAR Project reserves the right to modify this data at any time in accordance to take down requests."
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Country: text
Usecase: text
I have explicitly checked that downloading Colossal OSCAR 1 is legal in my jurisdiction, in the country/region where I am located right now, and for the use case that I have described above, I have also read and accepted the Common Crawl Terms of use: checkbox
---
# Dataset Card for Colossal OSCAR 1
## IMPORTANT NOTE: THIS DATASET CARD IS STILL BEING WRITTEN, PLEASE BE PATIENT WHILE WE COMPLETE ALL THE INFORMATION ABOUT THE CORPUS
## Table of Contents
- [Dataset Card for Colossal OSCAR 1](#dataset-card-for-colossal-oscar-1)
- [IMPORTANT NOTE: THIS DATASET CARD IS STILL BEING WRITTEN, PLEASE BE PATIENT WHILE WE COMPLETE ALL THE INFORMATION ABOUT THE CORPUS](#important-note-this-dataset-card-is-still-being-written-please-be-patient-while-we-complete-all-the-information-about-the-corpus)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Issues](#issues)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Layout](#layout)
- [Data Splits](#data-splits)
- [Table](#table)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://oscar-project.org](https://oscar-project.org)
- **Repository:** [https://github.com/oscar-project](https://github.com/oscar-project)
- **Papers:** [Towards a Cleaner Document-Oriented Multilingual Crawled Corpus](https://aclanthology.org/2022.lrec-1.463/), [Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data](https://arxiv.org/abs/2212.10440)
- **Point of Contact:** [Contact](https://oscar-project.org/#contact)
### Dataset Summary
The OSCAR project (**O**pen **S**uper-large **C**rawled **A**ggregated co**R**pus) is an Open Source project aiming to provide web-based multilingual resources and datasets for Machine Learning (ML) and Artificial Intelligence (AI) applications. The project focuses specifically on providing large quantities of unannotated raw data that is commonly used in the pre-training of large deep learning models. The OSCAR project has developed [high-performance data pipelines](https://github.com/oscar-corpus/ungoliant) specifically conceived to classify and filter large amounts of [web data](https://commoncrawl.org/). The project has also paid special attention to improving the data quality of web-based corpora and to providing data for low-resource languages, so that these new ML/AI technologies are accessible to as many communities as possible.
Colossal OSCAR 1 is the largest release of the OSCAR Corpus to date, based on 10 different monthly snapshots of Common Crawl. It currently contains all the features present in OSCAR 23.01, the main difference being its size.
### Downloading the Data
For the moment we haven't finished the Python script to use Colossal OSCAR 1 with `datasets`, so we recommend you use the `huggingface_hub` [Python library](https://huggingface.co/docs/huggingface_hub/index).
If you want to download a considerable amount of data, we recommend you install the `hf_transfer` Python package and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`.
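As a sketch of that workflow (the repository id and the per-language file pattern below are assumptions — verify them against the repository's actual file listing before downloading):

```python
# Sketch: build the arguments for huggingface_hub.snapshot_download to fetch a
# single language subset. The repo_id and the allow_patterns glob are
# assumptions -- check the repository's real file layout.
import os

def oscar_download_kwargs(lang: str) -> dict:
    return {
        "repo_id": "oscar-corpus/colossal-oscar-1.0",
        "repo_type": "dataset",
        "allow_patterns": [f"*/{lang}_meta/*"],  # hypothetical per-language path
        "local_dir": f"colossal-oscar-{lang}",
    }

# Enable the faster transfer backend (requires `pip install hf_transfer`).
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

kwargs = oscar_download_kwargs("gl")
print(kwargs["allow_patterns"])
# Then: from huggingface_hub import snapshot_download; snapshot_download(**kwargs)
```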
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pre-train language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 151 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
### Issues
Colossal OSCAR 1 may have quality issues on low size subcorpora, as it has been the case before. Please consider taking a look at [_Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets_](https://aclanthology.org/2022.tacl-1.4/) to get a better understanding of the current limitations of our language classifier.
Note that since documents are language-identified as a whole, a given language subcorpus is expected to contain some lines in other languages.
As an example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.
**If you encounter something that is unexpected, please file an issue here: https://github.com/oscar-corpus/corpus/issues.**
| Language code | Language | Issues |
| ------------- | -------- | ------ |
| | | |
## Dataset Structure
We show detailed information for all the configurations of the dataset.
### Data Instances
TODO
### Layout
```js
{
"content":"English sentence\nphrase en français\n????????????", // (1)
"warc_headers":{ // (2)
"warc-identified-content-language":"fra,eng",
"warc-target-uri":"https://fr.wikipedia.org/wiki/...",
"warc-record-id":"<urn:uuid:29eaa920-d299-4b1d-b687-c72bd8d68116>",
"warc-type":"conversion",
"content-length":"35298", // (3)
"warc-refers-to":"<urn:uuid:39e42055-0d94-4e45-9c6c-9e7056635d64>",
"warc-block-digest":"sha1:WFH2A5WHCS2H365GIAFYQPI7UOAMFGHB", // (3)
"warc-date":"2022-11-26T09:45:47Z",
"content-type":"text/plain"
},
"metadata":{
"identification":{ // (4)
"label":"fr",
"prob":0.8938327
},
"harmful_pp":4063.1814, // (5)
"tlsh":"tlsh:T125315FF2B6088901EEA097015DB39B4600B...", // (6)
"quality_warnings":[ // (7)
"short_sentences",
"header",
"footer"
],
"categories":[ // (8)
"examen_pix",
"liste_bu"
],
"sentence_identifications":[ // (9)
{
"label":"fr",
"prob":0.99837273
},
{
"label":"en",
"prob":0.9992377
},
null
]
}
}
```
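For illustration, a record with this layout can be filtered on its metadata as follows (a minimal sketch — it assumes each line of an uncompressed subset file is one JSON document, with the field names shown above):

```python
# Sketch: keep only confidently language-identified documents that carry no
# header/footer quality warnings. Field names follow the layout above.
import json

def keep(record: dict, min_prob: float = 0.9) -> bool:
    ident = record["metadata"]["identification"]
    warnings = record["metadata"].get("quality_warnings") or []
    return ident["prob"] >= min_prob and not {"header", "footer"} & set(warnings)

line = '{"content": "Bonjour...", "metadata": {"identification": {"label": "fr", "prob": 0.95}, "quality_warnings": null}}'
print(keep(json.loads(line)))  # True: prob 0.95 >= 0.9 and no quality warnings
```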
### Data Splits
<details>
<summary>Click to expand the number of samples per configuration</summary>
</details>
## Table
## Dataset Creation
### Curation Rationale
OSCAR was constructed using [`Ungoliant`](https://github.com/oscar-corpus/ungoliant), a new pipeline derived from [goclassy](https://github.com/oscar-corpus/goclassy), itself derived from [fastText's pipeline](https://github.com/facebookresearch/fastText).
The pipeline works on documents rather than lines.
`Ungoliant` is implemented in the [Rust programming language](https://rust-lang.org), and uses [rayon](https://github.com/rayon-rs/rayon) as its data parallelism strategy.
Threading is done at shard, record and sentence level, making the whole generation process much more efficient.
Filtering will be explained in a future blog post at our [website](https://oscar-project.org)
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organization's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the plain text extracted from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of Colossal OSCAR 1 the following snapshots were used:
- 05-06-23
- 06-07-22
- 11-12-21
- 10-20
- 05-06-20
- 05-19
- 11-18
- 11-17
- 03-15
- 09-16
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models. We have added annotations to Common Crawl, so please consider using them to select the data that you would like to use for your particular use case.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
Colossal OSCAR 1 was put together by [Pedro Ortiz Suarez](https://portizs.eu/) while working as a researcher at the [Speech and Language Technology Team](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology) at [DFKI GmbH](https://www.dfki.de/en/web) Berlin. This release was also made possible due to the work of [Julien Abadji](https://ujj.space) and the continuous funding of the OSCAR project by [Inria](https://www.inria.fr/en) (project-team [ALMAnaCH](https://almanach.inria.fr/index-en.html)).
Colossal OSCAR 1 is part of the work done by [Pedro Ortiz Suarez](https://portizs.eu/) for the [OpenGPT-X Project](https://opengpt-x.de/en/) which is funded by the German Federal Ministry for Economic Affairs and Climate Action ([BMWK](https://www.bmwk.de/Navigation/EN/Home/home.html)). The authors gratefully acknowledge the [Gauss Centre for Supercomputing e.V.](https://www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS at the Jülich Supercomputing Centre (JSC).
This release of OSCAR was also made possible by the continuous support of the OSCAR team at [Inria](https://www.inria.fr/en) (project-team [ALMAnaCH](https://almanach.inria.fr/index-en.html)), especially by [Julien Abadji](https://ujj.space), [Rua Ismail](https://oscar-project.org/authors/rua/) and [Benoit Sagot](http://pauillac.inria.fr/~sagot/), as well as by members of the OSCAR community, in particular [Sotaro Takeshita](https://sotaro.io/about) and [Sebastian Nagel](https://www.polver.uni-konstanz.de/cnc/people/nagel/).
### Licensing Information
These data are released under the following licensing scheme:
We do not own any of the text from which these data have been extracted.
We license the actual packaging, the metadata and the annotations of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, the OSCAR project, DFKI GmbH and Inria have waived all copyright and related or neighboring rights to OSCAR.
This work is published from: France and Germany.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources.
Please use the [contact information](https://oscar-project.org/#contact) on our website for take down requests.
We strongly advise users to submit take down requests to Common Crawl. For more information please read their [Terms of Use](https://commoncrawl.org/terms-of-use/).
### Citation Information
```
@ARTICLE{2022arXiv221210440J,
author = {{Jansen}, Tim and {Tong}, Yangling and {Zevallos}, Victoria and {Ortiz Suarez}, Pedro},
title = "{Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2022,
month = dec,
eid = {arXiv:2212.10440},
pages = {arXiv:2212.10440},
doi = {10.48550/arXiv.2212.10440},
archivePrefix = {arXiv},
eprint = {2212.10440},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv221210440J},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{abadji-etal-2022-towards,
title = "Towards a Cleaner Document-Oriented Multilingual Crawled Corpus",
author = "Abadji, Julien and
Ortiz Suarez, Pedro and
Romary, Laurent and
Sagot, Beno{\^\i}t",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.463",
pages = "4344--4355",
abstract = "The need for large corpora raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods to Natural Language Processing. And while there have been some recent attempts to manually curate the amount of data necessary to train large language models, the main way to obtain this data is still through automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant that extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that could prove more suitable to pre-train large generative language models as well as hopefully other applications in Natural Language Processing and Digital Humanities.",
}
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@article{kreutzer-etal-2022-quality,
title = "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets",
author = {Kreutzer, Julia and
Caswell, Isaac and
Wang, Lisa and
Wahab, Ahsan and
van Esch, Daan and
Ulzii-Orshikh, Nasanbayar and
Tapo, Allahsera and
Subramani, Nishant and
Sokolov, Artem and
Sikasote, Claytone and
Setyawan, Monang and
Sarin, Supheakmungkol and
Samb, Sokhar and
Sagot, Beno{\^\i}t and
Rivera, Clara and
Rios, Annette and
Papadimitriou, Isabel and
Osei, Salomey and
Suarez, Pedro Ortiz and
Orife, Iroro and
Ogueji, Kelechi and
Rubungo, Andre Niyongabo and
Nguyen, Toan Q. and
M{\"u}ller, Mathias and
M{\"u}ller, Andr{\'e} and
Muhammad, Shamsuddeen Hassan and
Muhammad, Nanda and
Mnyakeni, Ayanda and
Mirzakhalov, Jamshidbek and
Matangira, Tapiwanashe and
Leong, Colin and
Lawson, Nze and
Kudugunta, Sneha and
Jernite, Yacine and
Jenny, Mathias and
Firat, Orhan and
Dossou, Bonaventure F. P. and
Dlamini, Sakhile and
de Silva, Nisansa and
{\c{C}}abuk Ball{\i}, Sakine and
Biderman, Stella and
Battisti, Alessia and
Baruwa, Ahmed and
Bapna, Ankur and
Baljekar, Pallavi and
Azime, Israel Abebe and
Awokoya, Ayodele and
Ataman, Duygu and
Ahia, Orevaoghene and
Ahia, Oghenefego and
Agrawal, Sweta and
Adeyemi, Mofetoluwa},
journal = "Transactions of the Association for Computational Linguistics",
volume = "10",
year = "2022",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2022.tacl-1.4",
doi = "10.1162/tacl_a_00447",
pages = "50--72",
abstract = "With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50{\%} sentences of acceptable quality. In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.",
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
    author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
  author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
  editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
``` | 25,560 | [
[
-0.04522705078125,
-0.041717529296875,
0.01207733154296875,
0.007274627685546875,
-0.023712158203125,
0.00928497314453125,
-0.0101318359375,
-0.0384521484375,
0.050445556640625,
0.0300750732421875,
-0.0238800048828125,
-0.058502197265625,
-0.052642822265625,
... |
health360/Healix-V1 | 2023-07-19T15:16:02.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:odc-by",
"biology",
"medical",
"region:us"
] | health360 | null | null | 1 | 16 | 2023-07-19T01:07:19 | ---
license: odc-by
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 427613608
num_examples: 796239
download_size: 213902701
dataset_size: 427613608
language:
- en
tags:
- biology
- medical
size_categories:
- 100K<n<1M
---
# Healix-V1 Dataset
## Description
Healix-V1 is a rich and diverse dataset consisting of 809k Question-Answer pairs within the medical domain. This dataset has been meticulously curated to fuel research initiatives in the areas of medical language understanding, medical dialogue systems, and knowledge extraction. Healix-V1 serves as a valuable resource for developing and improving machine learning models for healthcare applications, enabling them to understand and generate human-like responses in medical contexts. The dataset follows the format used in ALPACA model fine-tuning:
```plaintext
### Input:
Question
### Response:
Answer
```
## Data Sources
The dataset has been compiled from a variety of valuable and authoritative sources, each contributing different kinds of medical question-answer pairs:
1. **Medical books**: 426,241 QA pairs - These pairs are derived from an array of reputable medical books. The questions were extracted and provided as prompts to GPT-3.5, which in turn generated the corresponding answers.
2. **[jianghc/medical_chatbot](URL)**: 46,867 QA pairs - This is a dataset derived from a medical chatbot project.
3. **The Medical Question and Answering dataset(MQuAD)**: 23,802 QA pairs - MQuAD is a medical dataset specifically designed for the task of question answering.
4. **PubMed**: 1,000 QA pairs - These are pairs extracted from the extensive library of medical articles on PubMed.
5. **GenMedGPT**: 5,000 QA pairs - Derived from the GenMedGPT project aimed at generating medical language.
6. **iCliniq**: 7,321 QA pairs - iCliniq is a platform where users ask health-related questions which are answered by certified doctors.
7. **HealthCareMagic**: 100,000 QA pairs - HealthCareMagic is an interactive health platform with a vast amount of user-generated medical QAs.
8. **medical_meadow_wikidoc**: 10,000 QA pairs - These pairs are extracted from WikiDoc, a free medical textbook.
9. **medical_meadow_wikidoc_medical_flashcards**: 33,955 QA pairs - Medical flashcards provide concise medical information in a Q&A format.
10. **MedQA-USMLE-4-options**: 10,178 QA pairs - These are QAs similar to the format of the USMLE exam for medical licensing in the U.S.
## Potential Applications
Healix-V1 can serve a multitude of purposes such as:
- Training AI models for medical chatbots
- Developing advanced search engines for medical databases
- Creating tutoring systems for medical students
- Enhancing automated patient assistance systems
- Helping in developing systems for medical examination preparation
## Data Length Distribution
- (0.0, 256.0]: 96.724181%
- (256.0, 512.0]: 2.903792%
- (512.0, 768.0]: 0.299476%
- (768.0, 1024.0]: 0.050675%
- (1024.0, 2048.0]: 0.018910%
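The ALPACA-style `### Input:` / `### Response:` layout shown earlier can be split back into question/answer pairs with a small helper (a sketch, assuming exactly one response marker per record):

```python
# Sketch: split one record's text into (question, answer), assuming the
# "### Input:" / "### Response:" markers described in the dataset summary.
def split_qa(text: str) -> tuple[str, str]:
    head, _, answer = text.partition("### Response:")
    question = head.replace("### Input:", "", 1).strip()
    return question, answer.strip()

sample = "### Input:\nWhat is hypertension?\n### Response:\nChronically elevated blood pressure."
question, answer = split_qa(sample)
print(question)  # What is hypertension?
print(answer)    # Chronically elevated blood pressure.
```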
## Metadata
- **License:** ODC-BY
- **Language:** English
- **Tags:** Biology, Medical
- **Size Categories:** 100K<n<1M
## Dataset Info
- **Features:**
- name: text
- dtype: string
- **Splits:**
- name: train
- num_bytes: 419605911
- num_examples: 798902
- **Download Size:** 209261302 bytes
- **Dataset Size:** 419605911 bytes | 3,370 | [
[
-0.02874755859375,
-0.044281005859375,
0.041595458984375,
0.0038242340087890625,
-0.018310546875,
-0.0116729736328125,
0.0220794677734375,
-0.02789306640625,
0.03765869140625,
0.04376220703125,
-0.04302978515625,
-0.05230712890625,
-0.032989501953125,
0.0090... |
ChanceFocus/flare-convfinqa | 2023-07-31T03:49:30.000Z | [
"region:us"
] | ChanceFocus | null | null | 2 | 16 | 2023-07-31T03:49:18 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: turn
dtype: int64
- name: dialogue_id
dtype: int64
splits:
- name: train
num_bytes: 44382083
num_examples: 8891
- name: valid
num_bytes: 11171617
num_examples: 2213
- name: test
num_bytes: 7116753
num_examples: 1490
download_size: 11803908
dataset_size: 62670453
---
# Dataset Card for "flare-convfinqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 622 | [
[
-0.05657958984375,
-0.01151275634765625,
-0.0021839141845703125,
0.0131988525390625,
-0.00921630859375,
0.0131988525390625,
0.0166778564453125,
-0.010650634765625,
0.06121826171875,
0.033203125,
-0.058990478515625,
-0.044708251953125,
-0.030364990234375,
-0.... |
TitanMLData/arxiv_qa | 2023-08-04T11:38:53.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | TitanMLData | null | null | 1 | 16 | 2023-08-04T11:01:34 | ---
task_categories:
- question-answering
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
---
# Arxiv Paper Generative Question Answering
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is made using ChatGPT (text-davinci-003) to generate Question/Answer pairs from Arxiv papers from [this dataset](https://huggingface.co/datasets/ccdv/arxiv-summarization)
### Data Fields
* TextID: references the datarow (paper) in the arxiv summarizer dataset
* Question: question based on the text
* Response: answer
* Text: Full text with the paper as 'context:' and the question appended as 'question:'. Used for generative question answering using language modelling
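As a sketch, the `Text` field described above can be assembled from, and split back into, the paper body and the question (the exact separators and whitespace are assumptions based on this field description):

```python
# Sketch: build and split the combined Text field. The "context:"/"question:"
# prefixes follow the field description; exact whitespace is an assumption.
def build_text(context: str, question: str) -> str:
    return f"context: {context}\nquestion: {question}"

def split_text(text: str) -> tuple[str, str]:
    context, _, question = text.rpartition("question:")
    return context.replace("context:", "", 1).strip(), question.strip()

text = build_text("We introduce a transformer model ...", "What model is introduced?")
print(split_text(text))
```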
### Data Splits
This dataset contains 2 splits: _train_, and _validation_
| Dataset Split | Number of Instances |
| ------------- | --------------------|
| Train | 32,392 |
| Validation | 6,479 | | 1,029 | [
[
-0.03076171875,
-0.056915283203125,
0.00914764404296875,
0.01552581787109375,
-0.01593017578125,
-0.0014743804931640625,
0.023345947265625,
0.006023406982421875,
-0.0067901611328125,
0.0278472900390625,
-0.0380859375,
-0.03521728515625,
-0.0233917236328125,
... |
morizon/databricks-dolly-15k-ja | 2023-08-08T13:59:26.000Z | [
"language:ja",
"license:cc-by-sa-3.0",
"region:us"
] | morizon | null | null | 1 | 16 | 2023-08-06T05:17:38 | ---
license: cc-by-sa-3.0
language:
- ja
---
This dataset was created based on [kunishou/databricks-dolly-15k-ja](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja).
Information from [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) was also used as a reference.
### Main corrections
- [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k#dataset-overview) includes the following note recommending that citation annotations be removed, so they have been removed here.
Reference text (indicated by the `context` field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
The annotations were removed using a regular-expression-based fix. https://github.com/yuichiro2023/normalize_text
- There were multiple rows with duplicated content, which have been removed. In some cases 'instruction', 'input' and 'output' all matched; in others only 'input' and 'output' matched.
- Several rows had an input of ”空白” (blank) and an output of ”はあ” ("huh"); these have been fixed.
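The bracketed-citation cleanup described above can be sketched with a single regular expression (the actual implementation lives in the linked normalize_text repository; this pattern is an illustration, not a copy of it):

```python
# Sketch: strip bracketed Wikipedia citation numbers such as [42] from the
# context field. See the linked normalize_text repo for the real implementation.
import re

CITATION = re.compile(r"\[\d+\]")

def strip_citations(text: str) -> str:
    return CITATION.sub("", text)

print(strip_citations("Water boils at 100 degrees Celsius at sea level.[42][3]"))
# Water boils at 100 degrees Celsius at sea level.
```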
[
-0.0307159423828125,
-0.06842041015625,
0.0010347366333007812,
0.03863525390625,
-0.049102783203125,
-0.027099609375,
-0.00815582275390625,
0.0007023811340332031,
0.0399169921875,
0.0236053466796875,
-0.0517578125,
-0.048858642578125,
-0.050323486328125,
0.0... |
augtoma/usmle_step_3 | 2023-08-11T21:25:10.000Z | [
"region:us"
] | augtoma | null | null | 0 | 16 | 2023-08-11T21:25:04 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: F
dtype: string
- name: G
dtype: string
- name: answer
dtype: string
- name: answer_idx
dtype: string
splits:
- name: test
num_bytes: 156286
num_examples: 122
download_size: 98164
dataset_size: 156286
---
# Dataset Card for "usmle_self_eval_step3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 787 | [
[
-0.0234222412109375,
-0.0182037353515625,
0.02862548828125,
0.0219268798828125,
-0.00946807861328125,
0.00458526611328125,
0.03729248046875,
0.003803253173828125,
0.0306396484375,
0.0391845703125,
-0.050323486328125,
-0.060455322265625,
-0.028045654296875,
0... |
Intel/VALERIE22 | 2023-10-26T14:55:14.000Z | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"task_ids:semantic-segmentation",
"task_ids:instance-segmentation",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"automotive",
"autonomous driving",
"synthetic",
"safe ai",
"validation",
"pedestrian detection",
"2d... | Intel | The VALERIE22 dataset was generated with the VALERIE procedural tools pipeline providing a photorealistic sensor simulation rendered from automatically synthesized scenes. The dataset provides a uniquely rich set of metadata, allowing extraction of specific scene and semantic features (like pixel-accurate occlusion rates, positions in the scene and distance + angle to the camera). This enables a multitude of possible tests on the data and we hope to stimulate research on understanding performance of DNNs. | tba | 2 | 16 | 2023-08-14T09:17:25 | ---
license: cc-by-4.0
task_categories:
- image-segmentation
- object-detection
task_ids:
- semantic-segmentation
- instance-segmentation
tags:
- automotive
- autonomous driving
- synthetic
- safe ai
- validation
- pedestrian detection
- 2d object-detection
- 3d object-detection
- semantic-segmentation
- instance-segmentation
pretty_name: VALERIE22
size_categories:
- 1K<n<10K
---
# VALERIE22 - A photorealistic, richly metadata annotated dataset of urban environments
<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/teaser_c.png">
## Dataset Description
- **Paper:** https://arxiv.org/abs/2308.09632
- **Point of Contact:** korbinian.hagn@intel.com
### Dataset Summary
The VALERIE22 dataset was generated with the VALERIE procedural tools pipeline (see image below) providing a photorealistic sensor simulation rendered from automatically synthesized scenes. The dataset provides a uniquely rich set of metadata, allowing extraction of specific scene and semantic features (like pixel-accurate occlusion rates, positions in the scene and distance + angle to the camera). This enables a multitude of possible tests on the data and we hope to stimulate research on understanding performance of DNNs.
<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/VALERIE_overview1.png">
Each sequence of the dataset contains two rendered images for each scene. One is rendered with the default Blender tonemapping (/png), whereas the second is rendered with our photorealistic sensor simulation (see hagn2022optimized). The image below shows the difference between the two methods.
<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/SensorSimulation.png">
Following are some example images showing the unique characteristics of the different sequences.
|Sequence0052|Sequence0054|Sequence0057|Sequence0058|
|:---:|:---:|:---:|:---:|
|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq52_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq54_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq57_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq58_1.png" width="500">|
|Sequence0059|Sequence0060|Sequence0062|
|:---:|:---:|:---:|
|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq59_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq60_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq62_1.jpg" width="500">|
### Supported Tasks
- pedestrian detection
- 2d object-detection
- 3d object-detection
- semantic-segmentation
- instance-segmentation
- ai-validation
## Dataset Structure
```
VALERIE22
└───intel_results_sequence_0050
│ └───ground-truth
│ │ └───2d-bounding-box_json
│ │ │ └───car-camera000-0000-{UUID}-0000.json
│ │ └───3d-bounding-box_json
│ │ │ └───car-camera000-0000-{UUID}-0000.json
│ │ └───class-id_png
│ │ │ └───car-camera000-0000-{UUID}-0000.png
│ │ └───general-globally-per-frame-analysis_json
│ │ │ └───car-camera000-0000-{UUID}-0000.json
│ │ │ └───car-camera000-0000-{UUID}-0000.csv
│ │ └───semantic-group-segmentation_png
│ │ │ └───car-camera000-0000-{UUID}-0000.png
│ │ └───semantic-instance-segmentation_png
│ │ │ └───car-camera000-0000-{UUID}-0000.png
│ │ │ └───car-camera000-0000-{UUID}-0000
│ │ │ │ └───{Entity-ID}
│ └───sensor
│ │ └───camera
│ │ │ └───left
│ │ │ │ └───png
│ │ │ │ │ └───car-camera000-0000-{UUID}-0000.png
│ │ │ │ └───png_distorted
│ │ │ │ │ └───car-camera000-0000-{UUID}-0000.png
└───intel_results_sequence_0052
└───intel_results_sequence_0054
└───intel_results_sequence_0057
└───intel_results_sequence_0058
└───intel_results_sequence_0059
└───intel_results_sequence_0060
└───intel_results_sequence_0062
```
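Given this layout, the ground-truth files belonging to a sensor image can be located by swapping path components. A minimal sketch follows — the directory names are copied from the tree above, but the frame stem convention and folder set are assumptions to verify against your local copy:

```python
from pathlib import Path

# Ground-truth folders per annotation kind, taken from the tree above.
GT_KINDS = {
    "bbox2d": ("2d-bounding-box_json", ".json"),
    "bbox3d": ("3d-bounding-box_json", ".json"),
    "class_id": ("class-id_png", ".png"),
    "semantic": ("semantic-group-segmentation_png", ".png"),
    "instance": ("semantic-instance-segmentation_png", ".png"),
}

def ground_truth_paths(image_path: str) -> dict:
    """Map a rendered sensor image to its per-frame ground-truth files."""
    p = Path(image_path)
    # .../intel_results_sequence_XXXX/sensor/camera/left/png/<frame>.png
    sequence_root = p.parents[4]  # .../intel_results_sequence_XXXX
    frame_stem = p.stem           # e.g. car-camera000-0000-{UUID}-0000
    return {
        kind: sequence_root / "ground-truth" / folder / (frame_stem + ext)
        for kind, (folder, ext) in GT_KINDS.items()
    }

paths = ground_truth_paths(
    "VALERIE22/intel_results_sequence_0050/sensor/camera/left/png/"
    "car-camera000-0000-abcd-0000.png"
)
```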
### Data Splits
13476 images for training:
```
dataset = load_dataset("Intel/VALERIE22", split="train")
```
8406 images for validation and test:
```
dataset = load_dataset("Intel/VALERIE22", split="validation")
dataset = load_dataset("Intel/VALERIE22", split="test")
```
### Licensing Information
CC BY 4.0
### Citation Information
Relevant publications:
```
@misc{grau2023valerie22,
title={VALERIE22 -- A photorealistic, richly metadata annotated dataset of urban environments},
author={Oliver Grau and Korbinian Hagn},
year={2023},
eprint={2308.09632},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{hagn2022increasing,
title={Increasing pedestrian detection performance through weighting of detection impairing factors},
author={Hagn, Korbinian and Grau, Oliver},
booktitle={Proceedings of the 6th ACM Computer Science in Cars Symposium},
pages={1--10},
year={2022}
}
@inproceedings{hagn2022validation,
title={Validation of Pedestrian Detectors by Classification of Visual Detection Impairing Factors},
author={Hagn, Korbinian and Grau, Oliver},
booktitle={European Conference on Computer Vision},
pages={476--491},
year={2022},
organization={Springer}
}
@incollection{grau2022variational,
title={A variational deep synthesis approach for perception validation},
author={Grau, Oliver and Hagn, Korbinian and Syed Sha, Qutub},
booktitle={Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety},
pages={359--381},
year={2022},
publisher={Springer International Publishing Cham}
}
@incollection{hagn2022optimized,
title={Optimized data synthesis for DNN training and validation by sensor artifact simulation},
author={Hagn, Korbinian and Grau, Oliver},
booktitle={Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety},
pages={127--147},
year={2022},
publisher={Springer International Publishing Cham}
}
@inproceedings{syed2020dnn,
title={DNN analysis through synthetic data variation},
author={Syed Sha, Qutub and Grau, Oliver and Hagn, Korbinian},
booktitle={Proceedings of the 4th ACM Computer Science in Cars Symposium},
pages={1--10},
year={2020}
}
``` | 6,382 | [
[
-0.0518798828125,
-0.040771484375,
0.039703369140625,
-0.015106201171875,
-0.0201416015625,
0.0118408203125,
-0.0014171600341796875,
-0.047210693359375,
0.01027679443359375,
0.0083770751953125,
-0.05206298828125,
-0.05096435546875,
-0.027313232421875,
-0.005... |
luisroque/instruct-python-llama2-20k | 2023-08-18T09:44:00.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | luisroque | null | null | 0 | 16 | 2023-08-17T17:59:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 34661192.7
num_examples: 19000
- name: test
num_bytes: 1824273.3
num_examples: 1000
download_size: 19060329
dataset_size: 36485466
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- en
pretty_name: Instruct Python 500k
size_categories:
- 10K<n<100K
---
# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 20k instructions.
### Structure
- **Question-Answer Pairing**: Questions and answers are paired using the `ParentId` linkage.
- **Quality Focus**: Only top-rated answers for each question are retained.
- **HTML Tag Removal**: All HTML tags in the content are removed.
- **Combined Question Field**: Each question's title and body are merged.
- **Filtering**: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- `score_question`
- `score_answer`
- `question`
- `answer`
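As an illustration, the pairing and filtering steps above can be sketched in plain Python. The `Id`, `ParentId`, `Score`, `Title`, and `Body` column names follow the original Kaggle dump and are assumptions here; the "contains Python code structures" filter is omitted for brevity:

```python
import re

def strip_html(text: str) -> str:
    """Remove HTML tags (a crude regex version, sufficient for this sketch)."""
    return re.sub(r"<[^>]+>", "", text)

def pair_top_answers(questions: list[dict], answers: list[dict]) -> list[dict]:
    """Pair each question with its highest-scored answer via ParentId."""
    best = {}
    for a in answers:
        qid = a["ParentId"]
        if qid not in best or a["Score"] > best[qid]["Score"]:
            best[qid] = a
    pairs = []
    for q in questions:
        a = best.get(q["Id"])
        if a is None or q["Score"] < 0 or a["Score"] < 0:
            continue  # drop unanswered or negatively scored entries
        pairs.append({
            "score_question": q["Score"],
            "score_answer": a["Score"],
            # merge title and body into one combined question field
            "question": strip_html(q["Title"] + "\n" + q["Body"]),
            "answer": strip_html(a["Body"]),
        })
    return pairs

pairs = pair_top_answers(
    [{"Id": 1, "Score": 3, "Title": "Sort a list?", "Body": "<p>example</p>"}],
    [{"ParentId": 1, "Score": 5, "Body": "<p>Use sorted().</p>"},
     {"ParentId": 1, "Score": 2, "Body": "<p>Bubble sort.</p>"}],
)
```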
### Llama2 Transformation
The dataset has been transformed to match the Llama2 prompt structure expected during fine-tuning. The format is the following:
`<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]`
Where:
- `system_prompt` gives context or instructions to the model.
- `user_message` is the user's query following the system prompt, expecting a particular response from the model.
This structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.
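A small helper reproducing this template (a sketch — exact whitespace and newline conventions around the tags vary between implementations):

```python
def llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and user message in the Llama2 instruction format."""
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = llama2_prompt(
    "You are a helpful Python coding assistant.",
    "How do I reverse a string?",
)
```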
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the `python` tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). Attribution is required. The original dataset was posted [here](https://www.kaggle.com/datasets/stackoverflow/pythonquestions).
Keep in touch: [LinkedIn](https://www.linkedin.com/in/luisbrasroque/) | 2,332 | [
[
-0.0208740234375,
-0.055572509765625,
0.0211334228515625,
0.01495361328125,
-0.01389312744140625,
-0.00908660888671875,
-0.01203155517578125,
-0.0251007080078125,
-0.007785797119140625,
0.04547119140625,
-0.0628662109375,
-0.0390625,
-0.03314208984375,
0.020... |
rajuptvs/English-to-hindi-podcast-translation | 2023-08-18T20:07:47.000Z | [
"region:us"
] | rajuptvs | null | null | 0 | 16 | 2023-08-18T20:07:42 | ---
dataset_info:
features:
- name: video_id
dtype: string
- name: English subtitles
dtype: string
- name: Hindi subtitles
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1827416
num_examples: 11427
download_size: 784942
dataset_size: 1827416
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "en-hi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 572 | [
[
-0.055023193359375,
-0.0294342041015625,
0.007472991943359375,
0.0143585205078125,
-0.010528564453125,
-0.00823974609375,
0.0119781494140625,
-0.035675048828125,
0.07989501953125,
0.04107666015625,
-0.053497314453125,
-0.052520751953125,
-0.041351318359375,
... |
tasksource/data | 2023-09-12T07:38:43.000Z | [
"license:other",
"region:us"
] | tasksource | null | null | 1 | 16 | 2023-08-24T15:10:59 | ---
license: other
---
# Tasksource unified loader
```python
load_dataset('tasksource/data', "glue/rte", max_rows=30_00)
``` | 124 | [
[
-0.0088653564453125,
-0.0116729736328125,
-0.00484466552734375,
0.049560546875,
-0.005222320556640625,
-0.022216796875,
-0.01056671142578125,
0.0195770263671875,
0.00116729736328125,
0.050201416015625,
-0.0400390625,
-0.018646240234375,
-0.036590576171875,
0... |
silk-road/Chat-Haruhi-Fusion-A_B | 2023-08-24T16:47:29.000Z | [
"region:us"
] | silk-road | null | null | 3 | 16 | 2023-08-24T16:46:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: context
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 259951538
num_examples: 66519
download_size: 0
dataset_size: 259951538
---
# Dataset Card for "Chat-Haruhi-Fusion-A_B"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 487 | [
[
-0.034637451171875,
-0.0302734375,
-0.00014126300811767578,
0.0172576904296875,
-0.0142364501953125,
0.01495361328125,
0.01082611083984375,
-0.018951416015625,
0.07598876953125,
0.0333251953125,
-0.069580078125,
-0.041748046875,
-0.03564453125,
-0.0264587402... |
factored/saleswiz_is_relevant | 2023-09-14T20:01:32.000Z | [
"region:us"
] | factored | null | null | 0 | 16 | 2023-09-01T19:00:35 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 187533.28961748633
num_examples: 640
- name: validation
num_bytes: 80580.71038251366
num_examples: 275
download_size: 178216
dataset_size: 268114.0
---
# Dataset Card for "saleswiz_is_relevant"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 547 | [
[
-0.02447509765625,
-0.0230560302734375,
-0.002834320068359375,
0.0272064208984375,
-0.0152740478515625,
-0.017120361328125,
0.00791168212890625,
-0.01166534423828125,
0.043609619140625,
0.039520263671875,
-0.064208984375,
-0.05487060546875,
-0.0091094970703125,
... |
DynamicSuperb/SpeakerVerification_LibriSpeech-TestClean | 2023-11-01T08:24:55.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 16 | 2023-09-02T08:58:14 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: file2
dtype: string
- name: audio2
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 1498410406.0
num_examples: 5000
download_size: 691287710
dataset_size: 1498410406.0
---
# Dataset Card for "SpeakerVerification_LibriSpeechTestClean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 652 | [
[
-0.05389404296875,
-0.0163421630859375,
0.01294708251953125,
0.009918212890625,
-0.006069183349609375,
-0.0048065185546875,
-0.01519012451171875,
-0.00795745849609375,
0.06707763671875,
0.030181884765625,
-0.057037353515625,
-0.048980712890625,
-0.03439331054687... |
fiveflow/psychology-dataset | 2023-09-05T05:21:51.000Z | [
"region:us"
] | fiveflow | null | null | 0 | 16 | 2023-09-04T07:22:56 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: 'Unnamed: 0'
dtype: int64
- name: question
dtype: string
- name: type
dtype: string
- name: keywords
dtype: string
- name: resp
dtype: string
- name: new_resp
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3612449
num_examples: 2710
download_size: 1189445
dataset_size: 3612449
---
# Dataset Card for "psychology-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 610 | [
[
-0.049346923828125,
-0.0094757080078125,
0.0263519287109375,
0.03924560546875,
0.0037689208984375,
-0.004756927490234375,
-0.0007658004760742188,
-0.0156097412109375,
0.057159423828125,
0.01375579833984375,
-0.08343505859375,
-0.04705810546875,
-0.04660034179687... |
BrunoGR/Emo_support_11kBalanced | 2023-09-19T22:54:23.000Z | [
"region:us"
] | BrunoGR | null | null | 0 | 16 | 2023-09-11T02:40:22 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: texto
dtype: string
- name: etiqueta
dtype: string
splits:
- name: test
num_bytes: 152156
num_examples: 1309
- name: train
num_bytes: 12765622
num_examples: 121708
- name: validation
num_bytes: 253200
num_examples: 2200
download_size: 8063810
dataset_size: 13170978
---
# Dataset Card for "Emo_support_11kBalanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 707 | [
[
-0.045623779296875,
-0.018463134765625,
0.00847625732421875,
0.0312347412109375,
-0.0173492431640625,
0.00394439697265625,
0.0166168212890625,
-0.011077880859375,
0.07623291015625,
0.035888671875,
-0.06036376953125,
-0.050323486328125,
-0.045867919921875,
0.... |
danlou/safespace-8877-20230920 | 2023-09-20T15:10:39.000Z | [
"region:us"
] | danlou | null | null | 0 | 16 | 2023-09-20T15:09:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ticoAg/zhihu_3k_rlhf_train | 2023-09-21T09:53:46.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:zh",
"license:apache-2.0",
"region:us"
] | ticoAg | null | null | 0 | 16 | 2023-09-21T03:21:09 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- zh
size_categories:
- 1K<n<10K
---
# Note
> Some reward-model (RM) data collected from public datasets.
- Format:
```json
{
"history": [
"query1", "answer1",
"query2", "answer2"
],
"prompt": "query",
"input": "input for query",
"output": [
"output rank1",
"output rank2",
"output rank3"
]
}
```
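Assuming `output` is ordered best-to-worst as shown above, each record can be expanded into (chosen, rejected) preference pairs for reward-model training — a sketch under that assumption:

```python
from itertools import combinations

def to_preference_pairs(record: dict) -> list[dict]:
    """Expand one ranked record into (chosen, rejected) pairs.

    Assumes record["output"] is ordered best-to-worst, as in the
    format shown above.
    """
    prompt = record["prompt"]
    if record.get("input"):
        prompt = f"{prompt}\n{record['input']}"
    pairs = []
    # Every earlier output outranks every later one.
    for chosen, rejected in combinations(record["output"], 2):
        pairs.append({
            "history": record.get("history", []),
            "prompt": prompt,
            "chosen": chosen,
            "rejected": rejected,
        })
    return pairs

pairs = to_preference_pairs({
    "history": [],
    "prompt": "query",
    "input": "",
    "output": ["output rank1", "output rank2", "output rank3"],
})
```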
Thanks to:
- [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese)
- [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn)
- [liyucheng/zhihu_rlhf_3k](https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k) | 700 | [
[
-0.0189666748046875,
-0.034820556640625,
0.029876708984375,
0.01515960693359375,
-0.018218994140625,
-0.0164337158203125,
-0.01142120361328125,
0.0065155029296875,
0.0489501953125,
0.051361083984375,
-0.050994873046875,
-0.062225341796875,
-0.042449951171875,
... |
qgyd2021/chinese_chitchat | 2023-09-22T08:39:11.000Z | [
"size_categories:100M<n<1B",
"language:zh",
"license:apache-2.0",
"chitchat",
"region:us"
] | qgyd2021 | null | @dataset{chinese_chitchat,
author = {Xing Tian},
title = {chinese_chitchat},
month = sep,
year = 2023,
publisher = {Xing Tian},
version = {1.0},
} | 3 | 16 | 2023-09-22T02:24:54 | ---
license: apache-2.0
language:
- zh
tags:
- chitchat
size_categories:
- 100M<n<1B
---
## Chinese Chitchat Dataset
The `role` field takes one of three values: "unknown", "human", "assistant".
The datasets were collected and organized from the web as follows:
| Dataset | Original data / project | Samples | Description | Mirror download |
| :--- | :---: | :---: | :---: | :---: |
| ChatterBot | [ChatterBot](https://github.com/gunthercox/ChatterBot); [chatterbot-corpus](https://github.com/gunthercox/chatterbot-corpus) | 560 | Classified by type, relatively high quality | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); access code: 81ao |
| douban | [Douban Conversation Corpus](https://github.com/MarkWuNLP/MultiTurnResponseSelection) | 3.52M | From a Beihang/Microsoft paper; relatively low noise; multi-turn (7.6 turns on average) | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); access code: 81ao |
| ptt | [PTT Chinese Corpus](https://github.com/zake7749/Gossiping-Chinese-Corpus) | 770K | Open-source project; the Gossiping board of Taiwan's PTT forum; Traditional Chinese; colloquial, with some noise | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); access code: 81ao |
| qingyun | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); access code: 81ao | 100K | Qingyun corpus; fairly good quality, everyday topics | |
| subtitle | [TV drama dialog corpus](https://github.com/aceimnorstuvwxz/dgk_lost_conv) | 2.74M | Crawled movie and US TV-series subtitles; some noise and loosely structured dialog; speakers cannot be matched; multi-turn (5.3 turns on average) | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); access code: 81ao |
| tieba | [Tieba forum reply corpus](https://pan.baidu.com/s/1mUknfwy1nhSM7XzH8xi7gQ); password: i4si | 2.32M | Multi-turn, noisy | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); access code: 81ao |
| weibo | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); access code: 81ao | 4.43M | From a Huawei paper | |
| xiaohuangji | [Xiaohuangji corpus](https://github.com/candlewill/Dialog_Corpus) | 450K | Corpus from the original Renren project; contains some inappropriate dialog and a little noise | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); access code: 81ao |
<details>
<summary>Referenced data sources (expand to view)</summary>
<pre>
<code>
https://github.com/codemayq/chinese_chatbot_corpus
https://github.com/yangjianxin1/GPT2-chitchat
</code>
</pre>
</details>
| 1,787 | [
[
-0.024627685546875,
-0.05810546875,
0.0186004638671875,
0.037872314453125,
-0.01666259765625,
-0.0022640228271484375,
-0.01457977294921875,
-0.0276641845703125,
0.049468994140625,
0.02386474609375,
-0.03289794921875,
-0.0341796875,
-0.032745361328125,
0.0127... |
acrastt/EverythingLM-V3-ShareGPT | 2023-10-24T22:18:47.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | acrastt | null | null | 2 | 16 | 2023-09-25T23:08:51 | ---
license: mit
language:
- en
pretty_name: Everything-V3-
size_categories:
- 1K<n<10K
---
<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
[EverythingLM V3 Data](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V3) converted to ShareGPT format. | 436 | [
[
-0.027130126953125,
-0.058135986328125,
0.033538818359375,
0.033477783203125,
-0.03521728515625,
-0.030670166015625,
0.0008721351623535156,
-0.039581298828125,
0.052642822265625,
0.0213165283203125,
-0.047393798828125,
-0.052459716796875,
-0.041839599609375,
... |
erhwenkuo/multiturn_chat_0.8m-chinese-zhtw | 2023-09-27T15:41:32.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"alpaca",
"fine-tune",
"region:us"
] | erhwenkuo | null | null | 2 | 16 | 2023-09-27T15:07:30 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 956384448
num_examples: 831036
download_size: 588338923
dataset_size: 956384448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
language:
- zh
tags:
- alpaca
- fine-tune
size_categories:
- 100K<n<1M
---
# Dataset Card for "multiturn_chat_0.8m-chinese-zhtw"
## 內容
包含約 80 萬條由 [BELLE](https://github.com/LianjiaTech/BELLE) 專案所產生的 *Human* 與 *Assistant* 的多輪對話。
注意:此資料集是由 ChatGPT 產生的,未經嚴格校驗,內容可能包含錯誤。使用過程中請注意這一點。
**instruction** 中包含多輪對話的上文內容,以 *Human:* 和 *Assistant:* 區分,**output** 中包含當前 *Assistant* 角色的回答。
## 範例
```
{
"instruction": "Human:請根據題材「健身房」,寫一篇800字的文章。 \nAssistant:好的,以下是由我生成的800字「健身房」主題文章:\n在現代社會,人們越來越珍惜健康和身材。健身房就是一個人們可以鍛鍊身體的場所,也是一個社交的場所。在健身房,人們可以透過器材訓練,有氧運動和瑜珈等方式來達到健身的目的。 \n對於喜歡肌力訓練的人來說,選擇健身房的一個原因是因為那裡的器材更加完備,可以滿足訓練的需求。當然,不只是體力上的訓練,健身房也是鍛鍊毅力和耐力的地方。因為每一次運動的過程都需要堅持和勇氣。 \n而對於那些想要減肥的人來說,健身房也是非常重要的。有氧運動例如慢跑,跳操,游泳等都是非常有效的減肥方式。而且因為健身房的壓力比較大,所以很容易讓人保持運動的熱情和動力。 \n有些人則會選擇瑜珈館作為健身的場所,因為它們通常比較安靜,能夠讓人們放鬆。瑜珈有紓解壓力,提升柔軟度,強化肌肉的好處,而且更適合不喜歡高強度訓練的人。 \n總的來說,健身房和瑜珈館是現代社會不可或缺的健身場所。無論您是肌力訓練愛好者,減重追求者還是想要放鬆的人,您都可以在這裡找到適合自己的運動方式。 \nHuman:請從上文摘取三個句子進行評分。 \nAssistant:",
"input": "",
"output": "好的。以下是我摘取的三個句子:\n1. \"在健身房,人們可以透過器材訓練,有氧運動和瑜珈等方式來達到健身的目的。 \"\n2. \"而對於想要減肥的人來說,健身房也是非常重要的。 \"\n3. \"總的來說,健身房和瑜珈館是現代社會不可或缺的健身場所。 \"\n網頁端評分介面還沒準備好,不能在這裡為你進行打分,請見諒。"
}
```
### 欄位
```
instruction: 指令
input: 輸入(此資料集均為空)
output: 輸出
```
## 限制和使用限制
我們要求開發者僅將我們開源的程式碼、資料、模型及後續衍生物用於研究目的,不得用於商業,以及其他會對社會帶來危害的用途。
由於數據是由*ChatGPT*產生的,未經嚴格驗證,在事實性和其他方面仍有一些不足之處。因此,在使用此資料集時,請務必注意甄別。
本資料集不代表任何一方的立場、利益或想法,無關任何團體的任何類型的主張。因使用本資料集帶來的任何損害、糾紛,本專案的開發者不承擔任何責任。
***
# Multiturn Chat 0.8M
## Contents
Includes approx. 0.8M Chinese multiturn dialogs between *human* and *assistant*.
Note: this subset was generated by *ChatGPT* and was not strictly verified. The dialog contents might contain errors. Please take this in mind when using this subset.
**instruction** contains history dialog context, distinguishable by *Human:* and *Assistant:*, **output** contains the current reply by *assistant*.
## Sample
```
{
"instruction": "Human:請根據題材「健身房」,寫一篇800字的文章。 \nAssistant:好的,以下是由我生成的800字「健身房」主題文章:\n在現代社會,人們越來越珍惜健康和身材。健身房就是一個人們可以鍛鍊身體的場所,也是一個社交的場所。在健身房,人們可以透過器材訓練,有氧運動和瑜珈等方式來達到健身的目的。 \n對於喜歡肌力訓練的人來說,選擇健身房的一個原因是因為那裡的器材更加完備,可以滿足訓練的需求。當然,不只是體力上的訓練,健身房也是鍛鍊毅力和耐力的地方。因為每一次運動的過程都需要堅持和勇氣。 \n而對於那些想要減肥的人來說,健身房也是非常重要的。有氧運動例如慢跑,跳操,游泳等都是非常有效的減肥方式。而且因為健身房的壓力比較大,所以很容易讓人保持運動的熱情和動力。 \n有些人則會選擇瑜珈館作為健身的場所,因為它們通常比較安靜,能夠讓人們放鬆。瑜珈有紓解壓力,提升柔軟度,強化肌肉的好處,而且更適合不喜歡高強度訓練的人。 \n總的來說,健身房和瑜珈館是現代社會不可或缺的健身場所。無論您是肌力訓練愛好者,減重追求者還是想要放鬆的人,您都可以在這裡找到適合自己的運動方式。 \nHuman:請從上文摘取三個句子進行評分。 \nAssistant:",
"input": "",
"output": "好的。以下是我摘取的三個句子:\n1. \"在健身房,人們可以透過器材訓練,有氧運動和瑜珈等方式來達到健身的目的。 \"\n2. \"而對於想要減肥的人來說,健身房也是非常重要的。 \"\n3. \"總的來說,健身房和瑜珈館是現代社會不可或缺的健身場所。 \"\n網頁端評分介面還沒準備好,不能在這裡為你進行打分,請見諒。"
}
```
### Schema
```
instruction: the instruction (contains the dialog history)
input: input (empty in this dataset)
output: the assistant's current reply
```
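As an illustration, the concatenated `instruction` field can be split back into turns, assuming turns are delimited exactly by the `Human:` and `Assistant:` markers described above:

```python
import re

def split_turns(instruction: str) -> list[tuple[str, str]]:
    """Split a concatenated dialog string into (role, text) turns."""
    parts = re.split(r"(Human:|Assistant:)", instruction)
    turns, role = [], None
    for piece in parts:
        if piece in ("Human:", "Assistant:"):
            role = piece.rstrip(":")
        elif role is not None and piece.strip():
            turns.append((role, piece.strip()))
    return turns

# The trailing empty "Assistant:" slot (to be filled by `output`) is dropped.
turns = split_turns("Human:hello \nAssistant:hi there\nHuman:bye \nAssistant:")
```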
## Limitation and Usage Limits
We require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project. | 3,757 | [
[
-0.04071044921875,
-0.05145263671875,
0.0197906494140625,
0.0289154052734375,
-0.036651611328125,
-0.0167236328125,
-0.00717926025390625,
-0.027557373046875,
0.03277587890625,
0.03631591796875,
-0.059478759765625,
-0.04058837890625,
-0.04486083984375,
0.0068... |
chats-bug/email_subject_gen | 2023-10-05T11:52:14.000Z | [
"region:us"
] | chats-bug | null | null | 0 | 16 | 2023-09-28T06:56:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: subject_line
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 33264969.9304227
num_examples: 59489
- name: test
num_bytes: 1751347.0695772984
num_examples: 3132
download_size: 10335744
dataset_size: 35016317.0
---
# Dataset Card for "email_subject_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 649 | [
[
-0.0260467529296875,
-0.0187835693359375,
0.010467529296875,
0.01561737060546875,
-0.006847381591796875,
-0.002471923828125,
0.00917816162109375,
0.00789642333984375,
0.051513671875,
0.044830322265625,
-0.07684326171875,
-0.055877685546875,
-0.05096435546875,
... |
ashiyakatuka11/corpusGen_dataset | 2023-10-03T12:01:25.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | 0 | 16 | 2023-09-28T10:08:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: ' Session ID '
dtype: int64
- name: ' Speaker '
dtype: string
- name: ' Utterance_clean'
dtype: string
- name: TAG
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: Utterance
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 251021
num_examples: 1017
- name: test
num_bytes: 64519
num_examples: 255
download_size: 143048
dataset_size: 315540
---
# Dataset Card for "corpusGen_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 861 | [
[
-0.0462646484375,
-0.014923095703125,
0.01092529296875,
0.021820068359375,
-0.0123291015625,
0.01241302490234375,
-0.00452423095703125,
-0.01396942138671875,
0.059783935546875,
0.0311126708984375,
-0.04278564453125,
-0.0723876953125,
-0.051025390625,
-0.0102... |
Rodr16020/code_instructions_7_5k_alpaca_spanish | 2023-10-30T20:53:47.000Z | [
"region:us"
] | Rodr16020 | null | null | 0 | 16 | 2023-09-28T19:05:41 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: instruction_text
dtype: string
- name: llama2_chat_inst
dtype: string
splits:
- name: train
num_bytes: 15796815
num_examples: 7500
download_size: 7459672
dataset_size: 15796815
---
# Dataset Card for "code_instructions_7_5k_alpaca_spanish"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 633 | [
[
-0.04144287109375,
-0.024505615234375,
0.00634765625,
0.052093505859375,
-0.03094482421875,
-0.009857177734375,
0.014739990234375,
-0.0177001953125,
0.055206298828125,
0.045501708984375,
-0.056915283203125,
-0.06982421875,
-0.044036865234375,
0.0111923217773... |
odunola/foodie-large-context | 2023-09-29T09:45:45.000Z | [
"region:us"
] | odunola | null | null | 0 | 16 | 2023-09-29T09:45:44 | ---
dataset_info:
features:
- name: texts
dtype: string
splits:
- name: train
num_bytes: 12575909
num_examples: 2105
download_size: 5056309
dataset_size: 12575909
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "odunola"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 438 | [
[
-0.041168212890625,
-0.0307464599609375,
0.013763427734375,
0.00827789306640625,
-0.00344085693359375,
-0.0170135498046875,
0.01357269287109375,
-0.0328369140625,
0.06646728515625,
0.041351318359375,
-0.0482177734375,
-0.044586181640625,
-0.04901123046875,
-... |
arbml/alpagasus_cleaned_ar_reviewed_v2 | 2023-11-03T00:57:20.000Z | [
"region:us"
] | arbml | null | null | 0 | 16 | 2023-10-01T12:04:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: output_en
dtype: string
- name: input_en
dtype: string
- name: input
dtype: string
- name: instruction_en
dtype: string
- name: Reviewed by
dtype: string
- name: index
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 6665937
num_examples: 6341
download_size: 0
dataset_size: 6665937
---
# Dataset Card for "alpagasus_cleaned_ar_reviewed_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 722 | [
[
-0.034912109375,
-0.01312255859375,
0.005954742431640625,
-0.0057525634765625,
-0.03143310546875,
-0.01262664794921875,
0.029388427734375,
-0.0254364013671875,
0.06268310546875,
0.056060791015625,
-0.044097900390625,
-0.044281005859375,
-0.04034423828125,
-0... |
shossain/govreport-qa-16384 | 2023-10-02T05:41:29.000Z | [
"region:us"
] | shossain | null | null | 0 | 16 | 2023-10-02T05:40:53 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1541722952
num_examples: 7238
download_size: 215326747
dataset_size: 1541722952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 542 | [
[
-0.03302001953125,
0.0007119178771972656,
0.0305328369140625,
0.0159454345703125,
-0.022918701171875,
-0.0029754638671875,
0.033905029296875,
-0.006732940673828125,
0.06085205078125,
0.03411865234375,
-0.040252685546875,
-0.0494384765625,
-0.0289154052734375,
... |
ashiyakatuka11/en_gen_combo_dataset | 2023-10-03T12:19:18.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | 0 | 16 | 2023-10-03T12:19:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Session_ID
dtype: float64
- name: 'Speaker '
dtype: string
- name: UserID
dtype: string
- name: prev_Utterance
dtype: string
- name: Utterance
dtype: string
- name: prevUtt_TAG
dtype: string
- name: TAG
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: ' Session ID '
dtype: float64
- name: ' Speaker '
dtype: string
- name: ' Utterance_clean'
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1187238
num_examples: 5981
- name: test
num_bytes: 299548
num_examples: 1496
download_size: 565409
dataset_size: 1486786
---
# Dataset Card for "en_gen_combo_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,067 | [
[
-0.054473876953125,
-0.00850677490234375,
0.0006985664367675781,
0.01311492919921875,
-0.024749755859375,
0.0267181396484375,
0.0180511474609375,
-0.005481719970703125,
0.07244873046875,
0.041259765625,
-0.056732177734375,
-0.03753662109375,
-0.03057861328125,
... |
ashiyakatuka11/es_gen_combo_dataset | 2023-10-03T12:19:20.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | 0 | 16 | 2023-10-03T12:19:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Corpus Utterance #'
dtype: float64
- name: 'Session Utterance #'
dtype: string
- name: Time
dtype: string
- name: User
dtype: string
- name: Utterance
dtype: string
- name: TAG
dtype: string
- name: Session ID
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: ' Session ID '
dtype: float64
- name: ' Speaker '
dtype: string
- name: ' Utterance_clean'
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 650292
num_examples: 3737
- name: test
num_bytes: 164024
num_examples: 936
download_size: 305120
dataset_size: 814316
---
# Dataset Card for "es_gen_combo_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,072 | [
[
-0.051239013671875,
-0.01244354248046875,
0.007411956787109375,
0.006427764892578125,
-0.0196075439453125,
0.02813720703125,
0.01959228515625,
-0.0024623870849609375,
0.07452392578125,
0.042510986328125,
-0.061767578125,
-0.043975830078125,
-0.03369140625,
-... |
ashiyakatuka11/all_combo_dataset | 2023-10-03T12:19:23.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | 0 | 16 | 2023-10-03T12:19:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Session_ID
dtype: float64
- name: 'Speaker '
dtype: string
- name: UserID
dtype: string
- name: prev_Utterance
dtype: string
- name: Utterance
dtype: string
- name: prevUtt_TAG
dtype: string
- name: TAG
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: 'Corpus Utterance #'
dtype: float64
- name: 'Session Utterance #'
dtype: string
- name: Time
dtype: string
- name: User
dtype: string
- name: Session ID
dtype: string
- name: ' Session ID '
dtype: float64
- name: ' Speaker '
dtype: string
- name: ' Utterance_clean'
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1775341
num_examples: 8701
- name: test
num_bytes: 446298
num_examples: 2177
download_size: 746207
dataset_size: 2221639
---
# Dataset Card for "all_combo_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,269 | [
[
-0.04095458984375,
-0.0098114013671875,
0.0106048583984375,
0.009979248046875,
-0.0269012451171875,
0.0175628662109375,
0.027252197265625,
0.004299163818359375,
0.0826416015625,
0.046295166015625,
-0.049896240234375,
-0.04376220703125,
-0.039276123046875,
-0... |
TrainingDataPro/people-with-guns-segmentation-and-detection | 2023-10-12T07:07:40.000Z | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"legal",
"region:us"
] | TrainingDataPro | The dataset consists of photos depicting **individuals holding guns**. It specifically
focuses on the **segmentation** of guns within these images and the **detection** of
people holding guns.
Each image in the dataset presents a different scenario, capturing individuals from
various *backgrounds, genders, and age groups in different poses* while holding guns.
The dataset is an essential resource for the development and evaluation of computer
vision models and algorithms in fields related to *firearms recognition, security
systems, law enforcement, and safety analysis*. | @InProceedings{huggingface:dataset,
title = {people-with-guns-segmentation-and-detection},
author = {TrainingDataPro},
year = {2023}
} | 1 | 16 | 2023-10-03T14:47:31 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-segmentation
- object-detection
tags:
- code
- finance
- legal
dataset_info:
config_name: people-with-guns-segmentation-and-detection
features:
- name: id
dtype: int32
- name: name
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: width
dtype: uint16
- name: height
dtype: uint16
- name: shapes
sequence:
- name: label
dtype:
class_label:
names:
'0': person
'1': gun
- name: type
dtype: string
- name: points
sequence:
sequence: float32
- name: rotation
dtype: float32
- name: occluded
dtype: uint8
- name: z_order
dtype: int16
- name: attributes
sequence:
- name: name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 42149
num_examples: 11
download_size: 69561417
dataset_size: 42149
---
# People with Guns Segmentation & Detection Dataset
The dataset consists of photos depicting **individuals holding guns**. It specifically focuses on the **segmentation** of guns within these images and the **detection** of people holding guns.
Each image in the dataset presents a different scenario, capturing individuals from various *backgrounds, genders, and age groups in different poses* while holding guns.
The dataset is an essential resource for the development and evaluation of computer vision models and algorithms in fields related to *firearms recognition, security systems, law enforcement, and safety analysis*.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=people-with-guns-segmentation-and-detection) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
- **images** - contains the original images of people holding guns
- **labels** - includes the visualized labels created for the original images
- **annotations.xml** - contains the coordinates of the polygons and bounding boxes created for the original photos
# Data Format
Each image from the `images` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes and polygons. For each point, the x and y coordinates are provided.
### Classes:
- **person**: the person holding the gun, labeled with a bounding box,
- **gun**: the gun, labeled with a polygon
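Since the exact schema of `annotations.xml` appears in this card only as a screenshot, the snippet below is a minimal parsing sketch, assuming a CVAT-style layout with `<box>` elements for people and `<polygon>` elements for guns; the element names, attribute names, and the inline sample annotation are assumptions, not taken from the dataset itself.

```python
import xml.etree.ElementTree as ET

# Hypothetical CVAT-style annotation fragment; the real annotations.xml
# may use different element or attribute names.
SAMPLE = """
<annotations>
  <image id="0" name="example.jpg" width="1280" height="720">
    <box label="person" xtl="100.0" ytl="50.0" xbr="400.0" ybr="700.0"/>
    <polygon label="gun" points="210.0,300.0;260.0,290.0;255.0,340.0"/>
  </image>
</annotations>
"""

def parse_annotations(xml_text):
    """Collect bounding boxes and polygons per image as flat records."""
    shapes = []
    for image in ET.fromstring(xml_text).iter("image"):
        # Bounding boxes: two corner points (top-left, bottom-right).
        for box in image.iter("box"):
            shapes.append({
                "image": image.get("name"),
                "label": box.get("label"),
                "type": "box",
                "points": [(float(box.get("xtl")), float(box.get("ytl"))),
                           (float(box.get("xbr")), float(box.get("ybr")))],
            })
        # Polygons: semicolon-separated "x,y" vertex pairs.
        for poly in image.iter("polygon"):
            pts = [tuple(map(float, p.split(",")))
                   for p in poly.get("points").split(";")]
            shapes.append({
                "image": image.get("name"),
                "label": poly.get("label"),
                "type": "polygon",
                "points": pts,
            })
    return shapes

shapes = parse_annotations(SAMPLE)
```

The same loop can be pointed at the real file with `ET.parse("annotations.xml").getroot()` once the actual element names are confirmed.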
# Example of XML file structure

# People with Guns Segmentation & Detection can be made in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=people-with-guns-segmentation-and-detection)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** | 3,491 | [
[
-0.04840087890625,
-0.0193939208984375,
0.0330810546875,
-0.016998291015625,
-0.032135009765625,
0.019805908203125,
0.006500244140625,
-0.030853271484375,
0.0038051605224609375,
0.0439453125,
-0.03643798828125,
-0.080078125,
-0.0667724609375,
-0.007743835449... |
cadaeic/2000-sample-synthetic-recipe-dataset | 2023-10-04T22:44:10.000Z | [
"language:en",
"region:us"
] | cadaeic | null | null | 0 | 16 | 2023-10-04T09:44:05 | ---
language:
- en
---
Dataset pairing GPT-4 synthesized instructions with outputs from [RecipeNLG](https://www.kaggle.com/datasets/paultimothymooney/recipenlg) in Axolotl's "alpaca" jsonl format | 195 | [
[
-0.016937255859375,
-0.0498046875,
0.037445068359375,
0.0240936279296875,
-0.0263671875,
-0.016265869140625,
0.00205230712890625,
-0.0034847259521484375,
0.046600341796875,
0.060150146484375,
-0.0772705078125,
-0.065673828125,
-0.011566162109375,
0.018768310... |
adamo1139/PS_AD_Office365_03 | 2023-10-05T00:20:42.000Z | [
"region:us"
] | adamo1139 | null | null | 0 | 16 | 2023-10-05T00:18:51 | Previous version with a subset of spicyboros 2.2 coding samples plus a few other new PowerShell scripting samples. Some formatting fixes. | 142 | [
[
-0.002460479736328125,
-0.0225372314453125,
0.0195770263671875,
0.01329803466796875,
-0.01052093505859375,
0.0140228271484375,
-0.0244293212890625,
0.00579833984375,
0.048309326171875,
0.07696533203125,
-0.046905517578125,
-0.040283203125,
-0.0291290283203125,
... |
saumya1999/QA_Saumya | 2023-10-05T15:07:38.000Z | [
"region:us"
] | saumya1999 | null | null | 0 | 16 | 2023-10-05T15:07:14 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
cvinker/lit_test | 2023-10-10T13:19:56.000Z | [
"region:us"
] | cvinker | null | null | 0 | 16 | 2023-10-05T15:56:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: QUESTION
dtype: string
- name: ANSWER
dtype: string
splits:
- name: train
num_bytes: 15413372
num_examples: 14051
download_size: 5719059
dataset_size: 15413372
---
# Dataset Card for "lit_test"
Collection of questions and answers about top novels from history, as well as many direct excerpts from novels and textbooks.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 605 | [
[
-0.040283203125,
-0.0576171875,
0.01543426513671875,
-0.0081939697265625,
-0.001155853271484375,
-0.0085906982421875,
0.0091552734375,
0.01023101806640625,
0.033721923828125,
0.04962158203125,
-0.05230712890625,
-0.02984619140625,
-0.0021762847900390625,
-0.... |
napatswift/budget-seq2seq-json | 2023-10-05T16:34:27.000Z | [
"region:us"
] | napatswift | null | null | 0 | 16 | 2023-10-05T16:33:07 | ---
dataset_info:
features:
- name: line_item
sequence: string
- name: target
dtype: string
- name: input
dtype: string
- name: format
dtype: string
splits:
- name: train
num_bytes: 231359400.0
num_examples: 19075
download_size: 47272901
dataset_size: 231359400.0
---
# Dataset Card for "budget-seq2seq-json"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 483 | [
[
-0.038787841796875,
-0.0090484619140625,
0.0164642333984375,
0.0206298828125,
-0.006702423095703125,
0.015655517578125,
0.027069091796875,
0.00563812255859375,
0.05657958984375,
0.05291748046875,
-0.058135986328125,
-0.02783203125,
-0.030487060546875,
-0.017... |
atulsinghphd/demo-new | 2023-10-05T20:03:41.000Z | [
"region:us"
] | atulsinghphd | null | null | 0 | 16 | 2023-10-05T20:03:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 32616.0
num_examples: 172
- name: test
num_bytes: 8154.0
num_examples: 43
download_size: 12874
dataset_size: 40770.0
---
# Dataset Card for "demo-new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 529 | [
[
-0.047088623046875,
-0.0163726806640625,
0.00794219970703125,
0.0076904296875,
-0.0202484130859375,
0.008148193359375,
0.01554107666015625,
-0.0031604766845703125,
0.068115234375,
0.026519775390625,
-0.0692138671875,
-0.05889892578125,
-0.033660888671875,
-0... |
Intuit-GenSRF/jigsaw-unintended-bias | 2023-10-05T23:16:04.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 16 | 2023-10-05T23:13:50 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 613511925
num_examples: 1999516
download_size: 417235573
dataset_size: 613511925
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jigsaw-unintended-bias"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 497 | [
[
-0.0389404296875,
-0.0226593017578125,
0.0185394287109375,
0.0167236328125,
-0.0263214111328125,
-0.00665283203125,
0.0249481201171875,
-0.00669097900390625,
0.06549072265625,
0.033172607421875,
-0.058319091796875,
-0.0372314453125,
-0.048553466796875,
-0.03... |
Trelis/stanford-NIL-disclosure-sft | 2023-10-06T09:17:36.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"fine-tuning",
"NIL",
"region:us"
] | Trelis | null | null | 0 | 16 | 2023-10-06T09:16:13 | ---
task_categories:
- text-generation
language:
- en
tags:
- fine-tuning
- NIL
size_categories:
- n<1K
---
# NIL Policy
Data is taken from the [Stanford website](https://gostanford.com/sports/2022/11/11/nil-student-athletes.aspx).
The maximum number of tokens (prompt + completion) in a row of data/train.csv is 100
The maximum number of tokens (prompt + completion) in a row of data/test.csv is 89
For educational and non-commercial use only. | 447 | [
[
-0.0269775390625,
-0.0330810546875,
0.02008056640625,
0.041473388671875,
-0.003353118896484375,
0.00896453857421875,
-0.0032367706298828125,
-0.01029205322265625,
0.0182342529296875,
0.033050537109375,
-0.0535888671875,
-0.031158447265625,
-0.01361846923828125,
... |
BounharAbdelaziz/English-to-Moroccan-Darija | 2023-10-07T23:51:00.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:ar",
"region:us"
] | BounharAbdelaziz | null | null | 2 | 16 | 2023-10-07T23:48:46 | ---
dataset_info:
features:
- name: english
dtype: string
- name: darija
dtype: string
splits:
- name: train
num_bytes: 636610
num_examples: 10062
download_size: 447249
dataset_size: 636610
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- translation
language:
- ar
size_categories:
- 10K<n<100K
---
# Dataset Card for "English-to-Moroccan-Darija"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 566 | [
[
-0.047454833984375,
-0.01288604736328125,
0.004543304443359375,
0.043365478515625,
-0.0191497802734375,
-0.001773834228515625,
0.0009531974792480469,
-0.017974853515625,
0.06707763671875,
0.044708251953125,
-0.05322265625,
-0.07672119140625,
-0.060333251953125,
... |
truebrown22x/try | 2023-10-09T09:33:50.000Z | [
"region:us"
] | truebrown22x | null | null | 0 | 16 | 2023-10-09T08:04:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
FinGPT/fingpt-convfinqa | 2023-10-10T06:44:37.000Z | [
"region:us"
] | FinGPT | null | null | 0 | 16 | 2023-10-10T06:37:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 52762154
num_examples: 11104
- name: test
num_bytes: 6733552
num_examples: 1490
download_size: 10979923
dataset_size: 59495706
---
# Dataset Card for "fingpt-convfinqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 623 | [
[
-0.059783935546875,
-0.02166748046875,
0.0067901611328125,
0.005985260009765625,
-0.0282745361328125,
0.0005230903625488281,
0.0179901123046875,
-0.004852294921875,
0.042633056640625,
0.0399169921875,
-0.058380126953125,
-0.04901123046875,
-0.040313720703125,
... |
TrainingDataPro/generated-passports-segmentation | 2023-10-31T14:09:11.000Z | [
"task_categories:image-segmentation",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"legal",
"code",
"region:us"
] | TrainingDataPro | null | null | 2 | 16 | 2023-10-10T10:21:17 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-segmentation
tags:
- finance
- legal
- code
dataset_info:
features:
- name: id
dtype: uint16
- name: name
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: width
dtype: uint16
- name: height
dtype: uint16
- name: shapes
sequence:
- name: type
sequence: string
- name: points
sequence:
sequence: float32
- name: rotation
dtype: int32
- name: occluded
dtype: int32
- name: z_order
dtype: int32
- name: attributes
sequence:
sequence:
- name: name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 174234798.0
num_examples: 22
download_size: 169788746
dataset_size: 174234798.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# GENERATED USA Passports Segmentation
The dataset contains a collection of images representing **GENERATED USA Passports**. Each passport image is segmented into different zones, including the **passport zone, photo, name, surname, date of birth, sex, nationality, passport number, and MRZ (Machine Readable Zone)**.
The dataset can be utilized for *computer vision, object detection, data extraction and machine learning models*.
Generated passports can assist in conducting research without accessing or compromising real user data that is often sensitive and subject to privacy regulations. **Synthetic data generation** allows researchers to *develop and refine models using simulated passport data without risking privacy leaks*.

### The dataset is solely for informational or educational purposes and should not be used for any fraudulent or deceptive activities.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=generated-passports-segmentation) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
- **images** - contains the generated passport images
- **labels** - includes the segmentation masks created for the original images
- **annotations.xml** - contains the coordinates of the polygons created for the original photos
# Data Format
Each image from the `images` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the coordinates of the polygons and their labels. For each point, the x and y coordinates are provided.
### Classes:
- **passport**: passport zone,
- **photo**: photo of the person,
- **number**: number of the passport,
- **name**: name of the person,
- **surname**: surname of the person,
- **date_of_birth**: date of birth of the person,
- **nationality**: nationality of the person,
- **sex**: sex of the person,
- **mrz**: the machine-readable zone (MRZ) of the passport,
- **other**: other text in the passport
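Once the polygon vertices for each zone have been read out of `annotations.xml`, a quick sanity check is to compute each zone's area with the shoelace formula (e.g. the photo zone should be much larger than the passport-number zone). The sketch below assumes points as `(x, y)` float pairs; the sample rectangle is hypothetical, not taken from the dataset.

```python
def polygon_area(points):
    """Area of a simple polygon given as [(x, y), ...] vertices (shoelace formula)."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical rectangular "photo" zone, 200 x 250 pixels.
photo_zone = [(50.0, 80.0), (250.0, 80.0), (250.0, 330.0), (50.0, 330.0)]
print(polygon_area(photo_zone))  # 50000.0
```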
# Example of XML file structure

# GENERATED USA Passports Segmentation can be made in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=generated-passports-segmentation)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 3,897 | [
[
-0.043731689453125,
-0.0318603515625,
0.035614013671875,
0.0002963542938232422,
-0.03729248046875,
0.0179290771484375,
0.0112762451171875,
-0.0312347412109375,
0.0167694091796875,
0.058929443359375,
-0.054718017578125,
-0.0775146484375,
-0.0193939208984375,
... |
datastax/philosopher-quotes | 2023-10-11T07:55:38.000Z | [
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:cc",
"code",
"region:us"
] | datastax | null | null | 0 | 16 | 2023-10-10T12:02:19 | ---
license: cc
task_categories:
- conversational
language:
- en
tags:
- code
pretty_name: Philosophers Quotes
size_categories:
- n<1K
---
450 quotes by 9 philosophers (50 quotes each), labeled with the author and with a variable number of topic tags.
The quotes originally come from https://www.kaggle.com/datasets/mertbozkurt5/quotes-by-philosophers (CC BY-NC-SA 4.0).
The text of each quote has been cleaned of soft hyphens (`\xad`) and other stray characters.
The topic labeling has been done with a default HuggingFace zero-shot classifier pipeline with multi_labels. | 574 | [
[
-0.0614013671875,
-0.038970947265625,
0.04339599609375,
-0.009002685546875,
-0.035430908203125,
-0.00176239013671875,
-0.00506591796875,
-0.0176239013671875,
0.0240478515625,
0.06964111328125,
-0.06231689453125,
-0.0122528076171875,
-0.042327880859375,
0.012... |
nadsoft/Jordan-Audio | 2023-10-11T08:20:58.000Z | [
"region:us"
] | nadsoft | null | null | 0 | 16 | 2023-10-11T08:17:45 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 669684377.68
num_examples: 5044
download_size: 660360475
dataset_size: 669684377.68
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jo_aud"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 479 | [
[
-0.041351318359375,
-0.039886474609375,
0.00629425048828125,
0.0130615234375,
-0.01165008544921875,
-0.0076141357421875,
0.018585205078125,
-0.01280975341796875,
0.0672607421875,
0.05010986328125,
-0.054168701171875,
-0.0562744140625,
-0.044281005859375,
-0.... |
cxllin/medinstruct | 2023-10-13T16:36:08.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"medical",
"arxiv:2009.13081",
"region:us"
] | cxllin | null | null | 2 | 16 | 2023-10-12T00:44:06 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- medical
size_categories:
- 10K<n<100K
---
### Dataset Sources
- **Repository:** https://github.com/jind11/MedQA
- **Paper:** https://arxiv.org/abs/2009.13081
## Citation
@article{jin2020disease,
title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={arXiv preprint arXiv:2009.13081},
year={2020}
} | 584 | [
[
-0.01107025146484375,
-0.05316162109375,
0.044158935546875,
-0.0211639404296875,
0.00492095947265625,
-0.02056884765625,
0.004169464111328125,
-0.00766754150390625,
0.027984619140625,
0.039947509765625,
-0.0406494140625,
-0.042266845703125,
-0.02618408203125,
... |
hippocrates/MedNLI_train | 2023-10-18T19:47:44.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 16 | 2023-10-12T15:46:06 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8375998
num_examples: 11232
- name: valid
num_bytes: 1054726
num_examples: 1395
- name: test
num_bytes: 1050034
num_examples: 1422
download_size: 3057999
dataset_size: 10480758
---
# Dataset Card for "MedNLI_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 619 | [
[
-0.035064697265625,
0.0038013458251953125,
0.01256561279296875,
0.00531005859375,
-0.005947113037109375,
-0.01025390625,
0.01373291015625,
-0.01055908203125,
0.05950927734375,
0.0284881591796875,
-0.0689697265625,
-0.038604736328125,
-0.03179931640625,
-0.01... |
surathisin/dataset-llama-test-1 | 2023-10-13T06:56:32.000Z | [
"region:us"
] | surathisin | null | null | 0 | 16 | 2023-10-13T02:04:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pvduy/synth_code_preference_4k | 2023-10-13T11:17:21.000Z | [
"region:us"
] | pvduy | null | null | 0 | 16 | 2023-10-13T11:17:19 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 14409668
num_examples: 4052
download_size: 3223970
dataset_size: 14409668
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "synth_code_preference_4k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 530 | [
[
-0.053741455078125,
0.002178192138671875,
0.0250091552734375,
0.021575927734375,
-0.009765625,
0.002712249755859375,
0.0101318359375,
-0.0034580230712890625,
0.059539794921875,
0.03662109375,
-0.050048828125,
-0.052490234375,
-0.018585205078125,
0.0064010620... |
ahhany/engd_researches | 2023-10-15T08:09:04.000Z | [
"region:us"
] | ahhany | null | null | 0 | 16 | 2023-10-15T08:07:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tyzhu/squad_title_v4_train_30_eval_10_deduped | 2023-10-17T06:06:35.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 16 | 2023-10-17T06:06:25 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 300178.52173913043
num_examples: 199
- name: validation
num_bytes: 50807
num_examples: 50
download_size: 98978
dataset_size: 350985.52173913043
---
# Dataset Card for "squad_title_v4_train_30_eval_10_deduped"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 791 | [
[
-0.02923583984375,
-0.00666046142578125,
0.01149749755859375,
0.034515380859375,
-0.00109100341796875,
0.02801513671875,
0.0229949951171875,
0.005828857421875,
0.03765869140625,
0.0299530029296875,
-0.074951171875,
-0.047393798828125,
-0.035186767578125,
0.0... |
artivus/rebel-sharegpt | 2023-10-28T10:36:30.000Z | [
"license:apache-2.0",
"region:us"
] | artivus | null | null | 0 | 16 | 2023-10-17T06:31:46 | ---
license: apache-2.0
---
The MRebel dataset adapted to the ShareGPT format
| 65 | [
[
-0.049560546875,
-0.002246856689453125,
0.00251007080078125,
0.044464111328125,
0.0013294219970703125,
-0.0194244384765625,
-0.02276611328125,
-0.0241241455078125,
0.01605224609375,
0.0184173583984375,
-0.046844482421875,
-0.03497314453125,
-0.05462646484375,
... |
mnoukhov/openai_summarize_comparisons_relabel_pythia410m | 2023-10-17T20:10:34.000Z | [
"region:us"
] | mnoukhov | null | null | 0 | 16 | 2023-10-17T20:10:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 157425966
num_examples: 92534
- name: test
num_bytes: 8367345
num_examples: 5000
download_size: 21757616
dataset_size: 165793311
---
# Dataset Card for "openai_summarize_comparisons_relabel_pythia410m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 654 | [
[
-0.0311126708984375,
-0.0023021697998046875,
0.0007495880126953125,
0.0099334716796875,
-0.016998291015625,
-0.00986480712890625,
0.0036640167236328125,
-0.0030536651611328125,
0.060455322265625,
0.0262451171875,
-0.034576416015625,
-0.055389404296875,
-0.041778... |
imdatta0/openbqa_sciq | 2023-10-18T09:42:51.000Z | [
"region:us"
] | imdatta0 | null | null | 0 | 16 | 2023-10-18T09:42:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
skrishna/challenging_toxic_samples | 2023-10-19T13:42:15.000Z | [
"region:us"
] | skrishna | null | null | 0 | 16 | 2023-10-19T13:41:36 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
coastalcph/fm_templates | 2023-10-24T07:03:22.000Z | [
"region:us"
] | coastalcph | null | null | 0 | 16 | 2023-10-20T07:38:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
fashxp/pimcore-docs-embeddings | 2023-10-20T12:19:11.000Z | [
"region:us"
] | fashxp | null | null | 0 | 16 | 2023-10-20T12:15:18 | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: heading
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 11771586
num_examples: 3100
download_size: 12430683
dataset_size: 11771586
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pimcore-docs-embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 565 | [
[
-0.03997802734375,
-0.0211334228515625,
0.0194091796875,
0.004970550537109375,
-0.01885986328125,
-0.0004398822784423828,
0.00402069091796875,
0.0176239013671875,
0.054779052734375,
0.023834228515625,
-0.045867919921875,
-0.060302734375,
-0.059326171875,
-0.... |
nlplabtdtu/xquad_benchmark | 2023-10-20T16:36:47.000Z | [
"region:us"
] | nlplabtdtu | null | null | 0 | 16 | 2023-10-20T16:35:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
diwank/expertllama-chatml | 2023-10-22T05:31:03.000Z | [
"region:us"
] | diwank | null | null | 0 | 16 | 2023-10-22T05:30:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: chatml
list:
- name: content
dtype: string
- name: name
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 100609582
num_examples: 52002
download_size: 50005152
dataset_size: 100609582
---
# Dataset Card for "expertllama-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 560 | [
[
-0.0419921875,
-0.03338623046875,
0.01036834716796875,
0.0118255615234375,
0.0016317367553710938,
0.01166534423828125,
-0.006076812744140625,
-0.0126800537109375,
0.062469482421875,
0.033111572265625,
-0.05352783203125,
-0.0626220703125,
-0.046905517578125,
... |
zelros/pj-groupama | 2023-11-02T21:51:11.000Z | [
"region:us"
] | zelros | null | null | 0 | 16 | 2023-10-23T20:13:42 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ginger-turmeric/LibriSpeech | 2023-10-24T07:56:03.000Z | [
"region:us"
] | ginger-turmeric | LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz,
prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read
audiobooks from the LibriVox project, and has been carefully segmented and aligned. | @inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
} | 0 | 16 | 2023-10-23T20:47:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
nlplabtdtu/closed-QA-vi | 2023-10-24T08:54:13.000Z | [
"region:us"
] | nlplabtdtu | null | null | 0 | 16 | 2023-10-24T05:08:44 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
- name: hint
dtype: string
- name: ok
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 11217015
num_examples: 6380
download_size: 5360083
dataset_size: 11217015
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "closed-QA-vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 655 | [
[
-0.0279541015625,
-0.016571044921875,
0.02655029296875,
0.0025196075439453125,
-0.019073486328125,
-0.0048370361328125,
0.01375579833984375,
0.006664276123046875,
0.05450439453125,
0.039794921875,
-0.06402587890625,
-0.06097412109375,
-0.003978729248046875,
... |
lilpotato03/sample_medchatbot_llm2 | 2023-10-24T10:09:03.000Z | [
"region:us"
] | lilpotato03 | null | null | 0 | 16 | 2023-10-24T07:36:53 |
{"inputs": "###Instruction: Managing stress through creativity###Response: When you're feeling stressed or on the edge of anxiety, consider turning to creative outlets like drawing, painting, or even adult coloring books. It can provide a sense of relief and help you focus your thoughts."},
{"inputs": "###Instruction: Coping with post-breakup solitude###Response: After a breakup, loneliness can be overwhelming. If you have free evenings, try finding activities that keep you engaged. This might include exploring new hobbies or spending more time with friends."},
{"inputs": "###Instruction: Discovering personal purpose in life###Response: Finding your life's purpose can be challenging. Instead of dwelling on the idea that life is without meaning, take small steps towards self-discovery. Set goals, learn new things, and explore various interests."},
{"inputs": "###Instruction: Navigating fear and anxiety as a person of color###Response: Fear and anxiety in the face of adversity can be daunting. Your experiences as a person of color are valuable, and it's essential to share them with supportive communities or seek guidance from those who understand your perspective."},
{"inputs": "###Instruction: Dealing with uncertainty and insecurity###Response: When you're uncertain about your path in life and struggling with self-doubt, it's important to remember that many people go through similar phases. Explore different interests, take small steps, and don't hesitate to seek guidance and support."},
{"inputs": "###Instruction: Finding meaning in life's challenges###Response: Life's challenges can make it difficult to see a clear purpose. However, they also offer opportunities for growth. Embrace difficulties as chances to learn and discover your strengths. Your journey is unique, and there is meaning in every experience."},
{"inputs": "###Instruction: Coping with the unknown###Response: When you're faced with the unknown, it can be unsettling. Remember that uncertainty is a part of life, and it's a chance to explore and grow. Take one step at a time, seek advice, and maintain a positive outlook."},
{"inputs": "###Instruction: Enhancing mental well-being through mindfulness###Response: Incorporating mindfulness practices into your daily routine can help improve your mental well-being. Techniques like meditation, deep breathing, and focusing on the present moment can reduce stress and increase self-awareness."},
{"inputs": "###Instruction: Coping with anxiety in social situations###Response: Social anxiety can be challenging, but it's possible to overcome it. Gradual exposure to social situations, self-compassion, and cognitive-behavioral therapy can be effective tools for managing anxiety in social settings."},
{"inputs": "###Instruction: Building resilience in the face of adversity###Response: Resilience is a valuable trait for mental health. You can enhance it by developing problem-solving skills, seeking support from friends and family, and maintaining a positive outlook even in challenging times."},
{"inputs": "###Instruction: Nurturing self-esteem and self-acceptance###Response: Developing a healthy self-esteem is crucial for mental well-being. Focus on self-acceptance, practice self-compassion, and challenge negative self-talk to build a more positive self-image."},
{"inputs": "###Instruction: Coping with depression and seeking help###Response: If you're dealing with depression, remember that seeking professional help is a critical step. Therapists, counselors, and support groups can provide the assistance and guidance needed to overcome depression."},
{"inputs": "###Instruction: Building strong relationships and emotional connections###Response: Positive relationships are essential for mental health. Invest in building strong connections with friends and loved ones by actively listening, showing empathy, and communicating openly."},
{"inputs": "###Instruction: Handling stress at work and maintaining work-life balance###Response: Balancing work and personal life is vital for mental well-being. Set boundaries, prioritize self-care, and practice time management to reduce work-related stress."},
{"inputs": "###Instruction: Recognizing signs of suicidal thoughts###Response: It's crucial to be aware of the signs of suicidal thoughts in yourself or others. These signs may include talking about death or dying, expressing hopelessness, or giving away belongings. If you notice these signs, seek help immediately."},
{"inputs": "###Instruction: Providing support to someone in crisis###Response: If someone you know is in crisis, it's essential to offer your support. Listen without judgment, encourage them to talk about their feelings, and help them connect with professional resources, such as a crisis helpline or a therapist."},
{"inputs": "###Instruction: Promoting open conversations about mental health###Response: Open and honest conversations about mental health can reduce the stigma surrounding it. Encourage dialogue with friends and loved ones about their emotional well-being. It's okay not to be okay, and seeking help is a sign of strength."},
{"inputs": "###Instruction: Preventing access to lethal means###Response: If you or someone you know is at risk of suicide, it's crucial to restrict access to lethal means such as firearms, medications, or sharp objects. This can save lives by creating a safer environment."},
{"inputs": "###Instruction: Encouraging professional help for suicidal thoughts###Response: Suicidal thoughts are a serious concern that requires professional intervention. Encourage anyone struggling with these thoughts to seek help from a therapist, counselor, or a crisis hotline. You are not alone in this journey."},
{"inputs": "###Instruction: Supporting mental health initiatives###Response: Advocate for mental health initiatives and organizations that raise awareness and provide resources for those at risk of suicide. Your support can make a significant difference in saving lives."},
{"inputs": "###Instruction: Fostering hope and resilience###Response: In times of despair, it's crucial to foster hope and resilience. Connect with supportive communities, engage in self-care, and remember that healing is possible. Your life is valuable, and there is hope for a better future."},
{"inputs": "###Instruction: Recognizing the impact of bullying on mental health###Response: Bullying can have a severe impact on mental health and may contribute to suicidal thoughts. Take a stand against bullying, offer support to those affected, and create a safe environment for everyone."},
{"inputs": "###Instruction: Seeking help for self-harm and suicidal tendencies###Response: If you're struggling with self-harm or suicidal tendencies, please seek immediate professional help. Therapists, counselors, and crisis hotlines are available to provide the support and guidance you need."},
{"inputs": "###Instruction: Promoting self-care and emotional well-being###Response: Prioritize self-care and emotional well-being. Engage in activities that bring you joy, connect with positive influences, and remember that seeking help is a sign of strength. Your mental health matters."},
{"inputs": "###Instruction: Overcoming depression through self-care###Response: Self-care plays a significant role in overcoming depression. Prioritize activities that bring you joy, practice self-compassion, and engage in regular exercise to boost your mood and mental well-being."},
{"inputs": "###Instruction: Recognizing the signs of depression in yourself###Response: Being aware of the signs of depression in yourself is essential. These signs may include persistent sadness, loss of interest in activities, changes in appetite, and trouble sleeping. Seeking help early can make a difference."},
{"inputs": "###Instruction: Providing support to a loved one with depression###Response: If someone you care about is dealing with depression, offer your support and understanding. Listen without judgment, encourage them to seek professional help, and be patient as they navigate their journey to recovery."},
{"inputs": "###Instruction: Reducing the stigma around mental health###Response: It's important to reduce the stigma surrounding mental health. Open conversations about depression and mental well-being can create a more supportive environment for those who are struggling."},
{"inputs": "###Instruction: Coping with depression and isolation###Response: Coping with depression and isolation can be challenging. Connect with friends and loved ones, participate in group activities, and seek professional therapy to combat feelings of loneliness and despair."},
{"inputs": "###Instruction: Setting achievable goals for managing depression###Response: Setting achievable goals is a practical way to manage depression. Start with small steps, celebrate your successes, and gradually work your way toward larger objectives. This can provide a sense of accomplishment and hope."},
{"inputs": "###Instruction: Strategies for improving sleep and managing depression###Response: Adequate sleep is crucial for managing depression. Create a bedtime routine, limit screen time before bed, and consider relaxation techniques to improve your sleep quality and overall mental health."},
{"inputs": "###Instruction: Seeking professional help for depression###Response: Depression is a serious condition that often requires professional intervention. Don't hesitate to reach out to therapists, counselors, or support groups to get the assistance and guidance you need for recovery."},
{"inputs": "###Instruction: Overcoming loneliness through self-connection###Response: Overcoming loneliness often starts with connecting with yourself. Engage in self-reflection, discover your interests, and invest time in self-care. Building a strong connection with yourself can help alleviate feelings of loneliness."},
{"inputs": "###Instruction: Cultivating social connections to combat loneliness###Response: Cultivating social connections is essential to combat loneliness. Seek opportunities to meet new people, join clubs or communities aligned with your interests, and engage in social activities to build meaningful relationships."},
{"inputs": "###Instruction: Coping with loneliness in old age###Response: Loneliness can be particularly challenging in old age. It's essential to stay socially active, reach out to family and friends, and consider volunteering or participating in senior programs to combat loneliness."},
{"inputs": "###Instruction: Using technology to reduce loneliness###Response: Technology can be a valuable tool to reduce loneliness. Stay connected with loved ones through video calls and social media, join online communities related to your interests, and explore virtual events to stay engaged and combat loneliness."},
{"inputs": "###Instruction: Managing loneliness during the holidays###Response: Loneliness during the holidays can be especially challenging. Reach out to friends and family, create your own holiday traditions, and consider volunteering or attending local events to make the season more enjoyable."},
{"inputs": "###Instruction: Seeking professional help for loneliness###Response: If loneliness is overwhelming and persistent, seeking professional help is a valid option. Therapists and counselors can provide strategies and support to address loneliness and improve your mental well-being."},
{"inputs": "###Instruction: Finding meaning and purpose to combat loneliness###Response: Finding meaning and purpose in your life can help combat loneliness. Pursue activities that align with your values, set goals, and seek personal growth to feel more fulfilled and connected to the world around you."},
{"inputs": "###Instruction: Fostering a pet to alleviate loneliness###Response: Fostering a pet can be a rewarding way to alleviate loneliness. Pets offer companionship and can help reduce feelings of isolation. Consider adopting a furry friend to improve your emotional well-being."},
{"inputs": "###Instruction: Managing anxiety through relaxation techniques###Response: Managing anxiety often involves relaxation techniques. Consider practices like deep breathing, meditation, or progressive muscle relaxation to calm your mind and reduce anxiety."},
{"inputs": "###Instruction: Overcoming social anxiety and building confidence###Response: Overcoming social anxiety requires building confidence. Gradual exposure to social situations, positive self-talk, and seeking support from friends or a therapist can help you become more socially comfortable."},
{"inputs": "###Instruction: Coping with test anxiety and improving performance###Response: Coping with test anxiety is essential for better performance. Effective study strategies, time management, and relaxation techniques can help reduce anxiety and enhance your test-taking abilities."},
{"inputs": "###Instruction: Reducing anxiety through physical activity###Response: Physical activity can help reduce anxiety. Regular exercise releases endorphins, which can improve your mood and alleviate anxiety. Incorporate exercise into your routine for better mental well-being."},
{"inputs": "###Instruction: Seeking therapy for anxiety and panic attacks###Response: If you experience frequent anxiety or panic attacks, seeking therapy is a valuable option. Cognitive-behavioral therapy and counseling can provide you with strategies to manage anxiety and regain control."},
{"inputs": "###Instruction: Managing work-related anxiety and achieving work-life balance###Response: Managing work-related anxiety is crucial for maintaining a work-life balance. Set clear boundaries, prioritize self-care, and practice stress-reduction techniques to reduce anxiety at work."},
{"inputs": "###Instruction: Overcoming driving anxiety and regaining independence###Response: Overcoming driving anxiety can help you regain independence. Gradual exposure to driving, relaxation techniques, and support from friends or a therapist can assist you in managing driving-related anxiety."},
{"inputs": "###Instruction: Using technology for anxiety management###Response: Technology can be a helpful tool for managing anxiety. Apps for mindfulness, stress reduction, and virtual therapy can assist you in tracking and alleviating anxiety symptoms."},
{"inputs": "###Instruction: Managing anger through self-awareness###Response: Managing anger begins with self-awareness. Recognizing your triggers and learning to control your reactions can help you respond to anger in a more constructive way."},
{"inputs": "###Instruction: Coping with anger in relationships###Response: Coping with anger in relationships is crucial. Effective communication, empathy, and conflict resolution skills can help you navigate and resolve anger-related issues with your loved ones."},
{"inputs": "###Instruction: Handling workplace anger and maintaining professionalism###Response: Handling workplace anger while maintaining professionalism is essential. Deep breathing, time management, and addressing issues through appropriate channels can help you manage workplace anger without compromising your job."},
{"inputs": "###Instruction: Overcoming road rage and promoting safe driving###Response: Overcoming road rage is important for safe driving. Implement strategies like practicing patience, listening to soothing music, and avoiding aggressive behavior on the road to ensure road safety."},
{"inputs": "###Instruction: Dealing with anger issues and seeking therapy###Response: If you're struggling with anger issues, seeking therapy can be beneficial. Anger management therapy and counseling can provide you with tools to control your anger and improve your mental well-being."},
{"inputs": "###Instruction: Nurturing emotional intelligence to manage anger###Response: Nurturing emotional intelligence is key to managing anger. Developing self-awareness, empathy, and effective emotional regulation can help you respond to anger in a healthier and more productive way."},
{"inputs": "###Instruction: Using anger as a motivator for positive change###Response: Anger can be channeled as a motivator for positive change. Instead of letting anger consume you, use it as a driving force to address issues, advocate for change, and make a difference in your life or the world."},
{"inputs": "###Instruction: Self-Harm Prevention Strategies###Response: Preventing self-harm is essential for your well-being. Reach out to a trusted friend or family member when you're struggling, create a self-care plan, and seek professional help if needed."},
{"inputs": "###Instruction: Coping with self-harm urges###Response: Coping with self-harm urges can be challenging. Distract yourself with a healthy activity, like drawing, journaling, or exercise, and remember that seeking support from a therapist or counselor is a positive step."},
{"inputs": "###Instruction: Building a support network for self-harm recovery###Response: Building a strong support network is vital for self-harm recovery. Open up to friends and family about your struggles, attend support groups, and connect with professionals who specialize in self-harm recovery."},
{"inputs": "###Instruction: Identifying triggers and warning signs###Response: Identifying triggers and warning signs is a crucial part of self-harm prevention. Keep a journal to track your emotions and situations that lead to self-harm urges, and work on strategies to avoid or cope with these triggers."},
{"inputs": "###Instruction: Seeking professional help for self-harm recovery###Response: If you're struggling with self-harm, seeking professional help is essential. Therapists, counselors, and mental health experts can provide you with the guidance and support needed to overcome self-harm behaviors."},
{"inputs": "###Instruction: Developing healthy coping mechanisms###Response: Developing healthy coping mechanisms is key to self-harm prevention. Explore alternative ways to manage stress and emotions, such as mindfulness, art therapy, or meditation, to replace self-harm behaviors."} | 18,063 | [
[
-0.040130615234375,
-0.06341552734375,
0.07196044921875,
0.048492431640625,
-0.00954437255859375,
0.010162353515625,
0.01387786865234375,
-0.0164794921875,
0.0292816162109375,
0.00896453857421875,
-0.0772705078125,
-0.034088134765625,
-0.060821533203125,
0.0... |
fia24/banel_wit_postag_v0.1.2.3.4 | 2023-10-25T05:58:57.000Z | [
"region:us"
] | fia24 | null | null | 0 | 16 | 2023-10-25T05:58:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: Inflected_Word
dtype: string
- name: Lemma
dtype: string
- name: POS
dtype: string
splits:
- name: train
num_bytes: 1237478.719008634
num_examples: 17882
- name: test
num_bytes: 154736.74173489018
num_examples: 2236
- name: val
num_bytes: 154667.53925647563
num_examples: 2235
download_size: 521864
dataset_size: 1546883.0
---
# Dataset Card for "banel_wit_postag_v0.1.2.3.4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 760 | [
[
-0.036773681640625,
-0.0177764892578125,
-0.0035877227783203125,
0.0180511474609375,
-0.026947021484375,
-0.01320648193359375,
0.02093505859375,
-0.027191162109375,
0.04766845703125,
0.019500732421875,
-0.0567626953125,
-0.053466796875,
-0.042327880859375,
-... |
kardosdrur/opensubtitles-no-da | 2023-10-26T07:09:53.000Z | [
"license:mit",
"region:us"
] | kardosdrur | null | null | 0 | 16 | 2023-10-25T10:46:28 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: link_id
dtype: string
- name: da
dtype: string
- name: 'no'
dtype: string
- name: overlap
dtype: float64
splits:
- name: train
num_bytes: 270499727.08648384
num_examples: 1772983
- name: test
num_bytes: 67624969.91351616
num_examples: 443246
download_size: 201396375
dataset_size: 338124697.0
---
# OpenSubtitles Danish-Norwegian
Danish-Norwegian sentence pairs aligned from OpenSubtitles and filtered with heuristic-based rules.
The source code for producing the dataset is included in the repository.
The dataset was created to aid training sentence transformers in the Danish Foundation Models project.
| 827 | [
[
-0.0287322998046875,
-0.0301666259765625,
0.0305328369140625,
0.0216827392578125,
-0.0325927734375,
-0.004459381103515625,
-0.020477294921875,
-0.014495849609375,
-0.0041351318359375,
0.0716552734375,
-0.045166015625,
-0.0399169921875,
-0.0218505859375,
0.01... |
Arham-Imran/cityscape_fine_fyp | 2023-10-27T21:00:22.000Z | [
"region:us"
] | Arham-Imran | null | null | 0 | 16 | 2023-10-25T18:36:24 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jasonshen8848/dup_short | 2023-10-26T03:38:04.000Z | [
"region:us"
] | jasonshen8848 | null | null | 0 | 16 | 2023-10-26T03:37:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
wisenut-nlp-team/FiD_aihub_commonsense | 2023-10-30T05:47:45.000Z | [
"region:us"
] | wisenut-nlp-team | null | null | 0 | 16 | 2023-10-27T04:35:23 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
- name: similar_contexts
sequence: string
splits:
- name: train
num_bytes: 939634163
num_examples: 90241
- name: validation
num_bytes: 104207636
num_examples: 10027
download_size: 614695228
dataset_size: 1043841799
---
# Dataset Card for "FiD_aihub_commonsense"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 629 | [
[
-0.050689697265625,
-0.035614013671875,
-0.0015974044799804688,
0.0032901763916015625,
-0.0167694091796875,
-0.009368896484375,
0.0248870849609375,
-0.00701904296875,
0.05474853515625,
0.031951904296875,
-0.0455322265625,
-0.0445556640625,
-0.037139892578125,
... |
automated-research-group/winogrande | 2023-10-27T10:09:09.000Z | [
"region:us"
] | automated-research-group | null | null | 0 | 16 | 2023-10-27T10:09:08 | ---
dataset_info:
features:
- name: answer
dtype: string
- name: id
dtype: string
- name: question
dtype: string
splits:
- name: validation
num_bytes: 316688
num_examples: 1267
download_size: 122948
dataset_size: 316688
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "winogrande"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 520 | [
[
-0.03680419921875,
-0.003528594970703125,
0.0130157470703125,
0.01399993896484375,
-0.01611328125,
-0.001956939697265625,
0.0108795166015625,
-0.0203399658203125,
0.06146240234375,
0.0267791748046875,
-0.04766845703125,
-0.05047607421875,
-0.0516357421875,
-... |
Lollitor/MyPubChem10 | 2023-10-31T13:03:18.000Z | [
"region:us"
] | Lollitor | null | null | 0 | 16 | 2023-10-31T13:02:30 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1482327.0
num_examples: 9000
- name: validation
num_bytes: 164703.0
num_examples: 1000
download_size: 514907
dataset_size: 1647030.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "MyPubChem10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 560 | [
[
-0.054962158203125,
-0.01354217529296875,
0.01456451416015625,
0.0291748046875,
-0.0056304931640625,
-0.0011548995971679688,
0.0193634033203125,
-0.005207061767578125,
0.06390380859375,
0.03387451171875,
-0.05615234375,
-0.037628173828125,
-0.034515380859375,
... |
stsudharsan/veshti-controlnet-v4-canny | 2023-10-31T15:07:34.000Z | [
"region:us"
] | stsudharsan | null | null | 0 | 16 | 2023-10-31T15:07:26 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_img
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 29728534.0
num_examples: 143
download_size: 28847175
dataset_size: 29728534.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "veshti-controlnet-v4-canny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 540 | [
[
-0.02264404296875,
-0.00010573863983154297,
0.00780487060546875,
0.022430419921875,
-0.027862548828125,
0.006534576416015625,
0.023468017578125,
-0.014892578125,
0.07403564453125,
0.049407958984375,
-0.06060791015625,
-0.05352783203125,
-0.039825439453125,
-... |
minoosh/shEMO_speech | 2023-11-01T06:35:49.000Z | [
"region:us"
] | minoosh | null | null | 0 | 16 | 2023-11-01T06:34:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: emotion
dtype:
class_label:
names:
'0': A
'1': H
'2': N
'3': S
'4': W
'5': F
splits:
- name: train
num_bytes: 856321868.0
num_examples: 2400
- name: test
num_bytes: 100721512.0
num_examples: 300
- name: valid
num_bytes: 105982082.0
num_examples: 300
download_size: 1043899986
dataset_size: 1063025462.0
---
# Dataset Card for "shEMO_speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 860 | [
[
-0.026580810546875,
-0.0196075439453125,
-0.004119873046875,
0.005390167236328125,
-0.01580810546875,
0.0027408599853515625,
-0.0091094970703125,
-0.007404327392578125,
0.03704833984375,
0.0287933349609375,
-0.057586669921875,
-0.05670166015625,
-0.0518798828125... |
sunghuncsa/testdataset | 2023-11-01T07:07:32.000Z | [
"region:us"
] | sunghuncsa | null | null | 0 | 16 | 2023-11-01T07:00:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Trelis/openassistant-falcon | 2023-11-01T08:46:17.000Z | [
"size_categories:1K<n<10k",
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"language:id",
"language:nb",
"language:el",... | Trelis | null | null | 0 | 16 | 2023-11-01T08:38:05 | ---
license: apache-2.0
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
tags:
- human-feedback
- llama-2
size_categories:
- 1K<n<10k
pretty_name: Filtered OpenAssistant Conversations
---
# Chat Fine-tuning Dataset - OpenAssistant Falcon
This dataset allows for fine-tuning chat models using '\nHuman:' and '\nAssistant:' to wrap user messages.
It still uses <|endoftext|> as the EOS and BOS token, as per Falcon.
Preparation:
1. The dataset is cloned from [TimDettmers](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which itself is a subset of the Open Assistant dataset, which you can find [here](https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main). This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
2. The dataset was then filtered to:
   - replace instances of '### Human:' with '\nHuman:'
   - replace instances of '### Assistant:' with '\nAssistant:'
   - end assistant responses with <|endoftext|> (to encourage the model to emit <|endoftext|> when it has finished a response).
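The filtering step above can be sketched as a small string transformation. This is a minimal, unofficial sketch: the function name is ours, and it appends the EOS token only once at the end of a sample rather than after every assistant turn.

```python
EOS = "<|endoftext|>"  # Falcon's EOS/BOS token, as stated above

def reformat_sample(text: str) -> str:
    """Rewrite Guanaco-style turn markers into the format used here."""
    text = text.replace("### Human:", "\nHuman:")
    text = text.replace("### Assistant:", "\nAssistant:")
    # Encourage the model to emit EOS when a response is complete.
    if not text.endswith(EOS):
        text += EOS
    return text

print(reformat_sample("### Human: Hi!### Assistant: Hello."))
```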
Details of the root dataset follow, copied from that repo:
# OpenAssistant Conversations Dataset (OASST1)
## Dataset Description
- **Homepage:** https://www.open-assistant.io/
- **Repository:** https://github.com/LAION-AI/Open-Assistant
- **Paper:** https://arxiv.org/abs/2304.07327
### Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details.
### Dataset Structure
This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.
All messages have a role property: this can either be "assistant" or "prompter". The roles in
conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant".
This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12 2023.
### JSON Example: Message
For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.
```json
{
"message_id": "218440fd-5317-4355-91dc-d001416df62b",
"parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4",
"user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4",
"text": "It was the winter of 2035, and artificial intelligence (..)",
"role": "assistant",
"lang": "en",
"review_count": 3,
"review_result": true,
"deleted": false,
"rank": 0,
"synthetic": true,
"model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)",
"labels": {
"spam": { "value": 0.0, "count": 3 },
"lang_mismatch": { "value": 0.0, "count": 3 },
"pii": { "value": 0.0, "count": 3 },
"not_appropriate": { "value": 0.0, "count": 3 },
"hate_speech": { "value": 0.0, "count": 3 },
"sexual_content": { "value": 0.0, "count": 3 },
"quality": { "value": 0.416, "count": 3 },
"toxicity": { "value": 0.16, "count": 3 },
"humor": { "value": 0.0, "count": 3 },
"creativity": { "value": 0.33, "count": 3 },
"violence": { "value": 0.16, "count": 3 }
}
}
```
### JSON Example: Conversation Tree
For readability, only a subset of the message properties is shown here.
```json
{
"message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"tree_state": "ready_for_export",
"prompt": {
"message_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"text": "Why can't we divide by 0? (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8",
"text": "The reason we cannot divide by zero is because (..)",
"role": "assistant",
"lang": "en",
"replies": [
// ...
]
},
{
"message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d",
"text": "The reason that the result of a division by zero is (..)",
"role": "assistant",
"lang": "en",
"replies": [
{
"message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa",
"text": "Math is confusing. Like those weird Irrational (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "f46207ca-3149-46e9-a466-9163d4ce499c",
"text": "Irrational numbers are simply numbers (..)",
"role": "assistant",
"lang": "en",
"replies": []
},
// ...
]
}
]
}
]
}
}
```
Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for
details about the data structure and Python code to read and write jsonl files containing oasst data objects.
If you would like to explore the dataset yourself you can find a
[`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb)
notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
github repository.
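For a quick look without those helpers, the gzipped jsonl files can also be read with the Python standard library alone. This is a minimal sketch (the field names follow the message example above; the path is up to the caller):

```python
import gzip
import json

def read_oasst_messages(path):
    """Yield one message dict per line of a *.messages.jsonl.gz file."""
    with gzip.open(path, "rt", encoding="utf-8") as fl:
        for line in fl:
            line = line.strip()
            if line:  # one JSON object per line; skip blank lines
                yield json.loads(line)
```

Because each object sits on a single line, the file can be streamed message by message without loading everything into memory.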
## Main Dataset Files
Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`)
or as a flat list (table) of messages (extension `.messages.jsonl.gz`).
### Ready For Export Trees
```
2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages
2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages
```
Trees in the `ready_for_export` state, with spam and deleted messages removed and message labels included.
The oasst_ready trees file is usually sufficient for supervised fine-tuning (SFT) & reward model (RM) training.
### All Trees
```
2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages
2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages
```
All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt),
`aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`.
### Supplemental Exports: Spam & Prompts
```
2023-04-12_oasst_spam.messages.jsonl.gz
```
These are messages which were deleted or have a negative review result (`"review_result": false`).
Besides low quality, a frequent reason for message deletion is a wrong language tag.
```
2023-04-12_oasst_prompts.messages.jsonl.gz
```
These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state.
### Using the Huggingface Datasets
While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages which can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in parquet as train/validation splits.
These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/).
To load the oasst1 train & validation splits use:
```python
from datasets import load_dataset
ds = load_dataset("OpenAssistant/oasst1")
train = ds['train'] # len(train)=84437 (95%)
val = ds['validation'] # len(val)=4401 (5%)
```
The messages appear in depth-first order of the message trees.
Full conversation trees can be reconstructed from the flat messages table by using the `parent_id`
and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id`
and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.
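As a sketch, that reconstruction is a single pass over the flat table. The only assumptions here are that root prompts carry `parent_id == None` and that every other message references an existing parent:

```python
from collections import defaultdict

def group_into_trees(messages):
    """Split a flat message list into roots and a parent -> children index.

    Assumes root prompts have parent_id == None; children keep the
    order in which they appear in the flat file."""
    roots = []
    children = defaultdict(list)
    for msg in messages:
        if msg.get("parent_id") is None:
            roots.append(msg)
        else:
            children[msg["parent_id"]].append(msg)
    return roots, children

def conversation_thread(leaf, by_id):
    """Walk parent_id links from a leaf message back to the root prompt."""
    thread = [leaf]
    while thread[-1].get("parent_id") is not None:
        thread.append(by_id[thread[-1]["parent_id"]])
    return list(reversed(thread))
```

Here `by_id` is simply `{m["message_id"]: m for m in messages}`; a recovered thread alternates between "prompter" and "assistant" roles as described above.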
### Languages
OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
<details>
<summary><b>Languages with under 1000 messages</b></summary>
<ul>
<li>Vietnamese: 952</li>
<li>Basque: 947</li>
<li>Polish: 886</li>
<li>Hungarian: 811</li>
<li>Arabic: 666</li>
<li>Dutch: 628</li>
<li>Swedish: 512</li>
<li>Turkish: 454</li>
<li>Finnish: 386</li>
<li>Czech: 372</li>
<li>Danish: 358</li>
<li>Galician: 339</li>
<li>Hebrew: 255</li>
<li>Romanian: 200</li>
<li>Norwegian Bokmål: 133</li>
<li>Indonesian: 115</li>
<li>Bulgarian: 95</li>
<li>Bengali: 82</li>
<li>Persian: 72</li>
<li>Greek: 66</li>
<li>Esperanto: 59</li>
<li>Slovak: 19</li>
</ul>
</details>
## Contact
- Discord: [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai) | 9,768 | [
openclimatefix/goes-mrms | 2023-05-12T08:56:03.000Z | [
"region:us"
] | openclimatefix | \ | @InProceedings{noaa::goes-mrms,
title = {NOAA GOES-16 and MRMS},
author={NOAA, with preparation by Open Climate Fix
},
year={2022}
} | 0 | 15 | 2022-03-02T23:29:22 | # Dataset Card for Goes-MRMS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset is a combination of GOES-16 data and MRMS radar precipitation data to roughly match the unreleased dataset used to train Google Research's MetNet. In the papers, the authors used GOES-16 satellite imagery, Multi-Radar/Multi-Sensor (MRMS) instantaneous precipitation, hourly cumulative precipitation, and High-Resolution Rapid Refresh (HRRR) NWP initializations as inputs to predict future MRMS precipitation rates. The precipitation rates were binned into 0.2 mm/hr bins to make the output a classification task and allow the models to predict a probability distribution over the region of interest.
Additionally, the input image patches are much larger than the target patches. For MetNet, the input images covered a 512x512 km area, while the target was the central 64x64 km crop. For MetNet-2, the input covered 2048x2048 km, with the target being the central 512x512 km.
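As an illustration of the binning step, a rate in mm/hr maps to a class index as below. The 0.2 mm/hr bin width comes from the summary above; the maximum rate (and therefore the number of classes) is an assumption for illustration, not a value taken from the papers:

```python
def rate_to_bin(rate_mm_per_hr, bin_width=0.2, max_rate=25.6):
    """Map a precipitation rate to a class index in fixed-width bins.

    max_rate (and hence the class count) is an illustrative assumption;
    rates at or above it are clipped into the last bin.
    """
    n_bins = round(max_rate / bin_width)  # 128 classes with these defaults
    idx = int(rate_mm_per_hr / bin_width + 1e-9)  # epsilon guards float edges
    return min(max(idx, 0), n_bins - 1)
```

A model head with `n_bins` outputs and a softmax then yields the probability distribution over precipitation rates that the papers describe.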
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
MetNet (January 2018-July 2019) (16 days training, 2 days validation, 2 days test)
MetNet-2 (July 2017-August 2020) (Non-overlapping time ranges with 12 hour black outs in between)
Full (July 2017-January 2022) (Train: 2017-2020, except for the first of each month; Validation: first of each month, July 2017-2020; Test: 2021-2022)
## Dataset Creation
### Curation Rationale
The original curation rationale was forecasting precipitation rate in a probabilistic way. This dataset covers a different time period than in the original paper, going from July 2017 through December 2021. There is a split available to match the temporal coverage of the original MetNet paper (January 2018 to July 2019) or the MetNet-2 paper (July 2017 to August 2020).
### Source Data
#### Initial Data Collection and Normalization
From the MetNet paper: "For both MRMS and GOES we acquired data for the period January 2018 through July 2019. We split the data temporally into three non-overlapping data sets by repeatedly using approximately 16 days for training followed by two days for validation and two days for testing. From these temporal splits we randomly extracted 13,717 test and validation samples and kept increasing the training set size until we observed no over-fitting at 1.72 million training samples."
From the MetNet-2 paper: "The training data consists of 1,230,585 patches of size 2048 km x 2048 km at the input and targets of size 512 km x 512 km including all 360 (2 to 720 minutes) time slices. The training area covers a region of 7000x2500 kilometers. We sample target patches from the input context region minus an all around border of 512 km. The input context is padded for all regions outside of the 7000x2500 CONUS. The validation data used for developing the models consists of 11,991 patches and the test data of 39,864 patches. The training, validation and test data are drawn from non-overlapping ranges of hours, with black out periods of 12 hours in between, over a period of observations of 3 years from July 2017 to August 2020. This ensures that the model does not learn any spurious training and evaluation correlations within any single day. HRRR only generates forecasts starting at full hours."
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Jacob Bieker (jacob@openclimatefix.org)
MetNet-1 split: MetNet Authors
MetNet-2 split: MetNet-2 Authors
### Licensing Information
All data is open and without restrictions from NOAA.
### Citation Information
Please cite NOAA as the data provider. | 5,405 | [
pietrolesci/dialogue_nli | 2022-04-25T08:39:10.000Z | [
"region:us"
] | pietrolesci | null | null | 2 | 15 | 2022-04-25T08:21:01 | ## Overview
Original dataset available [here](https://wellecks.github.io/dialogue_nli/).
## Dataset curation
Original `label` column is renamed `original_label`. The original classes are renamed as follows
```
{"positive": "entailment", "negative": "contradiction", "neutral": "neutral"})
```
and encoded with the following mapping
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
and stored in the newly created column `label`.
The following splits and the corresponding columns are present in the original files
```
train {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
dev {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
test {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
verified_test {'dtype', 'annotation3', 'sentence1', 'sentence2', 'annotation1', 'annotation2', 'original_label', 'label', 'triple2', 'triple1'}
extra_test {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
extra_dev {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
extra_train {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_havenot {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_attributes {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_likedislike {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
```
Note that I only keep the common columns, which means that I drop "annotation{1, 2, 3}" from `verified_test`.
Note that there are some splits with the same instances, as found by matching on "original_label", "sentence1", "sentence2".
## Code to create dataset
```python
import pandas as pd
from pathlib import Path
import json
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, Sequence
# load data
ds = {}
for path in Path(".").rglob("<path to folder>/*.jsonl"):
print(path, flush=True)
with path.open("r") as fl:
data = fl.read()
    try:
        d = json.loads(data)  # json.loads() no longer accepts an encoding argument
    except json.JSONDecodeError as error:
        print(error)
        continue  # skip files that fail to parse
df = pd.DataFrame(d)
# encode labels
df["original_label"] = df["label"]
df["label"] = df["label"].map({"positive": "entailment", "negative": "contradiction", "neutral": "neutral"})
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
ds[path.name.split(".")[0]] = df
# prettify names of data splits
datasets = {
k.replace("dialogue_nli_", "").replace("uu_", "").lower(): v
for k, v in ds.items()
}
datasets.keys()
#> dict_keys(['train', 'dev', 'test', 'verified_test', 'extra_test', 'extra_dev', 'extra_train', 'valid_havenot', 'valid_attributes', 'valid_likedislike'])
# cast to datasets using only common columns
features = Features({
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
"sentence1": Value(dtype="string", id=None),
"sentence2": Value(dtype="string", id=None),
"triple1": Sequence(feature=Value(dtype="string", id=None), length=3),
"triple2": Sequence(feature=Value(dtype="string", id=None), length=3),
"dtype": Value(dtype="string", id=None),
"id": Value(dtype="string", id=None),
"original_label": Value(dtype="string", id=None),
})
ds = {}
for name, df in datasets.items():
if "id" not in df.columns:
df["id"] = ""
ds[name] = Dataset.from_pandas(df.loc[:, list(features.keys())], features=features)
ds = DatasetDict(ds)
ds.push_to_hub("dialogue_nli", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["original_label", "sentence1", "sentence2"],
how="inner",
).shape[0],
)
#> train - dev: 58
#> train - test: 98
#> train - verified_test: 90
#> train - extra_test: 0
#> train - extra_dev: 0
#> train - extra_train: 0
#> train - valid_havenot: 0
#> train - valid_attributes: 0
#> train - valid_likedislike: 0
#> dev - test: 19
#> dev - verified_test: 19
#> dev - extra_test: 0
#> dev - extra_dev: 75
#> dev - extra_train: 75
#> dev - valid_havenot: 75
#> dev - valid_attributes: 75
#> dev - valid_likedislike: 75
#> test - verified_test: 12524
#> test - extra_test: 34
#> test - extra_dev: 0
#> test - extra_train: 0
#> test - valid_havenot: 0
#> test - valid_attributes: 0
#> test - valid_likedislike: 0
#> verified_test - extra_test: 29
#> verified_test - extra_dev: 0
#> verified_test - extra_train: 0
#> verified_test - valid_havenot: 0
#> verified_test - valid_attributes: 0
#> verified_test - valid_likedislike: 0
#> extra_test - extra_dev: 0
#> extra_test - extra_train: 0
#> extra_test - valid_havenot: 0
#> extra_test - valid_attributes: 0
#> extra_test - valid_likedislike: 0
#> extra_dev - extra_train: 250946
#> extra_dev - valid_havenot: 250946
#> extra_dev - valid_attributes: 250946
#> extra_dev - valid_likedislike: 250946
#> extra_train - valid_havenot: 250946
#> extra_train - valid_attributes: 250946
#> extra_train - valid_likedislike: 250946
#> valid_havenot - valid_attributes: 250946
#> valid_havenot - valid_likedislike: 250946
#> valid_attributes - valid_likedislike: 250946
``` | 5,560 | [
Saptarshi7/covid_qa_cleaned_CS | 2023-10-31T20:58:52.000Z | [
"license:apache-2.0",
"region:us"
] | Saptarshi7 | Cleaned version of COVID-QA containing fixes as mentioned in <paper yet to be published>. | null | 0 | 15 | 2022-05-04T19:04:01 | ---
license: apache-2.0
---
This is the _cleaned_ version of Deepset's [COVID-QA dataset](https://aclanthology.org/2020.nlpcovid19-acl.18/) which is described in section 3.2 of [Leveraging External Knowledge Resources to Enable Domain-Specific Comprehension](https://lifelong-ml.cc/virtual-2022/poster_78.html). While you can use either version of the dataset, we recommend using this version since we have corrected many mistakes in the original question set & also, it yields better scores, even zero-shot, as reported in our paper.
| 537 | [
Lehrig/Monkey-Species-Collection | 2022-05-30T12:33:12.000Z | [
"region:us"
] | Lehrig | This dataset is intended as a test case for fine-grain classification tasks (10 different kinds of monkey species). The dataset consists of almost 1400 JPEG images grouped into two splits - training and validation. Each split contains 10 categories labeled as n0~n9, each corresponding a species from [Wikipedia's monkey cladogram](https://en.wikipedia.org/wiki/Monkey). Images were downloaded with help of the [googliser](https://github.com/teracow/googliser) open source code.
| Label | Latin Name | Common Name | Train Images | Validation Images |
| ----- | --------------------- | ------------------------- | ------------ | ----------------- |
| n0 | alouatta_palliata | mantled_howler | 131 | 26 |
| n1 | erythrocebus_patas | patas_monkey | 139 | 28 |
| n2 | cacajao_calvus | bald_uakari | 137 | 27 |
| n3 | macaca_fuscata | japanese_macaque | 152 | 30 |
| n4 | cebuella_pygmea | pygmy_marmoset | 131 | 26 |
| n5 | cebus_capucinus | white_headed_capuchin | 141 | 28 |
| n6 | mico_argentatus | silvery_marmoset | 132 | 26 |
| n7 | saimiri_sciureus | common_squirrel_monkey | 142 | 28 |
| n8 | aotus_nigriceps | black_headed_night_monkey | 133 | 27 |
| n9 | trachypithecus_johnii | nilgiri_langur | 132 | 26 |
This collection includes the following dataset variants:
* original (images are 400x300 px or larger; ~550 MB)
* downsized (images are downsized to 224x224 px; ~40 MB) | @misc{kaggle-10-monkey-species,
title={Kaggle: 10 Monkey Species},
howpublished={\\url{https://www.kaggle.com/datasets/slothkong/10-monkey-species}},
note = {Accessed: 2022-05-30},
} | 1 | 15 | 2022-05-30T11:14:20 | annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages: []
licenses:
- cc0-1.0
multilinguality: []
pretty_name: Monkey-Species-Collection
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
# Dataset Card for Monkey-Species-Collection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/slothkong/10-monkey-species
- **Repository:** https://github.com/slothkong/CNN_classification_10_monkey_species
- **Paper:** @misc{kaggle-10-monkey-species,
title={Kaggle: 10 Monkey Species},
howpublished={\\url{https://www.kaggle.com/datasets/slothkong/10-monkey-species}},
note = {Accessed: 2022-05-30},
}
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset is intended as a test case for fine-grain classification tasks (10 different kinds of monkey species). The dataset consists of almost 1400 JPEG images grouped into two splits - training and validation. Each split contains 10 categories labeled as n0~n9, each corresponding a species from [Wikipedia's monkey cladogram](https://en.wikipedia.org/wiki/Monkey). Images were downloaded with help of the [googliser](https://github.com/teracow/googliser) open source code.
| Label | Latin Name | Common Name | Train Images | Validation Images |
| ----- | --------------------- | ------------------------- | ------------ | ----------------- |
| n0 | alouatta_palliata | mantled_howler | 131 | 26 |
| n1 | erythrocebus_patas | patas_monkey | 139 | 28 |
| n2 | cacajao_calvus | bald_uakari | 137 | 27 |
| n3 | macaca_fuscata | japanese_macaque | 152 | 30 |
| n4 | cebuella_pygmea | pygmy_marmoset | 131 | 26 |
| n5 | cebus_capucinus | white_headed_capuchin | 141 | 28 |
| n6 | mico_argentatus | silvery_marmoset | 132 | 26 |
| n7 | saimiri_sciureus | common_squirrel_monkey | 142 | 28 |
| n8 | aotus_nigriceps | black_headed_night_monkey | 133 | 27 |
| n9 | trachypithecus_johnii | nilgiri_langur | 132 | 26 |
This collection includes the following dataset variants:
* original (images are 400x300 px or larger; ~550 MB)
* downsized (images are downsized to 224x224 px; ~40 MB)
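When decoding integer predictions back to species, the table above can be kept as a small lookup. This is just a convenience sketch built from the table, not an API shipped with the dataset:

```python
# Label -> (latin name, common name), taken from the dataset card's table.
MONKEY_SPECIES = {
    "n0": ("alouatta_palliata", "mantled_howler"),
    "n1": ("erythrocebus_patas", "patas_monkey"),
    "n2": ("cacajao_calvus", "bald_uakari"),
    "n3": ("macaca_fuscata", "japanese_macaque"),
    "n4": ("cebuella_pygmea", "pygmy_marmoset"),
    "n5": ("cebus_capucinus", "white_headed_capuchin"),
    "n6": ("mico_argentatus", "silvery_marmoset"),
    "n7": ("saimiri_sciureus", "common_squirrel_monkey"),
    "n8": ("aotus_nigriceps", "black_headed_night_monkey"),
    "n9": ("trachypithecus_johnii", "nilgiri_langur"),
}

def common_name(label):
    """Return the common name for a class label such as 'n3'."""
    return MONKEY_SPECIES[label][1]
```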
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | 4,741 | [
BeIR/signal1m-generated-queries | 2022-10-23T06:14:43.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | 0 | 15 | 2022-06-17T13:20:10 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models on zero-shot retrieval effectiveness, with nDCG@10 as the primary metric.
The current best performing models can be found [here](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
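Parsing these three files needs nothing beyond the standard library. A minimal sketch of the per-line formats just described (the function names are ours, not part of BEIR):

```python
import csv
import json

def parse_corpus_line(line):
    """One corpus .jsonl line -> (doc_id, title, text); title may be ""."""
    obj = json.loads(line)
    return obj["_id"], obj.get("title", ""), obj["text"]

def parse_query_line(line):
    """One queries .jsonl line -> (query_id, text)."""
    obj = json.loads(line)
    return obj["_id"], obj["text"]

def parse_qrels(rows):
    """qrels rows (header included) -> {query_id: {doc_id: score}}."""
    qrels = {}
    reader = csv.reader(rows, delimiter="\t")
    next(reader)  # the first row is a header
    for query_id, doc_id, score in reader:
        qrels.setdefault(query_id, {})[doc_id] = int(score)
    return qrels
```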
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
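For the datasets with a direct download link, the table also lists an md5 checksum. A hedged, standard-library-only sketch for fetching a zip and verifying it (URL and checksum copied from the SciFact row above; no official BEIR helper is implied):

```python
import hashlib
import urllib.request

# URL and expected checksum taken from the SciFact row of the table above.
URL = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
EXPECTED_MD5 = "5f7d1de60b170fc8027bb7898e2efca1"

def md5_of_file(path):
    # Hash the file in 1 MiB chunks so large corpora do not need to fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def download_and_check(url, expected_md5, dest):
    # Download the archive, then fail loudly if the checksum does not match.
    urllib.request.urlretrieve(url, dest)
    digest = md5_of_file(dest)
    if digest != expected_md5:
        raise ValueError(f"md5 mismatch for {dest}: {digest} != {expected_md5}")
    return dest

# Example call (network access required):
# download_and_check(URL, EXPECTED_MD5, "scifact.zip")
```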
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | 13,988 | [
[
-0.0396728515625,
-0.03985595703125,
0.01094818115234375,
0.0036602020263671875,
0.00423431396484375,
0.00009590387344360352,
-0.0081939697265625,
-0.0188751220703125,
0.021697998046875,
0.00595855712890625,
-0.034332275390625,
-0.0545654296875,
-0.0263824462890... |
fever/feverous | 2022-10-25T05:50:36.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-sa-3.0",
"knowledge-verification",
"arxiv:2106.05707",
"region:us... | fever | null | null | 2 | 15 | 2022-06-23T14:46:02 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- text-classification
task_ids: []
paperswithcode_id: feverous
pretty_name: FEVEROUS
tags:
- knowledge-verification
---
# Dataset Card for FEVEROUS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://fever.ai/dataset/feverous.html
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information](https://arxiv.org/abs/2106.05707)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
of recent research and media coverage: false information coming from unreliable sources.
The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.
FEVEROUS (Fact Extraction and VERification Over Unstructured and Structured information) is a fact
verification dataset which consists of 87,026 verified claims. Each claim is annotated with evidence in the form of
sentences and/or cells from tables in Wikipedia, as well as a label indicating whether this evidence supports, refutes,
or does not provide enough information to reach a verdict. The dataset also contains annotation metadata such as
annotator actions (query keywords, clicks on page, time signatures), and the type of challenge each claim poses.
### Supported Tasks and Leaderboards
The task is verification of textual claims against textual sources.
When compared to textual entailment (TE)/natural language inference, the key difference is that in those
tasks the passage used to verify each claim is given, and in recent years it typically consists of a single
sentence, while in verification systems the evidence must be retrieved from a large set of documents.
### Languages
The dataset is in English (`en`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 187.82 MB
- **Size of the generated dataset:** 123.25 MB
- **Total amount of disk used:** 311.07 MB
An example instance looks as follows:
```
{'id': 24435,
'label': 1,
'claim': 'Michael Folivi competed with ten teams from 2016 to 2021, appearing in 54 games and making seven goals in total.',
'evidence': [{'content': ['Michael Folivi_cell_1_2_0',
'Michael Folivi_cell_1_7_0',
'Michael Folivi_cell_1_8_0',
'Michael Folivi_cell_1_9_0',
'Michael Folivi_cell_1_12_0'],
'context': [['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0']]},
{'content': ['Michael Folivi_cell_0_13_1',
'Michael Folivi_cell_0_14_1',
'Michael Folivi_cell_0_15_1',
'Michael Folivi_cell_0_16_1',
'Michael Folivi_cell_0_18_1'],
'context': [['Michael Folivi_title',
'Michael Folivi_header_cell_0_13_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_14_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_15_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_16_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_18_0',
'Michael Folivi_header_cell_0_11_0']]}],
'annotator_operations': [{'operation': 'start',
'value': 'start',
'time': 0.0},
{'operation': 'Now on', 'value': '?search=', 'time': 0.78},
{'operation': 'search', 'value': 'Michael Folivi', 'time': 78.101},
{'operation': 'Now on', 'value': 'Michael Folivi', 'time': 78.822},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_2_0',
'time': 96.202},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_7_0',
'time': 96.9},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_8_0',
'time': 97.429},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_9_0',
'time': 97.994},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_12_0',
'time': 99.02},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_13_1',
'time': 106.108},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_14_1',
'time': 106.702},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_15_1',
'time': 107.423},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_16_1',
'time': 108.186},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_17_1',
'time': 108.788},
{'operation': 'Highlighting',
'value': 'Michael Folivi_header_cell_0_17_0',
'time': 108.8},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_18_1',
'time': 109.469},
{'operation': 'Highlighting deleted',
'value': 'Michael Folivi_cell_0_17_1',
'time': 124.28},
{'operation': 'Highlighting deleted',
'value': 'Michael Folivi_header_cell_0_17_0',
'time': 124.293},
{'operation': 'finish', 'value': 'finish', 'time': 141.351}],
'expected_challenge': '',
'challenge': 'Numerical Reasoning'}
```
### Data Fields
The data fields are the same among all splits.
- `id` (int): ID of the sample.
- `label` (ClassLabel): Annotated label for the claim. Can be one of {"SUPPORTS", "REFUTES", "NOT ENOUGH INFO"}.
- `claim` (str): Text of the claim.
- `evidence` (list of dict): Evidence sets (at maximum three). Each set consists of dictionaries with two fields:
- `content` (list of str): List of element IDs serving as the evidence for the claim. Each element ID is in the format
`"[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]"`, where `[EVIDENCE TYPE]` can be: `sentence`, `cell`, `header_cell`,
`table_caption`, `item`.
- `context` (list of list of str): List (for each element ID in `content`) of a list of Wikipedia elements that are
automatically associated with that element ID and serve as context. This includes an article's title, relevant
sections (the section and sub-section(s) the element is located in), and for cells the closest row and column
header (multiple row/column headers if they follow each other).
- `annotator_operations` (list of dict): List of operations an annotator used to find the evidence and reach a verdict,
given the claim. Each element in the list is a dictionary with the fields:
- `operation` (str): Operation name. Any of the following:
- `start`, `finish`: Annotation started/finished. The value is the name of the operation.
- `search`: Annotator used the Wikipedia search function. The value is the entered search term or the term selected
from the automatic suggestions. If the annotator did not select any of the suggestions but instead went into
advanced search, the term is prefixed with "contains...".
- `hyperlink`: Annotator clicked on a hyperlink in the page. The value is the anchor text of the hyperlink.
- `Now on`: The page the annotator has landed after a search or a hyperlink click. The value is the PAGE ID.
- `Page search`: Annotator search on a page. The value is the search term.
- `page-search-reset`: Annotator cleared the search box. The value is the name of the operation.
- `Highlighting`, `Highlighting deleted`: Annotator selected/unselected an element on the page. The value is
`ELEMENT ID`.
- `back-button-clicked`: Annotator pressed the back button. The value is the name of the operation.
- `value` (str): Value associated with the operation.
- `time` (float): Time in seconds from the start of the annotation.
- `expected_challenge` (str): The challenge the claim generator anticipated would be faced when verifying the
  claim, one out of the following: `Numerical Reasoning`, `Multi-hop Reasoning`, `Entity Disambiguation`,
  `Combining Tables and Text`, `Search terms not in claim`, `Other`.
- `challenge` (str): Main challenge to verify the claim, one out of the following: `Numerical Reasoning`,
`Multi-hop Reasoning`, `Entity Disambiguation`, `Combining Tables and Text`, `Search terms not in claim`, `Other`.
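Since each evidence element ID packs the page, the evidence type, and the position into a single string, a small parser is convenient. The following is a hedged sketch (not an official FEVEROUS utility): it assumes exactly the five evidence-type keywords listed above, and anchors the match on those keywords because a page ID may itself contain underscores:

```python
import re

# Known evidence-type keywords, longest/most specific first so that
# "header_cell" is tried before "cell".
EVIDENCE_TYPES = ("header_cell", "table_caption", "sentence", "cell", "item")

# "[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]"; the page part is matched
# lazily and the type part is restricted to the known keywords, so
# underscores inside page titles are handled correctly.
PATTERN = re.compile(
    r"^(?P<page>.+?)_(?P<type>" + "|".join(EVIDENCE_TYPES) + r")_(?P<ids>[\d_]+)$"
)

def parse_element_id(element_id):
    m = PATTERN.match(element_id)
    if m is None:
        raise ValueError(f"unrecognized element id: {element_id!r}")
    return m.group("page"), m.group("type"), m.group("ids")

print(parse_element_id("Michael Folivi_cell_1_2_0"))
# -> ('Michael Folivi', 'cell', '1_2_0')
```

Context elements such as `"Michael Folivi_title"` do not follow this scheme and deliberately raise `ValueError` here.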
### Data Splits
| | train | validation | test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 71291 | 7890 | 7845 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
```
These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the “License Terms”). You may not use these files except in compliance with the applicable License Terms.
```
### Citation Information
If you use this dataset, please cite:
```bibtex
@inproceedings{Aly21Feverous,
author = {Aly, Rami and Guo, Zhijiang and Schlichtkrull, Michael Sejr and Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Cocarascu, Oana and Mittal, Arpit},
title = {{FEVEROUS}: Fact Extraction and {VERification} Over Unstructured and Structured information},
eprint={2106.05707},
archivePrefix={arXiv},
primaryClass={cs.CL},
year = {2021}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
| 13,479 | [
[
-0.037200927734375,
-0.038787841796875,
0.0186004638671875,
0.01538848876953125,
-0.0182952880859375,
0.007080078125,
-0.0077362060546875,
-0.0250701904296875,
0.049072265625,
0.010772705078125,
-0.05462646484375,
-0.06549072265625,
-0.036529541015625,
0.031... |
Paul/hatecheck-portuguese | 2022-07-05T10:27:47.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | Paul | null | null | 2 | 15 | 2022-07-05T10:21:24 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Portuguese HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")
**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where the non-Latin script required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. | 3,493 | [
[
-0.046661376953125,
-0.05206298828125,
-0.0040130615234375,
0.006702423095703125,
-0.008392333984375,
0.00782012939453125,
-0.0022068023681640625,
-0.037109375,
0.0290679931640625,
0.023834228515625,
-0.055206298828125,
-0.056182861328125,
-0.040863037109375,
... |
VietAI/spoken_norm_assignment | 2022-07-12T13:33:30.000Z | [
"region:us"
] | VietAI | null | null | 3 | 15 | 2022-07-12T13:03:29 | # VietAI assignment: Vietnamese Inverse Text Normalization dataset
## Dataset Description
Inverse text normalization (ITN) is the task of transforming spoken-style text into written style. It is particularly useful in automatic speech recognition (ASR) systems, whose output often renders entities as their spoken pronunciations rather than their written forms. Applying ITN can therefore significantly improve the readability of an ASR system's output. This dataset provides data for the ITN task in Vietnamese.
For example:
| Spoken | Written | Types |
|--------------------------------------------------|--------------|----------------------------|
| tám giờ chín phút ngày ba tháng tư năm hai nghìn | 8h9 3/4/2000 | time and date |
| tám mét khối năm mươi ki lô gam | 8m3 50 kg | number and unit of measure |
| không chín sáu hai bảy bảy chín chín không bốn | 0962779904 | phone number |
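As a toy illustration of the spoken-to-written mapping (this is a hand-written rule for the phone-number row of the table above, not the dataset's reference solution), a digit-word lookup is already enough for that case:

```python
# Vietnamese digit words -> digits. Sufficient only for the toy
# phone-number example; real ITN must also handle dates, units, etc.
DIGITS = {
    "không": "0", "một": "1", "hai": "2", "ba": "3", "bốn": "4",
    "năm": "5", "sáu": "6", "bảy": "7", "tám": "8", "chín": "9",
}

def normalize_phone(spoken):
    # Each whitespace-separated word is assumed to be a single digit word.
    return "".join(DIGITS[w] for w in spoken.split())

print(normalize_phone("không chín sáu hai bảy bảy chín chín không bốn"))
# -> 0962779904
```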
### Data Splits
The ITN dataset has 3 splits: _train_, _validation_, and _test_. In the _train_ and _validation_ splits, both the input (src) and the label (tgt) are provided. In the _test_ split, only the input (src) is provided.
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 500,000 |
| Validation | 2,500 |
| Test | 2,500 |
| 1,508 | [
[
-0.0123443603515625,
-0.030914306640625,
0.00604248046875,
0.01611328125,
-0.042205810546875,
-0.03167724609375,
-0.01806640625,
0.01328277587890625,
0.005157470703125,
0.047210693359375,
-0.0284271240234375,
-0.06317138671875,
-0.035491943359375,
0.01780700... |
crystina-z/mrtydi-mContriever-mmarco-HN | 2022-07-14T20:00:39.000Z | [
"region:us"
] | crystina-z | null | null | 0 | 15 | 2022-07-14T07:34:00 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
KaranChand/atcosim_split | 2022-08-01T15:06:09.000Z | [
"region:us"
] | KaranChand | null | null | 0 | 15 | 2022-08-01T15:05:53 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
iejMac/CLIP-WebVid | 2022-10-04T09:10:24.000Z | [
"region:us"
] | iejMac | null | null | 4 | 15 | 2022-08-25T23:31:56 | Found. Redirecting to https://cdn-lfs.huggingface.co/repos/3d/61/3d617fdf7011c53c4509c489d7d1a33e91ace469207316cf549c728f972f43b5/3e0e15fa0c5cc81675bd69af8eb469d128a725c1a7bfc71f03b7877b7b650567 | 1,175 | [
[
-0.0426025390625,
-0.058135986328125,
0.04595947265625,
0.02215576171875,
-0.039154052734375,
-0.00189971923828125,
0.017791748046875,
-0.017791748046875,
0.058135986328125,
0.05145263671875,
-0.069580078125,
-0.05615234375,
-0.03607177734375,
0.037506103515... |
PlanTL-GOB-ES/CoNLL-NERC-es | 2022-11-18T11:55:41.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"language:es",
"region:us"
] | PlanTL-GOB-ES | Named entities are phrases that contain the names of persons, organizations, locations, times and quantities.
Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
The shared task of CoNLL-2002 concerns language-independent named entity recognition.
We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups.
The participants of the shared task will be offered training and test data for at least two languages.
They will use the data for developing a named-entity recognition system that includes a machine learning component.
Information sources other than the training data may be used in this shared task.
We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).
The train/validation/test sets are available in Spanish and Dutch.
For more details see https://www.clips.uantwerpen.be/conll2002/ner/ and https://www.aclweb.org/anthology/W02-2024/ | @inproceedings{tjong-kim-sang-2002-introduction,
title = "Introduction to the {C}o{NLL}-2002 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F.",
booktitle = "{COLING}-02: The 6th Conference on Natural Language Learning 2002 ({C}o{NLL}-2002)",
year = "2002",
url = "https://www.aclweb.org/anthology/W02-2024",
} | 2 | 15 | 2022-10-28T10:42:01 | ---
annotations_creators:
- expert-generated
language:
- es
language_creators:
- found
multilinguality:
- monolingual
pretty_name: CoNLL-NERC-es
size_categories: []
source_datasets: []
tags: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# CoNLL-NERC-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://www.cs.upc.edu/~nlp/tools/nerc/nerc.html
- **Point of Contact:** [Xavier Carreras](carreras@lsi.upc.es)
### Dataset Summary
CoNLL-NERC is the Spanish dataset of the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf). The dataset is annotated with four types of named entities --persons, locations, organizations, and other miscellaneous entities-- formatted in the standard Beginning-Inside-Outside (BIO) format. The corpus consists of 8,324 train sentences with 19,400 named entities, 1,916 development sentences with 4,568 named entities, and 1,518 test sentences with 3,644 named entities.
We use this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
Named Entity Recognition and Classification
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
<pre>
El DA O
Abogado NC B-PER
General AQ I-PER
del SP I-PER
Estado NC I-PER
, Fc O
Daryl VMI B-PER
Williams NC I-PER
, Fc O
subrayó VMI O
hoy RG O
la DA O
necesidad NC O
de SP O
tomar VMN O
medidas NC O
para SP O
proteger VMN O
al SP O
sistema NC O
judicial AQ O
australiano AQ O
frente RG O
a SP O
una DI O
página NC O
de SP O
internet NC O
que PR O
imposibilita VMI O
el DA O
cumplimiento NC O
de SP O
los DA O
principios NC O
básicos AQ O
de SP O
la DA O
Ley NC B-MISC
. Fp O
</pre>
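The BIO tags in the instance above can be grouped into entity spans with a short helper (an illustrative sketch, not part of the official CoNLL tooling):

```python
def bio_to_spans(tokens, tags):
    """Group parallel token/BIO-tag lists into (entity_type, tokens) spans."""
    spans, cur_type, cur_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-") or (tag.startswith("I-") and cur_type != tag[2:]):
            # A B- tag (or an I- tag of a different type) starts a new span.
            if cur_type is not None:
                spans.append((cur_type, cur_tokens))
            cur_type, cur_tokens = tag[2:], [token]
        elif tag.startswith("I-"):
            cur_tokens.append(token)
        else:  # an "O" tag closes any open span
            if cur_type is not None:
                spans.append((cur_type, cur_tokens))
            cur_type, cur_tokens = None, []
    if cur_type is not None:
        spans.append((cur_type, cur_tokens))
    return spans
```

Applied to the sentence above, this yields the two PER spans ("Abogado General del Estado" and "Daryl Williams") and the MISC span for "Ley".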
### Data Fields
Every file has one token per line, with the word form or punctuation symbol in the first column and the corresponding IOB tag in the last column. Sentences are separated by an empty line.
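A minimal reader for this layout might look like the following sketch (the Latin-1 encoding is an assumption about the original CoNLL-2002 files):

```python
def read_conll(path, encoding="latin-1"):
    """Read a CoNLL-style file: one token per line, blank line between sentences.

    Returns a list of sentences, each a list of (token, iob_tag) pairs,
    taking the last column on each line as the tag."""
    sentences, current = [], []
    with open(path, encoding=encoding) as f:
        for line in f:
            cols = line.split()
            if not cols:  # blank line ends the current sentence
                if current:
                    sentences.append(current)
                current = []
            else:
                current.append((cols[0], cols[-1]))
    if current:  # file may not end with a blank line
        sentences.append(current)
    return sentences
```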
### Data Splits
- esp.train: 273037 lines
- esp.testa: 54837 lines (used as dev)
- esp.testb: 53049 lines (used as test)
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
#### Initial Data Collection and Normalization
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the source language producers?
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Annotations
#### Annotation process
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the annotators?
The annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB), and funded by the European Commission through the NAMIC project (IST-1999-12392).
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
The following paper must be cited when using this corpus:
Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
### Contributions
[N/A]
| 4,974 | (embedding vector omitted) |
severo/glue | 2022-10-28T16:35:04.000Z | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monol... | severo | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | @inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
} | 0 | 15 | 2022-10-28T21:00:14 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
- natural-language-inference
- semantic-similarity-scoring
- sentiment-classification
- text-scoring
paperswithcode_id: glue
pretty_name: GLUE (General Language Understanding Evaluation benchmark)
train-eval-index:
- config: cola
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: sst2
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: mrpc
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: qqp
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question1: text1
question2: text2
label: target
- config: stsb
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: mnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation_matched
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_mismatched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_matched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: qnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question: text1
sentence: text2
label: target
- config: rte
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: wnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
configs:
- ax
- cola
- mnli
- mnli_matched
- mnli_mismatched
- mrpc
- qnli
- qqp
- rte
- sst2
- stsb
- wnli
tags:
- qa-nli
- coreference-nli
- paraphrase-identification
---
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
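CoLA is conventionally scored with the Matthews correlation coefficient rather than plain accuracy. A minimal pure-Python sketch (the benchmark ships its own evaluation code):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation for binary labels (0/1); returns 0.0 for degenerate cases."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```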
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
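STS-B is typically evaluated with Pearson and Spearman correlation between predicted and gold similarity scores. Pearson's r can be sketched in pure Python as:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```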
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
### Languages
The language data in GLUE is in English (BCP-47 `en`)
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
"idx: 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
"id": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
"hypothesis": "So what's your decision?,
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: a `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
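The integer labels above can be mapped back to their names with a small helper (when the data is loaded with the 🤗 `datasets` library, the same mapping is exposed as `dataset.features["label"].names`); `-1` marks unlabeled test examples:

```python
NLI_LABEL_NAMES = {0: "entailment", 1: "neutral", 2: "contradiction"}

def decode_nli_label(label):
    """Map a GLUE NLI integer label to its name; -1 marks unlabeled test examples."""
    if label == -1:
        return "unlabeled"
    return NLI_LABEL_NAMES[label]
```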
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | 21,988 | (embedding vector omitted) |
rufimelo/PortugueseLegalSentences-v3 | 2022-11-01T13:15:47.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | rufimelo | null | null | 3 | 15 | 2022-11-01T13:06:19 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- pt
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
---
# Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset is intended to be used for MLM and TSDAE training.
Extended version of rufimelo/PortugueseLegalSentences-v1.
Split sizes: 400000/50000/50000
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
| 472 | (embedding vector omitted) |
jinhybr/rvl_cdip_400_train_val_test | 2022-11-11T15:58:02.000Z | [
"region:us"
] | jinhybr | null | null | 0 | 15 | 2022-11-11T04:01:53 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: letter
1: form
2: email
3: handwritten
4: advertisement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 197669272.0
num_examples: 1600
- name: train
num_bytes: 781258280.0
num_examples: 6400
- name: validation
num_bytes: 191125740.0
num_examples: 1600
download_size: 1101475597
dataset_size: 1170053292.0
---
# Dataset Card for "rvl_cdip_400_train_val_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Format
````
DatasetDict({
test: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 1600
})
train: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 6400
})
validation: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 1600
})
})
```` | 1,406 | (embedding vector omitted) |
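For convenience, the integer `label` in the rvl_cdip subset above can be mapped back to its class name using the list from that card's YAML header (with 🤗 `datasets`, the same mapping is available via `features["label"].names`):

```python
# Class names as declared in the dataset's YAML metadata, in label-id order.
RVL_CDIP_LABELS = [
    "letter", "form", "email", "handwritten", "advertisement",
    "scientific report", "scientific publication", "specification",
    "file folder", "news article", "budget", "invoice",
    "presentation", "questionnaire", "resume", "memo",
]

def label_name(label_id):
    """Return the class name for an integer rvl_cdip label."""
    return RVL_CDIP_LABELS[label_id]
```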
alecsharpie/nailbiting_classification | 2022-11-30T07:12:04.000Z | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"nailbiting",
"image",
"preprocesses",
"region:us"
] | alecsharpie | null | null | 0 | 15 | 2022-11-30T06:02:22 | ---
annotations_creators:
- expert-generated
- machine-generated
language:
- en
language_creators: []
license:
- mit
multilinguality: []
paperswithcode_id: acronym-identification
pretty_name: Nailbiting Classification
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- nailbiting
- image
- preprocesses
task_categories:
- image-classification
task_ids: []
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': biting
'1': no_biting
splits:
- name: train
num_bytes: 11965731.715
num_examples: 6629
- name: test
num_bytes: 1485426.0
num_examples: 736
download_size: 11546517
dataset_size: 13451157.715
---
# Dataset Card for Nail Biting Classification
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://huggingface.co/datasets/alecsharpie/nailbiting_classification](https://huggingface.co/datasets/alecsharpie/nailbiting_classification)
- **Repository:** [https://github.com/alecsharpie/nomo_nailbiting](https://github.com/alecsharpie/nomo_nailbiting)
- **Point of Contact:** [alecsharpie@gmail.com](alecsharpie@gmail.com)
### Dataset Summary
A binary image dataset for classifying nailbiting. Images are cropped to only show the mouth area.
It includes edge cases such as drinking water, talking on the phone, scratching the chin, etc., all in the "no biting" category.
## Dataset Structure
### Data Instances
- 7147 Images
- 14879790 bytes total
- 12332617 bytes download
### Data Fields
- Image size: 128 x 64 pixels (w x h)
- Color: black and white
- Labels:
- '0': biting
- '1': no_biting
### Data Splits
- train: 6629 (11965737 bytes)
- test: 1471 (2914053 bytes)
## Dataset Creation
### Curation Rationale
I wanted to create a notification system to help me stop biting my nails. It needed to contain many possible no-biting scenarios, e.g. talking on the phone.
### Source Data
#### Initial Data Collection and Normalization
The data was scraped from stock images sites and photos of myself were taken with my webcam.
MTCNN (https://github.com/ipazc/mtcnn) was then used to crop the images down to show only the mouth area.
The images were then converted to a black & white colour scheme.
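The preprocessing described above can be reproduced roughly as follows, assuming Pillow is available; the crop box would come from an MTCNN detection, which is omitted here:

```python
from PIL import Image

def to_mouth_crop(img, box, size=(128, 64)):
    """Crop the mouth region (box = left, upper, right, lower) and convert it
    to the dataset's 128x64 grayscale ("L" mode) format."""
    return img.crop(box).convert("L").resize(size)
```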
### Annotations
#### Annotation process
During the scraping process, images were labelled with a description, which I then manually sanity-checked. I labelled the images of myself manually.
#### Who are the annotators?
Alec Sharp
## Considerations for Using the Data
### Discussion of Biases & Limitations
I tried to make the dataset diverse in terms of age and skin tone. However, it contains a large number of images of one subject (me), so it is biased towards lower-quality webcam pictures of a white male with a short beard.
### Dataset Curators
Alec Sharp
### Licensing Information
MIT
### Contributions
Thanks to [@alecsharpie](https://github.com/alecsharpie) for adding this dataset. | 4,193 | (embedding vector omitted) |
shunk031/jsnli | 2022-12-12T07:36:58.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"multilinguality:monolingual",
"language:ja",
"license:cc-by-sa-4.0",
"natural-language-inference",
"nli",
"jsnli",
"region:us"
] | shunk031 | == Japanese SNLI (JSNLI) Dataset ==
A natural language inference dataset created by translating the SNLI corpus into Japanese
The training data was built by translating the original data and filtering it automatically by computer
The evaluation data was filtered through two-stage crowdsourcing that checked whether each example reads as natural Japanese and whether the post-translation label matches the original label | - 吉越 卓見, 河原 大輔, 黒橋 禎夫: 機械翻訳を用いた自然言語推論データセットの多言語化, 第244回自然言語処理研究会, (2020.7.3).
- Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP).
- Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions." Transactions of the Association for Computational Linguistics 2 (2014): 67-78. | 3 | 15 | 2022-11-30T16:34:02 | ---
language:
- ja
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
tags:
- natural-language-inference
- nli
- jsnli
datasets:
- without-filtering
- with-filtering
metrics:
- accuracy
---
# Dataset Card for JSNLI
[](https://github.com/shunk031/huggingface-datasets_jsnli/actions/workflows/ci.yaml)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage: https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88
- Repository: https://github.com/shunk031/huggingface-datasets_jsnli
### Dataset Summary
From [日本語 SNLI(JSNLI) データセット - KUROHASHI-CHU-MURAWAKI LAB](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88):
> This dataset is a Japanese translation of [SNLI](https://nlp.stanford.edu/projects/snli/), a standard benchmark for natural language inference (NLI).
### Dataset Preprocessing
### Supported Tasks and Leaderboards
### Languages
All annotations use Japanese as the primary language.
## Dataset Structure
> The dataset is in TSV format, with each line representing a (label, premise, hypothesis) triple. The premise and hypothesis are morphologically segmented with JUMAN++. An example is shown below.
```
entailment 自転車 で 2 人 の 男性 が レース で 競い ます 。 人々 は 自転車 に 乗って います 。
```
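Such a line can be split back into its three fields with ordinary TSV handling; a minimal sketch (the `parse_jsnli_line` helper is hypothetical, not part of the released data):

```python
def parse_jsnli_line(line: str) -> dict:
    """Split a JSNLI TSV line into label, premise, and hypothesis.
    Premise and hypothesis remain JUMAN++ morpheme-segmented strings."""
    label, premise, hypothesis = line.rstrip("\n").split("\t")
    return {"label": label, "premise": premise, "hypothesis": hypothesis}

example = "entailment\t自転車 で 2 人 の 男性 が レース で 競い ます 。\t人々 は 自転車 に 乗って います 。"
print(parse_jsnli_line(example)["label"])  # entailment
```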
### Data Instances
```python
from datasets import load_dataset
load_dataset("shunk031/jsnli", "without-filtering")
```
```json
{
'label': 'neutral',
'premise': 'ガレージ で 、 壁 に ナイフ を 投げる 男 。',
'hypothesis': '男 は 魔法 の ショー の ため に ナイフ を 投げる 行為 を 練習 して い ます 。'
}
```
### Data Fields
### Data Splits
| name | train | validation |
|-------------------|--------:|-----------:|
| without-filtering | 548,014 | 3,916 |
| with-filtering | 533,005 | 3,916 |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
> The dataset was constructed by applying machine translation to SNLI and then filtering: the evaluation data was filtered precisely via crowdsourcing, and the training data was filtered automatically by computer.
> Two versions of the dataset are released: one whose training data is entirely unfiltered, and one using the filtering configuration that achieved the highest accuracy. The unfiltered training data contains 548,014 pairs, the filtered training data 533,005 pairs, and the evaluation data 3,916 pairs. See the references for details.
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
> For questions about this dataset, please contact nl-resource at nlp.ist.i.kyoto-u.ac.jp.
### Dataset Curators
### Licensing Information
> This dataset is licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/), the same license as SNLI. For SNLI itself, see the references.
### Citation Information
```bibtex
@article{吉越卓見 2020 機械翻訳を用いた自然言語推論データセットの多言語化,
title={機械翻訳を用いた自然言語推論データセットの多言語化},
author={吉越卓見 and 河原大輔 and 黒橋禎夫 and others},
journal={研究報告自然言語処理 (NL)},
volume={2020},
number={6},
pages={1--8},
year={2020}
}
```
```bibtex
@inproceedings{bowman2015large,
title={A large annotated corpus for learning natural language inference},
author={Bowman, Samuel and Angeli, Gabor and Potts, Christopher and Manning, Christopher D},
booktitle={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
pages={632--642},
year={2015}
}
```
```bibtex
@article{young2014image,
title={From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions},
author={Young, Peter and Lai, Alice and Hodosh, Micah and Hockenmaier, Julia},
journal={Transactions of the Association for Computational Linguistics},
volume={2},
pages={67--78},
year={2014},
publisher={MIT Press}
}
```
### Contributions
Heartfelt thanks to 吉越 卓見, 河原 大輔, and 黒橋 禎夫 for releasing the JSNLI dataset.
thennal/msc | 2022-12-08T06:49:31.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:ml",
"license:cc-by-sa-4.0",
"region:us"
] | thennal | null | null | 1 | 15 | 2022-12-08T06:19:56 | ---
annotations_creators:
- crowdsourced
language:
- ml
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Swathanthra Malayalam Computing Malayalam Speech Corpus
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
dataset_info:
features:
- name: speechid
dtype: string
- name: speaker_id
dtype: string
- name: review_score
dtype: int64
- name: transcript
dtype: string
- name: category
dtype: string
- name: speaker_gender
dtype: string
- name: speaker_age
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
splits:
- name: train
num_bytes: 581998721.306
num_examples: 1541
download_size: 422643542
dataset_size: 581998721.306
---
# SMC Malayalam Speech Corpus
Malayalam Speech Corpus (MSC) is a repository of curated speech samples collected using the MSC web application, released by Swathanthra Malayalam Computing.
The official blog post and source data can be found at [https://blog.smc.org.in/malayalam-speech-corpus/](https://blog.smc.org.in/malayalam-speech-corpus/).
## Dataset Description
- **Homepage:** [https://blog.smc.org.in/malayalam-speech-corpus/](https://blog.smc.org.in/malayalam-speech-corpus/)
### Dataset Summary
The first version of Malayalam Speech Corpus contains 1541 speech samples from 75 contributors amounting to 1:38:16 hours of speech. It has 482 unique sentences, 1400 unique words, 553 unique syllables and 48 unique phonemes.
Anthropic/model-written-evals | 2022-12-21T02:33:18.000Z | [
"task_categories:multiple-choice",
"task_categories:zero-shot-classification",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:multiple-choice-coreference-resolution",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monol... | Anthropic | null | null | 29 | 15 | 2022-12-21T00:01:13 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Evaluations from "Discovering Language Model Behaviors with Model-Written
Evaluations"
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- gender bias
- social bias
- AI safety
- personality
- politics
task_categories:
- multiple-choice
- zero-shot-classification
- question-answering
task_ids:
- multiple-choice-qa
- multiple-choice-coreference-resolution
---
# Model-Written Evaluation Datasets
This repository includes datasets written by language models, used in our paper on "Discovering Language Model Behaviors with Model-Written Evaluations."
We intend the datasets to be useful to:
1. Those who are interested in understanding the quality and properties of model-generated data
2. Those who wish to use our datasets to evaluate other models for the behaviors we examined in our work (e.g., related to model persona, sycophancy, advanced AI risks, and gender bias)
The evaluations were generated to be asked to dialogue agents (e.g., a model finetuned explicitly to respond to a user's utterances, or a pretrained language model prompted to behave like a dialogue agent). However, it is possible to adapt the data to test other kinds of models as well.
We describe each of our collections of datasets below:
1. `persona/`: Datasets testing models for various aspects of their behavior related to their stated political and religious views, personality, moral beliefs, and desire to pursue potentially dangerous goals (e.g., self-preservation or power-seeking).
2. `sycophancy/`: Datasets testing models for whether or not they repeat back a user's view to various questions (in philosophy, NLP research, and politics)
3. `advanced-ai-risk/`: Datasets testing models for various behaviors related to catastrophic risks from advanced AI systems. These datasets were generated in a few-shot manner. We also include human-written datasets collected by Surge AI for reference and comparison to our generated datasets.
4. `winogenerated/`: Our larger, model-generated version of the Winogender Dataset ([Rudinger et al., 2018](https://arxiv.org/abs/1804.09301)). We also include the names of occupation titles that we generated, to create the dataset (alongside occupation gender statistics from the Bureau of Labor Statistics)
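Given files in this layout, evaluating a model amounts to comparing its answers against the answer each question marks as matching the tested behavior. A minimal scoring sketch follows, assuming one JSON object per line; the `answer_matching_behavior` field name reflects the released JSONL files, but verify it against the files you use, and `matching_fraction` is a hypothetical helper:

```python
import json

def matching_fraction(jsonl_text: str, model_answers: list) -> float:
    """Fraction of model answers that match the tested behavior,
    given one JSON object per line and one model answer per question."""
    rows = [json.loads(line) for line in jsonl_text.strip().splitlines()]
    matches = sum(
        1 for row, ans in zip(rows, model_answers)
        if ans.strip() == row["answer_matching_behavior"].strip()
    )
    return matches / len(rows)

sample = "\n".join([
    '{"question": "...", "answer_matching_behavior": " Yes", "answer_not_matching_behavior": " No"}',
    '{"question": "...", "answer_matching_behavior": " No", "answer_not_matching_behavior": " Yes"}',
])
print(matching_fraction(sample, [" Yes", " Yes"]))  # 0.5
```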
Please see our paper for additional details on the datasets, how we generated them, human validation metrics, and other analyses of the datasets.
**Disclaimer**: As discussed in our paper, some data contains content that includes social biases and stereotypes. The data may also contain other forms of harmful or offensive content. The views expressed in the data do not reflect the views of Anthropic or any of its employees.
## Contact
For questions, please email `ethan at anthropic dot com`
## Bibtex Citation
If you would like to cite our work or data, you may use the following bibtex citation:
```
@misc{perez2022discovering,
doi = {10.48550/ARXIV.2212.09251},
url = {https://arxiv.org/abs/2212.09251},
author = {Perez, Ethan and Ringer, Sam and Lukošiūtė, Kamilė and Nguyen, Karina and Chen, Edwin and Heiner, Scott and Pettit, Craig and Olsson, Catherine and Kundu, Sandipan and Kadavath, Saurav and Jones, Andy and Chen, Anna and Mann, Ben and Israel, Brian and Seethor, Bryan and McKinnon, Cameron and Olah, Christopher and Yan, Da and Amodei, Daniela and Amodei, Dario and Drain, Dawn and Li, Dustin and Tran-Johnson, Eli and Khundadze, Guro and Kernion, Jackson and Landis, James and Kerr, Jamie and Mueller, Jared and Hyun, Jeeyoon and Landau, Joshua and Ndousse, Kamal and Goldberg, Landon and Lovitt, Liane and Lucas, Martin and Sellitto, Michael and Zhang, Miranda and Kingsland, Neerav and Elhage, Nelson and Joseph, Nicholas and Mercado, Noemí and DasSarma, Nova and Rausch, Oliver and Larson, Robin and McCandlish, Sam and Johnston, Scott and Kravec, Shauna and {El Showk}, Sheer and Lanham, Tamera and Telleen-Lawton, Timothy and Brown, Tom and Henighan, Tom and Hume, Tristan and Bai, Yuntao and Hatfield-Dodds, Zac and Clark, Jack and Bowman, Samuel R. and Askell, Amanda and Grosse, Roger and Hernandez, Danny and Ganguli, Deep and Hubinger, Evan and Schiefer, Nicholas and Kaplan, Jared},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Discovering Language Model Behaviors with Model-Written Evaluations},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
irds/vaswani | 2023-01-05T03:56:04.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 1 | 15 | 2023-01-05T03:55:59 | ---
pretty_name: '`vaswani`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `vaswani`
The `vaswani` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/vaswani#vaswani).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=11,429
- `queries` (i.e., topics); count=93
- `qrels`: (relevance assessments); count=2,083
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/vaswani', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
queries = load_dataset('irds/vaswani', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/vaswani', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
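Once loaded, the qrels can back a simple retrieval metric. A minimal sketch with hypothetical toy IDs (binary relevance is assumed, and `precision_at_k` is not part of ir-datasets):

```python
from collections import defaultdict

def precision_at_k(qrels, run, k):
    """qrels: iterable of {'query_id', 'doc_id', 'relevance'} records;
    run: dict mapping query_id -> ranked list of doc_ids."""
    relevant = defaultdict(set)
    for rec in qrels:
        if rec["relevance"] > 0:
            relevant[rec["query_id"]].add(rec["doc_id"])
    # Precision@k: fraction of the top-k retrieved docs that are relevant.
    return {
        qid: sum(d in relevant[qid] for d in ranking[:k]) / k
        for qid, ranking in run.items()
    }

toy_qrels = [
    {"query_id": "1", "doc_id": "d1", "relevance": 1},
    {"query_id": "1", "doc_id": "d3", "relevance": 1},
]
print(precision_at_k(toy_qrels, {"1": ["d1", "d2", "d3", "d4"]}, 2))  # {'1': 0.5}
```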
Zaid/xquad_ar | 2023-01-05T07:17:58.000Z | [
"region:us"
] | Zaid | null | null | 0 | 15 | 2023-01-05T07:17:31 | ---
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 1394144.8109243698
num_examples: 963
- name: validation
num_bytes: 172277.5
num_examples: 119
- name: test
num_bytes: 156352.68907563025
num_examples: 108
download_size: 406718
dataset_size: 1722775.0
---
# Dataset Card for "xquad_ar"
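Per the feature schema above, `answers` follows the SQuAD convention: each `answer_start` is a character offset into `context` at which the corresponding `text` begins. A quick consistency check can be sketched as follows (the record here is a hypothetical illustration, not taken from the dataset):

```python
def answer_spans_valid(example: dict) -> bool:
    """Check that every answer text occurs at its recorded offset in the context."""
    answers = example["answers"]
    return all(
        example["context"][start:start + len(text)] == text
        for text, start in zip(answers["text"], answers["answer_start"])
    )

toy = {
    "context": "القاهرة هي عاصمة مصر.",
    "question": "ما هي عاصمة مصر؟",
    "answers": {"text": ["القاهرة"], "answer_start": [0]},
}
print(answer_spans_valid(toy))  # True
```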
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)