id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
versae/norwegian-paws-x | 2023-09-23T11:03:06.000Z | [
"region:us"
] | versae | Norwegian PAWS-X (GCP), Bokmaal and Nynorsk machine-translated versions of PAWS-X.
PAWS-X is a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages.
This dataset contains 23,659 human-translated PAWS evaluation pairs and 296,406 machine-translated
training pairs in six typologically distinct languages: French, Spanish, German,
Chinese, Japanese, and Korean. English is available by default. All translated
pairs are sourced from examples in PAWS-Wiki.
For further details, see the accompanying paper: PAWS-X: A Cross-lingual Adversarial Dataset
for Paraphrase Identification (https://arxiv.org/abs/1908.11828)
NOTE: There might be some missing or wrong labels in the dataset and we have replaced them with -1. | @InProceedings{pawsx2019emnlp,
title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},
author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},
booktitle = {Proc. of EMNLP},
year = {2019}
} | null | 0 | 19 | Entry not found |
tahercoolguy/lawyers_demo | 2023-09-25T07:45:25.000Z | [
"language:ar",
"language:en",
"license:gpl",
"region:us"
] | tahercoolguy | null | null | null | 0 | 19 | ---
language:
- ar
- en
license: gpl
dataset_info:
features:
- name: text
dtype: string
- name: document_name
dtype: string
- name: pages
dtype: int64
- name: __index_level_0__
dtype: int64
- name: input
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 10216196
num_examples: 1724
download_size: 8676904
dataset_size: 10216196
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Nicolas-BZRD/BALO_opendata | 2023-09-28T19:03:01.000Z | [
"size_categories:100K<n<1M",
"language:fr",
"license:odc-by",
"finance",
"legal",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 19 | ---
language:
- fr
license: odc-by
size_categories:
- 100K<n<1M
pretty_name: Bulletin of mandatory legal notices
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1106418284
num_examples: 135575
download_size: 439587100
dataset_size: 1106418284
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- finance
- legal
---
# BALO (Bulletin of mandatory legal notices)
Announcements published in the [BALO](https://www.data.gouv.fr/en/datasets/balo/) (Bulletin des annonces légales obligatoires).
The BALO publishes compulsory notices for companies making public offerings and for banking and credit institutions. The announcements relate to all financial transactions, accounting documents and notices of shareholders' general meetings. |
tyzhu/squad_title_v3_train_30_eval_10 | 2023-09-26T08:01:41.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 19 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 658246
num_examples: 378
- name: validation
num_bytes: 68651
num_examples: 60
download_size: 123968
dataset_size: 726897
---
# Dataset Card for "squad_title_v3_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wikipunk/yago45en | 2023-09-28T16:37:11.000Z | [
"task_categories:graph-ml",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"size_categories:100M<n<1B",
"source_datasets:wikidata",
"language:en",
"license:cc-by-sa-3.0",
"knowledge-graph",
"rdf",
"triples",
"region:us"
] | wikipunk | null | null | null | 7 | 19 | ---
language:
- en
license: cc-by-sa-3.0
license_link: https://creativecommons.org/licenses/by-sa/3.0/
tags:
- knowledge-graph
- rdf
- triples
annotations_creators:
- crowdsourced
- expert-generated
source_datasets:
- wikidata
pretty_name: YAGO 4.5 (EN)
size_categories:
- 100M<n<1B
task_categories:
- graph-ml
dataset_info:
features:
- name: subject
dtype: string
- name: predicate
dtype: string
- name: object
dtype: string
config_name: default
splits:
- name: train
num_bytes: 42709902295
num_examples: 249675587
dataset_size: 42709902295
viewer: false
---
# YAGO 4.5 Dataset (English subset for LLM fine-tuning)
To utilize the YAGO 4.5 (EN) Dataset, users should ensure they have the following prerequisites installed:
### Software
- Python (Tested with 3.10)
- [Hugging Face Datasets
Library](https://huggingface.co/docs/datasets/): Required for loading and processing the dataset.
```sh
pip install datasets
pip install rdflib
```
### Hardware
* Sufficient storage: The dataset is approximately 43 GB; ensure you
have enough storage space to download and extract it.
* Multi-core processor: For efficient data loading and processing, a
multi-core processor is recommended; the more threads available, the
faster the dataset can be loaded.
## Dataset Description
This dataset contains triples filtered from yago-facts.ttl and
yago-beyond-wikipedia.ttl in the YAGO 4.5 dataset. The SPARQL query
used to filter the triples is in `filter.sparql`. This represents
a subset of the YAGO 4.5 dataset retaining only English labels.
I remapped to `http://yago-knowledge.org/resource/` some properties
that were not present in the schema.org vocabulary. I also removed
schema:sameAs and owl:sameAs relations from this dataset, as well as
triples with xsd:anyURI object literals: my goal is to use this
dataset for fine-tuning a large language model for knowledge graph
completion, and I do not want to train the base model to predict
these kinds of relations.
### Overview
YAGO 4.5 is the latest version of the YAGO knowledge base. It is
based on Wikidata — the largest public general-purpose knowledge
base. YAGO refines the data as follows:
* All entity identifiers and property identifiers are human-readable.
* The top-level classes come from schema.org — a standard repertoire
of classes and properties maintained by Google and others. The lower
level classes are a careful selection of the Wikidata taxonomy.
* The properties come from schema.org.
* YAGO 4.5 contains semantic constraints in the form of SHACL. These
constraints keep the data clean, and allow for logical reasoning on
YAGO.
### Dataset Structure
The dataset is structured as follows:
- **yago-taxonomy.ttl:** Contains the `rdfs:subClassOf` relations
for YAGO and the prefix mappings for the N-Triples.
- **facts.tar.gz:** Compressed file containing chunks of the
dataset in N-Triples format, representing the factual knowledge in
YAGO.
### Features
Each RDF triple in the dataset is represented with the following features:
- **subject:** The subject of the triple, representing the entity.
- **predicate:** The predicate of the triple, representing the
relationship between the subject and object.
- **object:** The object of the triple, representing the entity or
value linked by the predicate.
### Chunks
The dataset is logically divided into multiple chunks, each containing
a subset of RDF triples. Users can load specific chunks or the entire
dataset based on their requirements.
## Usage
### Loading the Dataset
The dataset can be loaded using the Hugging Face `datasets` library as follows:
```python
from datasets import load_dataset
dataset = load_dataset('wikipunk/yago45en', num_proc=4, split='train')
```
```python
# Accessing the first row of the dataset
first_row = dataset[0]
# Output: {'subject': '<http://yago-knowledge.org/resource/Sdsscgb_11322_U002E_4_Q85387516>',
# 'predicate': '<http://www.w3.org/2000/01/rdf-schema#comment>',
# 'object': '"galaxy"@en'}
```
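Because each field already carries N-Triples syntax, a row can be reassembled into a single N-Triples statement and parsed with `rdflib` (installed above). A minimal sketch, assuming `dataset` was loaded as shown:
```python
from rdflib import Graph

# A minimal sketch: join the three fields of one row into a single
# N-Triples statement (terminated by " .") and parse it with rdflib.
row = dataset[0]
nt_line = f"{row['subject']} {row['predicate']} {row['object']} .\n"

g = Graph()
g.parse(data=nt_line, format="nt")  # "nt" selects the N-Triples parser

for s, p, o in g:
    print(s, p, o)  # parsed URIRef/Literal terms instead of raw strings
```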
## Additional Information
### Licensing
The YAGO 4.5 dataset is available under the [Creative Commons Attribution-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation
If you use the YAGO 4.5 dataset in your work, please cite the
following publication:
```bibtex
@article{suchanek2023integrating,
title={Integrating the Wikidata Taxonomy into YAGO},
author={Suchanek, Fabian M and Alam, Mehwish and Bonald, Thomas and Paris, Pierre-Henri and Soria, Jules},
journal={arXiv preprint arXiv:2308.11884},
year={2023}
}
```
|
gayanin/pubmed-abstracts | 2023-09-27T22:44:14.000Z | [
"region:us"
] | gayanin | null | null | null | 0 | 19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: refs
dtype: string
splits:
- name: train
num_bytes: 9419993
num_examples: 74724
- name: test
num_bytes: 1206965
num_examples: 9341
- name: validation
num_bytes: 1239760
num_examples: 9341
download_size: 6522287
dataset_size: 11866718
---
# Dataset Card for "pubmed-abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kewu93/dreambooth | 2023-09-28T16:38:30.000Z | [
"region:us"
] | kewu93 | null | null | null | 0 | 19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 63956933.0
num_examples: 90
- name: val
num_bytes: 47721308.0
num_examples: 68
download_size: 111584859
dataset_size: 111678241.0
---
# Dataset Card for "dreambooth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dloring1/OpenOrca-mini-1 | 2023-09-29T23:53:04.000Z | [
"region:us"
] | Dloring1 | null | null | null | 0 | 19 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 762396101
num_examples: 423392
download_size: 435767099
dataset_size: 762396101
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "OpenOrca-mini-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PranavVerma-droid/llama2-7b-training | 2023-10-10T06:12:10.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"llama2",
"region:us"
] | PranavVerma-droid | null | null | null | 0 | 19 | ---
license: mit
language:
- en
tags:
- llama2
size_categories:
- 1K<n<10K
---
This is a general-purpose dataset made for Llama 2. It includes information about math, real-world events, science, instructions for doing things in real life, etc.
This dataset has no foul language or leaked data; it is completely safe and open source to use!
Written by [PranavVerma-droid](https://portfolio.craftingrealm.tk) <br>
This dataset is licensed; please credit the owner when using it. |
shossain/govreport-qa-16384 | 2023-10-02T05:41:29.000Z | [
"region:us"
] | shossain | null | null | null | 0 | 19 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1541722952
num_examples: 7238
download_size: 215326747
dataset_size: 1541722952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pedropauletti/librispeech-portuguese | 2023-10-02T21:22:08.000Z | [
"region:us"
] | pedropauletti | null | null | null | 0 | 19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence:
sequence: float32
- name: speaker_embeddings
sequence: float32
splits:
- name: train
num_bytes: 1448647649.3426037
num_examples: 4648
- name: test
num_bytes: 161134000.58307362
num_examples: 517
download_size: 1435028926
dataset_size: 1609781649.9256773
---
# Dataset Card for "librispeech-portuguese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shossain/govreport-qa-5-8192 | 2023-10-03T21:26:08.000Z | [
"region:us"
] | shossain | null | null | null | 0 | 19 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 410925
num_examples: 5
download_size: 110024
dataset_size: 410925
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-5-8192"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jangmin/ecommerce_purchase_history_v2 | 2023-10-03T05:31:00.000Z | [
"region:us"
] | jangmin | null | null | null | 0 | 19 | ---
dataset_info:
features:
- name: user_id
dtype: int64
- name: day
dtype: string
- name: order_ts
dtype: string
- name: positive_prod_id
dtype: int64
- name: negative_prod_id
dtype: int64
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: effective_order_infos
list:
list:
- name: contents
list:
- name: category_id
dtype: int64
- name: product_id
dtype: int64
- name: text
dtype: string
- name: order_id
dtype: string
- name: order_ts
dtype: timestamp[us]
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 193522291
num_examples: 86264
- name: test
num_bytes: 74028559
num_examples: 21566
- name: conservative_test
num_bytes: 40121578
num_examples: 8236
download_size: 44200184
dataset_size: 307672428
---
# Dataset Card for "ecommerce_purchase_history_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
katielink/medtrain_raw | 2023-10-03T21:59:27.000Z | [
"license:apache-2.0",
"medical",
"region:us"
] | katielink | null | null | null | 0 | 19 | ---
license: apache-2.0
dataset_info:
features:
- name: raw_card
dtype: string
- name: raw_tag
dtype: string
- name: deck
dtype: string
splits:
- name: train
num_bytes: 57798047
num_examples: 118879
download_size: 14194941
dataset_size: 57798047
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- medical
---
# Dataset Card for "medtrain_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
someone13574/topic-to-question | 2023-10-09T03:55:30.000Z | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] | someone13574 | null | null | null | 0 | 19 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
---
# Topic -> Question
This dataset consists of just under 10.5k question-topic pairs, for use as prompts in synthetic Q&A datasets. It was generated using [StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2) and the prompt listed below.
## Generation
As stated above, this dataset was created using StableBeluga2, by prompting the model to generate a question to fit a specific topic. The topics were taken from Wikipedia's [Level-4 Vital Articles](https://en.wikipedia.org/wiki/Wikipedia:Vital_articles/Level/4) as well as a small number of random articles from the [Electronics](https://en.wikipedia.org/wiki/Category:Electronics) and [Engineering](https://en.wikipedia.org/wiki/Category:Engineering) categories (not vital articles). The list of article names was created using [PetScan](https://petscan.wmflabs.org/); links to the queries are below.
The following prompt was used to generate each question: **"Drawing on your expertise regarding the topic '{topic}', create a thought-provoking question about it that goes beyond basic facts. Your question should encourage deep analysis, critical thinking, and profound understanding. Avoid a question that can be readily answered through a quick search, aiming instead for one that necessitates your expert insights."**
Here is the list of PetScan queries used to obtain the topic list (each topic was only used once):
| Category | All topics? | Query Link |
|--------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| People | Yes | [Link](https://petscan.wmflabs.org/?page_image=any&edits%5Bflagged%5D=both&interface_language=en&min_redlink_count=1&sparql=&show_redirects=both&sortby=none&wpiu=any&search_filter=&referrer_url=&common_wiki_other=&pagepile=&outlinks_no=&search_max_results=500&min_sitelink_count=&langs_labels_any=&cb_labels_no_l=1&output_compatability=catscan&ores_prob_from=&depth=0&sitelinks_yes=&common_wiki=auto&ns%5B0%5D=1&max_sitelink_count=&cb_labels_yes_l=1&before=&language=en&ores_prob_to=&templates_no=&labels_yes=&wikidata_prop_item_use=&cb_labels_any_l=1&labels_no=&active_tab=tab_templates_n_links&search_wiki=&ores_type=any&after=&edits%5Bbots%5D=both&sitelinks_any=&project=wikipedia&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FPeople&doit=) |
| History | Yes | [Link](https://petscan.wmflabs.org/?referrer_name=&langs_labels_yes=&wikidata_item=no&wpiu=any&search_max_results=500&larger=&labels_yes=&namespace_conversion=keep&langs_labels_any=&show_redirects=both&cb_labels_any_l=1&cb_labels_no_l=1&ores_prob_from=&max_sitelink_count=&smaller=&show_soft_redirects=both&links_to_no=&sitelinks_any=&cb_labels_yes_l=1&after=&depth=0&templates_yes=&wikidata_prop_item_use=&edits%5Bbots%5D=both&links_to_any=&manual_list_wiki=&interface_language=en&since_rev0=&subpage_filter=either&templates_any=&ns%5B0%5D=1&active_tab=tab_templates_n_links&templates_no=&project=wikipedia&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FHistory&language=en&sortby=none&min_sitelink_count=&max_age=&min_redlink_count=1&edits%5Banons%5D=both&doit=) |
| Geography | Yes | [Link](https://petscan.wmflabs.org/?common_wiki=auto&manual_list=&edits%5Banons%5D=both&outlinks_yes=&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FGeography&sortorder=ascending&minlinks=&langs_labels_yes=&combination=subset&ores_prob_from=&negcats=&wikidata_label_language=&cb_labels_any_l=1&ns%5B0%5D=1&search_filter=&categories=&search_wiki=&format=html&sitelinks_any=&max_age=&labels_yes=&output_limit=&wikidata_item=no&cb_labels_yes_l=1&search_max_results=500&maxlinks=&cb_labels_no_l=1&active_tab=tab_templates_n_links&links_to_any=&manual_list_wiki=&after=&language=en&page_image=any&pagepile=&depth=0&max_sitelink_count=®exp_filter=&referrer_name=&interface_language=en&project=wikipedia&output_compatability=catscan&wikidata_source_sites=&doit=) |
| Arts | Yes | [Link](https://petscan.wmflabs.org/?cb_labels_any_l=1&outlinks_yes=&depth=0&language=en&edits%5Bflagged%5D=both&referrer_name=&source_combination=&cb_labels_yes_l=1&search_max_results=500&manual_list=&ores_prob_from=&min_redlink_count=1&sitelinks_no=&sitelinks_yes=&cb_labels_no_l=1&sparql=&wikidata_label_language=&wikidata_item=no&links_to_all=&maxlinks=&outlinks_no=&wikidata_prop_item_use=&templates_yes=&project=wikipedia&active_tab=tab_templates_n_links&combination=subset&after=&wpiu=any&langs_labels_no=&interface_language=en&ores_type=any&links_to_any=&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FArts&larger=&wikidata_source_sites=&manual_list_wiki=&edits%5Bbots%5D=both&ns%5B0%5D=1&page_image=any&min_sitelink_count=&ores_prediction=any&pagepile=&doit=) |
| Philosophy and religion | Yes | [Link](https://petscan.wmflabs.org/?wpiu=any&edits%5Bbots%5D=both&langs_labels_yes=&sitelinks_no=&max_sitelink_count=&sortorder=ascending&before=&outlinks_no=&language=en&labels_yes=&show_disambiguation_pages=both®exp_filter=&show_redirects=both&project=wikipedia&depth=0&ores_prob_from=&page_image=any&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FPhilosophy_and_religion&min_sitelink_count=&labels_any=&edits%5Banons%5D=both&search_max_results=500&cb_labels_yes_l=1&cb_labels_no_l=1&ns%5B0%5D=1&sparql=&manual_list=&cb_labels_any_l=1&interface_language=en&sitelinks_any=&active_tab=tab_templates_n_links&wikidata_source_sites=&links_to_any=&templates_no=&links_to_all=&links_to_no=&ores_prediction=any&categories=&manual_list_wiki=&common_wiki=auto&doit=) |
| Everyday life | Yes | [Link](https://petscan.wmflabs.org/?templates_any=&edits%5Bbots%5D=both&templates_no=&maxlinks=&wikidata_label_language=&cb_labels_any_l=1&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FEveryday_life&edits%5Banons%5D=both&links_to_all=&ns%5B0%5D=1&larger=&search_filter=&language=en&show_disambiguation_pages=both&sitelinks_any=&langs_labels_any=&cb_labels_yes_l=1&cb_labels_no_l=1&wpiu=any&after=&project=wikipedia&sparql=&output_limit=&manual_list=&since_rev0=&langs_labels_no=&edits%5Bflagged%5D=both&wikidata_source_sites=&sitelinks_yes=&before=&combination=subset&sortorder=ascending&ores_type=any&min_redlink_count=1&referrer_url=&search_max_results=500&active_tab=tab_templates_n_links&wikidata_item=no&categories=&sortby=none&interface_language=en&doit=) |
| Society and social sciences | Yes | [Link](https://petscan.wmflabs.org/?before=&labels_yes=&output_limit=&labels_no=&active_tab=tab_templates_n_links&outlinks_yes=&templates_yes=&larger=&ores_prob_to=&search_max_results=500&cb_labels_no_l=1&max_age=&templates_any=&common_wiki=auto&max_sitelink_count=&minlinks=&cb_labels_any_l=1&show_soft_redirects=both&langs_labels_any=&interface_language=en&search_wiki=&links_to_any=&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FSociety_and_social_sciences&min_sitelink_count=&ores_prob_from=&search_filter=&ns%5B0%5D=1&common_wiki_other=&sitelinks_no=&sitelinks_any=&labels_any=&wikidata_item=no&wikidata_prop_item_use=&wikidata_label_language=&links_to_all=&language=en&output_compatability=catscan&categories=&cb_labels_yes_l=1&project=wikipedia&smaller=&doit=) |
| Biological and health sciences | Yes | [Link](https://petscan.wmflabs.org/?common_wiki=auto&links_to_all=&common_wiki_other=&search_filter=&format=html&project=wikipedia&negcats=&interface_language=en&labels_yes=&templates_no=&show_disambiguation_pages=both&pagepile=&cb_labels_no_l=1&search_max_results=500&sitelinks_any=&wikidata_label_language=&templates_yes=&max_age=&page_image=any&cb_labels_yes_l=1&manual_list=&language=en&larger=&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FBiology_and_health_sciences&source_combination=&cb_labels_any_l=1&min_redlink_count=1&active_tab=tab_templates_n_links&show_redirects=both&show_soft_redirects=both&ores_type=any&search_wiki=&max_sitelink_count=&labels_any=®exp_filter=&manual_list_wiki=&ores_prob_to=&ns%5B0%5D=1&edits%5Banons%5D=both&links_to_no=&links_to_any=&wikidata_prop_item_use=&doit=) |
| Physical sciences | Yes | [Link](https://petscan.wmflabs.org/?page_image=any&cb_labels_yes_l=1&sortby=none&interface_language=en&language=en&outlinks_yes=&cb_labels_any_l=1&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FPhysical_sciences&active_tab=tab_templates_n_links&sitelinks_no=&project=wikipedia&categories=&edits%5Bbots%5D=both&labels_any=&search_wiki=&cb_labels_no_l=1&show_soft_redirects=both&wikidata_item=no&depth=0&ores_prediction=any&search_query=&wikidata_label_language=&smaller=&langs_labels_yes=&edits%5Banons%5D=both&namespace_conversion=keep&show_disambiguation_pages=both&search_max_results=500&wikidata_prop_item_use=&wpiu=any&sitelinks_yes=&common_wiki=auto&show_redirects=both&langs_labels_any=&ns%5B0%5D=1&templates_any=&format=html&ores_prob_from=&min_redlink_count=1&output_compatability=catscan&ores_type=any&max_sitelink_count=&doit=) |
| Technology | Yes | [Link](https://petscan.wmflabs.org/?output_limit=&since_rev0=&categories=&labels_no=&manual_list=&labels_yes=&max_age=&langs_labels_any=&referrer_name=&search_max_results=500&outlinks_no=&cb_labels_yes_l=1&edits%5Bbots%5D=both&language=en&combination=subset&wikidata_source_sites=&langs_labels_no=&referrer_url=&cb_labels_any_l=1&interface_language=en&templates_any=&ores_prob_to=&search_wiki=&show_redirects=both&ns%5B0%5D=1&sitelinks_yes=&sitelinks_no=®exp_filter=&edits%5Banons%5D=both&active_tab=tab_templates_n_links&project=wikipedia&depth=0&negcats=&after=&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FTechnology&smaller=&show_disambiguation_pages=both&subpage_filter=either&cb_labels_no_l=1&outlinks_yes=&doit=) |
| Mathematics | Yes | [Link](https://petscan.wmflabs.org/?project=wikipedia&links_to_no=&max_age=&minlinks=&combination=subset&search_max_results=500&labels_no=&sortby=none&interface_language=en&active_tab=tab_templates_n_links&wpiu=any&larger=&wikidata_prop_item_use=&since_rev0=&cb_labels_no_l=1&ns%5B0%5D=1&common_wiki=auto&labels_any=&cb_labels_any_l=1&sortorder=ascending&show_disambiguation_pages=both&show_soft_redirects=both&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FMathematics&manual_list_wiki=&wikidata_label_language=&negcats=&links_to_all=&maxlinks=&after=&cb_labels_yes_l=1&edits%5Bbots%5D=both&output_limit=&langs_labels_any=&edits%5Banons%5D=both&referrer_url=&sitelinks_any=&ores_prob_to=&subpage_filter=either&output_compatability=catscan&ores_prob_from=&language=en&edits%5Bflagged%5D=both&doit=) |
| Engineering | **No** | [Link](https://petscan.wmflabs.org/?format=html®exp_filter=&since_rev0=&templates_no=&search_filter=&langs_labels_any=&min_sitelink_count=&outlinks_any=&wikidata_label_language=&show_redirects=both&links_to_no=&referrer_name=&min_redlink_count=1&langs_labels_no=&source_combination=&cb_labels_any_l=1&referrer_url=&sitelinks_yes=&ores_type=any&cb_labels_yes_l=1&cb_labels_no_l=1&ores_prob_from=&project=wikipedia&search_max_results=500&common_wiki_other=&wikidata_item=no&categories=Engineering&output_limit=&depth=2&manual_list=&interface_language=en&minlinks=&namespace_conversion=keep&subpage_filter=either&manual_list_wiki=&links_to_all=&edits%5Banons%5D=both&ns%5B0%5D=1&language=en&edits%5Bflagged%5D=both&doit=) |
| Electronics | **No** | [Link](https://petscan.wmflabs.org/?links_to_any=&language=en&labels_no=&outlinks_yes=&categories=Electronics&sortorder=ascending&combination=subset&links_to_no=&labels_any=&ns%5B0%5D=1&edits%5Bbots%5D=both&outlinks_no=&format=html&templates_yes=&wikidata_prop_item_use=&cb_labels_no_l=1&langs_labels_no=&active_tab=tab_categories&ores_type=any&templates_no=&common_wiki=auto&source_combination=&search_max_results=500&ores_prediction=any&show_disambiguation_pages=both&cb_labels_any_l=1&min_redlink_count=1&project=wikipedia&referrer_name=&after=&show_redirects=both&langs_labels_any=&depth=1&cb_labels_yes_l=1&interface_language=en&ores_prob_to=&negcats=&wikidata_item=no&max_age=&langs_labels_yes=&edits%5Bflagged%5D=both&doit=) |
### Post-Processing
A small amount of post-processing was done to the model's outputs. Here is a list of all modifications made:
- Strip leading and trailing whitespace (automatic)
- Filter generated questions for the following words: ["question", "expert", "sorry", "opinion"]. This was done to filter out rare instances where the model responded with something other than just the question, or didn't generate a question at all (see the sketch after this list).
- Manually fixing some stray tokens (`'s` was sometimes `'S` or `'t`, years sometimes had a random character inserted in them, and other rare cases of tokens that didn't make sense, even if the rest of the question was good)
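A minimal sketch of the word filter described above; the helper name and sample inputs are illustrative, not taken from the original pipeline:
```python
BANNED_WORDS = ["question", "expert", "sorry", "opinion"]

def is_clean_question(text: str) -> bool:
    # Reject generations mentioning any banned word (case-insensitive):
    # these usually signal meta-text or a refusal rather than a question.
    lowered = text.strip().lower()
    return bool(lowered) and not any(word in lowered for word in BANNED_WORDS)

generations = [
    "How did trade routes shape medieval urban growth?",
    "As an expert, I would pose the following question: ...",
]
print([g for g in generations if is_clean_question(g)])
# -> ['How did trade routes shape medieval urban growth?']
```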
|
Coriolan/smart-contract-vulnerabilities | 2023-10-03T22:41:10.000Z | [
"license:mit",
"region:us"
] | Coriolan | null | null | null | 0 | 19 | ---
license: mit
---
|
hieudinhpro/diffuision-dataset2 | 2023-10-05T16:32:39.000Z | [
"region:us"
] | hieudinhpro | null | null | null | 0 | 19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 138363142.634
num_examples: 9999
download_size: 138145195
dataset_size: 138363142.634
---
# Dataset Card for "diffuision-dataset2"
Copied from the dataset "zoheb/sketch-scene". |
CHOJW1004/kochatgpt_RM2 | 2023-10-04T07:45:31.000Z | [
"region:us"
] | CHOJW1004 | null | null | null | 0 | 19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 15758172.0
num_examples: 27594
- name: test
num_bytes: 1750908.0
num_examples: 3066
download_size: 9270108
dataset_size: 17509080.0
---
# Dataset Card for "kochatgpt_RM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
siddanshchawla/context_gen_data | 2023-10-08T21:58:21.000Z | [
"region:us"
] | siddanshchawla | null | null | null | 0 | 19 | Entry not found |
arsentd_lev | 2023-01-25T14:26:36.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:topic-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:apc",
"language:ajp",
"lic... | null | The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria. | @article{ArSenTDLev2018,
title={ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets},
author={Baly, Ramy and Khaddaj, Alaa and Hajj, Hazem and El-Hajj, Wassim and Bashir Shaban, Khaled},
journal={OSACT3},
pages={},
year={2018}} | null | 3 | 18 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- apc
- ajp
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- topic-classification
paperswithcode_id: arsentd-lev
pretty_name: ArSenTD-LEV
dataset_info:
features:
- name: Tweet
dtype: string
- name: Country
dtype:
class_label:
names:
'0': jordan
'1': lebanon
'2': syria
'3': palestine
- name: Topic
dtype: string
- name: Sentiment
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
'3': very_negative
'4': very_positive
- name: Sentiment_Expression
dtype:
class_label:
names:
'0': explicit
'1': implicit
'2': none
- name: Sentiment_Target
dtype: string
splits:
- name: train
num_bytes: 1233980
num_examples: 4000
download_size: 392666
dataset_size: 1233980
---
# Dataset Card for ArSenTD-LEV
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ArSenTD-LEV homepage](http://oma-project.com/)
- **Paper:** [ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets](https://arxiv.org/abs/1906.01830)
### Dataset Summary
The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria.
### Supported Tasks and Leaderboards
Sentiment analysis
### Languages
Arabic Levantine dialect
## Dataset Structure
### Data Instances
{'Country': 0,
'Sentiment': 3,
'Sentiment_Expression': 0,
'Sentiment_Target': 'هاي سوالف عصابات ارهابية',
'Topic': 'politics',
'Tweet': 'ثلاث تفجيرات في #كركوك الحصيلة قتيل و 16 جريح بدأت اكلاوات كركوك كانت امان قبل دخول القوات العراقية ، هاي سوالف عصابات ارهابية'}
### Data Fields
`Tweet`: the text content of the tweet \
`Country`: the country from which the tweet was collected ('jordan', 'lebanon', 'syria', 'palestine')\
`Topic`: the topic being discussed in the tweet (personal, politics, religion, sports, entertainment and others) \
`Sentiment`: the overall sentiment expressed in the tweet (very_negative, negative, neutral, positive and very_positive) \
`Sentiment_Expression`: the way how the sentiment was expressed: explicit, implicit, or none (the latter when sentiment is neutral) \
`Sentiment_Target`: the segment from the tweet to which sentiment is expressed. If sentiment is neutral, this field takes the 'none' value.
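The class-label fields are stored as integers; a minimal sketch of decoding them back to their names, assuming the dataset loads directly by its Hub id (see the licensing note below):
```python
from datasets import load_dataset

# A sketch, assuming the dataset loads directly by its Hub id.
ds = load_dataset("arsentd_lev", split="train")

row = ds[0]
# ClassLabel features carry the integer-to-name mapping.
print(ds.features["Country"].int2str(row["Country"]))      # e.g. 'jordan'
print(ds.features["Sentiment"].int2str(row["Sentiment"]))  # e.g. 'very_negative'
print(row["Topic"], row["Sentiment_Target"])
```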
### Data Splits
No standard splits are provided
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Make sure to read and agree to the [license](http://oma-project.com/ArSenL/ArSenTD_Lev_Intro)
### Citation Information
```
@article{baly2019arsentd,
title={Arsentd-lev: A multi-topic corpus for target-based sentiment analysis in arabic levantine tweets},
author={Baly, Ramy and Khaddaj, Alaa and Hajj, Hazem and El-Hajj, Wassim and Shaban, Khaled Bashir},
journal={arXiv preprint arXiv:1906.01830},
year={2019}
}
```
### Contributions
Thanks to [@moussaKam](https://github.com/moussaKam) for adding this dataset. |
text2log | 2022-11-03T16:15:15.000Z | [
"task_categories:translation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | The dataset contains about 100,000 simple English sentences selected and filtered from enTenTen15 and their translation into First Order Logic (FOL) Lambda Dependency-based Compositional Semantics using ccg2lambda. | @INPROCEEDINGS{9401852, author={Levkovskyi, Oleksii and Li, Wei}, booktitle={SoutheastCon 2021}, title={Generating Predicate Logic Expressions from Natural Language}, year={2021}, volume={}, number={}, pages={1-8}, doi={10.1109/SoutheastCon45413.2021.9401852}} | null | 2 | 18 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: text2log
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
dataset_info:
features:
- name: sentence
dtype: string
- name: fol_translation
dtype: string
splits:
- name: train
num_bytes: 10358134
num_examples: 101931
download_size: 9746473
dataset_size: 10358134
---
# Dataset Card for text2log
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/alevkov/text2log)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://github.com/alevkov
### Dataset Summary
The dataset contains 100,000 simple English sentences selected and filtered from `enTenTen15` and their translation into First Order Logic (FOL) using `ccg2lambda`.
### Supported Tasks and Leaderboards
'semantic-parsing': The dataset is used to train models that can generate FOL statements from natural language text.
### Languages
en-US
## Dataset Structure
### Data Instances
```
{
'clean':'All things that are new are good.',
'trans':'all x1.(_thing(x1) -> (_new(x1) -> _good(x1)))'
}
```
### Data Fields
- 'clean': a simple English sentence
- 'trans': the corresponding translation into Lambda Dependency-based Compositional Semantics
### Data Splits
No predefined train/test split is given. The authors used an 80/20 split.
## Dataset Creation
### Curation Rationale
The text2log dataset is used to improve FOL statement generation from natural-language text.
### Source Data
#### Initial Data Collection and Normalization
Short text samples selected from enTenTen15
#### Who are the source language producers?
See https://www.sketchengine.eu/ententen-english-corpus/
### Annotations
#### Annotation process
Machine generated using https://github.com/mynlp/ccg2lambda
#### Who are the annotators?
none
### Personal and Sensitive Information
The dataset does not contain personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
None given
### Citation Information
```bibtex
@INPROCEEDINGS{9401852,
author={Levkovskyi, Oleksii and Li, Wei},
booktitle={SoutheastCon 2021},
title={Generating Predicate Logic Expressions from Natural Language},
year={2021},
volume={},
number={},
pages={1-8},
doi={10.1109/SoutheastCon45413.2021.9401852}
}
```
### Contributions
Thanks to [@apergo-ai](https://github.com/apergo-ai) for adding this dataset. |
tweets_ar_en_parallel | 2023-01-25T14:54:55.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"language:en",
"license:apache-2.0",
"tweets-translation... | null | Twitter users often post parallel tweets—tweets that contain the same content but are
written in different languages. Parallel tweets can be an important resource for developing
machine translation (MT) systems among other natural language processing (NLP) tasks. This
resource is a result of a generic method for collecting parallel tweets. Using the method,
we compiled a bilingual corpus of English-Arabic parallel tweets and a list of Twitter accounts
who post English-Arabic tweets regularly. Additionally, we annotate a subset of Twitter accounts
with their countries of origin and topic of interest, which provides insights about the population
who post parallel tweets. | @inproceedings{Mubarak2020bilingualtweets,
title={Constructing a Bilingual Corpus of Parallel Tweets},
author={Mubarak, Hamdy and Hassan, Sabit and Abdelali, Ahmed},
booktitle={Proceedings of 13th Workshop on Building and Using Comparable Corpora (BUCC)},
address={Marseille, France},
year={2020}
} | null | 3 | 18 | ---
annotations_creators:
- expert-generated
- no-annotation
language_creators:
- found
language:
- ar
- en
license:
- apache-2.0
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: bilingual-corpus-of-arabic-english-parallel
pretty_name: Bilingual Corpus of Arabic-English Parallel Tweets
tags:
- tweets-translation
dataset_info:
- config_name: parallelTweets
features:
- name: ArabicTweetID
dtype: int64
- name: EnglishTweetID
dtype: int64
splits:
- name: test
num_bytes: 2667296
num_examples: 166706
download_size: 2937626
dataset_size: 2667296
- config_name: accountList
features:
- name: account
dtype: string
splits:
- name: test
num_bytes: 20108
num_examples: 1389
download_size: 2937626
dataset_size: 20108
- config_name: countryTopicAnnotation
features:
- name: account
dtype: string
- name: country
dtype:
class_label:
names:
'0': QA
'1': BH
'2': AE
'3': OM
'4': SA
'5': PL
'6': JO
'7': IQ
'8': Other
'9': EG
'10': KW
'11': SY
- name: topic
dtype:
class_label:
names:
'0': Gov
'1': Culture
'2': Education
'3': Sports
'4': Travel
'5': Events
'6': Business
'7': Science
'8': Politics
'9': Health
'10': Governoment
'11': Media
splits:
- name: test
num_bytes: 6036
num_examples: 200
download_size: 2937626
dataset_size: 6036
---
# Dataset Card for Bilingual Corpus of Arabic-English Parallel Tweets
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Bilingual Corpus of Arabic-English Parallel Tweets](https://alt.qcri.org/resources/bilingual_corpus_of_parallel_tweets)
- **Repository:**
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/2020.bucc-1.3/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Twitter users often post parallel tweets—tweets that contain the same content but are written in different languages. Parallel tweets can be an important resource for developing machine translation (MT) systems among other natural language processing (NLP) tasks. This resource is a result of a generic method for collecting parallel tweets. Using the method, we compiled a bilingual corpus of English-Arabic parallel tweets and a list of Twitter accounts who post English-Arabic tweets regularly. Additionally, we annotate a subset of Twitter accounts with their countries of origin and topic of interest, which provides insights about the population who post parallel tweets.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
parallelTweets:
```
{
"ArabicTweetID": 981111245209243600,
"EnglishTweetID": 981111450432401400
}
```
accountList:
```
{
'account': 'HukoomiQatar'
}
```
countryTopicAnnotation:
```
{
'account': 'HukoomiQatar',
'country': 'QA',
'topic': 'Gov'
}
```
### Data Fields
parallelTweets:
- `ArabicTweetID` (int)
- `EnglishTweetID` (int)
accountList:
- `account` (str)
countryTopicAnnotation:
- `account` (str)
- `country` (class label): One of:
  - "QA"
  - "BH"
  - "AE"
  - "OM"
  - "SA"
  - "PL"
  - "JO"
  - "IQ"
  - "Other"
  - "EG"
  - "KW"
  - "SY"
- `topic` (class label): One of:
  - "Gov"
  - "Culture"
  - "Education"
  - "Sports"
  - "Travel"
  - "Events"
  - "Business"
  - "Science"
  - "Politics"
  - "Health"
  - "Governoment"
  - "Media"
### Data Splits
All configurations have only one split: "test".
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
It is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@inproceedings{Mubarak2020bilingualtweets,
title={Constructing a Bilingual Corpus of Parallel Tweets},
author={Mubarak, Hamdy and Hassan, Sabit and Abdelali, Ahmed},
booktitle={Proceedings of 13th Workshop on Building and Using Comparable Corpora (BUCC)},
address={Marseille, France},
year={2020}
}
```
[More Information Needed]
### Contributions
Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset. |
wikitext_tl39 | 2022-11-03T16:15:46.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:fil",... | null | Large scale, unlabeled text dataset with 39 Million tokens in the training set. Inspired by the original WikiText Long Term Dependency dataset (Merity et al., 2016). TL means "Tagalog." Originally published in Cruz & Cheng (2019). | @article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
} | null | 0 | 18 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- fil
- tl
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: wikitext-tl-39
pretty_name: WikiText-TL-39
dataset_info:
features:
- name: text
dtype: string
config_name: wikitext-tl-39
splits:
- name: test
num_bytes: 46182996
num_examples: 376737
- name: train
num_bytes: 217182748
num_examples: 1766072
- name: validation
num_bytes: 46256674
num_examples: 381763
download_size: 116335234
dataset_size: 309622418
---
# Dataset Card for WikiText-TL-39
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Filipino Text Benchmarks](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:**
- **Paper:** [Evaluating language model finetuning techniques for low-resource languages](https://arxiv.org/abs/1907.00409)
- **Leaderboard:**
- **Point of Contact:** Jan Christian Blaise Cruz (jan_christian_cruz@dlsu.edu.ph)
### Dataset Summary
Large scale, unlabeled text dataset with 39 Million tokens in the training set. Inspired by the original WikiText Long Term Dependency dataset (Merity et al., 2016). TL means "Tagalog." Published in Cruz & Cheng (2019).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Filipino/Tagalog
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `text` (`str`)
The dataset is in plaintext and only has one field ("text") as it is compiled for language modeling.
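A minimal sketch of loading the configuration named in the metadata above, assuming the dataset loads directly by its Hub id:
```python
from datasets import load_dataset

# A sketch: the single configuration is named "wikitext-tl-39".
ds = load_dataset("wikitext_tl39", "wikitext-tl-39")
print(ds["train"][0]["text"][:200])  # raw plaintext for language modeling
```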
### Data Splits
Split | Documents | Tokens
------|-----------|-------
Train | 120,975 | 39M
Valid | 25,919 | 8M
Test | 25,921 | 8M
Please see the paper for more details on the dataset splits.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
Tagalog Wikipedia
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jcblaisecruz02](https://github.com/jcblaisecruz02) for adding this dataset. |
DDSC/dkhate | 2023-05-17T06:19:43.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"arxiv:1908.04531",
"region:us"
... | DDSC | null | null | null | 4 | 18 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: DKHate
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
extra_gated_prompt: "Content warning: This dataset contains harmful text (abusive language, hate speech)."
paperswithcode_id: dkhate
---
# Dataset Card for DKHate
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/offensivelanguageandhatespeechdetectionfordanish/](https://stromberg.ai/publication/offensivelanguageandhatespeechdetectionfordanish/)
- **Repository:** [https://github.com/StrombergNLP/dkhate](https://github.com/StrombergNLP/dkhate)
- **Paper:** [https://aclanthology.org/2020.lrec-1.430/](https://aclanthology.org/2020.lrec-1.430/), [https://arxiv.org/abs/1908.04531](https://arxiv.org/abs/1908.04531)
- **Direct Download**: [https://figshare.com/articles/dataset/Danish_Hate_Speech_Abusive_Language_data/12220805](https://figshare.com/articles/dataset/Danish_Hate_Speech_Abusive_Language_data/12220805)
- **Point of Contact:** [Leon Derczynski](mailto:leod@itu.dk)
### Dataset Summary
This dataset consists of anonymised Danish Twitter data that has been annotated for hate speech. All credits go to the authors of the following paper, who created the dataset:
[Offensive Language and Hate Speech Detection for Danish](https://aclanthology.org/2020.lrec-1.430) (Sigurbergsson & Derczynski, LREC 2020)
### Supported Tasks and Leaderboards
This dataset is suitable for hate speech detection.
* PwC leaderboard for Task A: [Hate Speech Detection on DKhate](https://paperswithcode.com/sota/hate-speech-detection-on-dkhate)
### Languages
This dataset is in Danish.
## Dataset Structure
### Data Instances
Every entry in the dataset has a tweet and an associated label.
### Data Fields
An entry in the dataset consists of the following fields:
- `text` (`str`): The tweet content.
- `label` (`str`): The label of the `text`. Can be either "OFF" or "NOT", being offensive and not offensive, respectively.
### Data Splits
`train` and `test` splits are available, identical to the original splits. There are 2,960 tweets in the training split and 329 in the test split.
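A minimal sketch of loading the data and checking the label balance, assuming access to the gated dataset has been granted:
```python
from collections import Counter
from datasets import load_dataset

# A sketch, assuming you are authenticated to the Hub and have
# accepted the content warning that gates this dataset.
ds = load_dataset("DDSC/dkhate")

for split in ("train", "test"):
    print(split, Counter(ds[split]["label"]))  # labels are 'OFF' or 'NOT'
```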
## Additional Information
### Dataset Curators
The curation of the dataset is solely due to the authors of [the original paper](https://aclanthology.org/2020.lrec-1.430/): Gudbjartur Ingi Sigurbergsson and Leon Derczynski.
### Licensing Information
The dataset is released under the CC BY 4.0 license.
### Citation Information
```
@inproceedings{sigurbergsson2020offensive,
title={Offensive Language and Hate Speech Detection for Danish},
author={Sigurbergsson, Gudbjartur Ingi and Derczynski, Leon},
booktitle={Proceedings of the 12th Language Resources and Evaluation Conference},
pages={3498--3508},
year={2020}
}
```
### Contributions
Thanks to [@saattrupdan](https://github.com/saattrupdan) for adding this dataset to the Hugging Face Hub. |
DELith/github-issues | 2021-11-21T15:58:45.000Z | [
"region:us"
] | DELith | null | null | null | 0 | 18 | Entry not found |
Sakonii/nepalitext-language-model-dataset | 2022-10-25T06:14:22.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:extended|oscar",
"source_datasets:extended|cc100",
"language:ne",
"license:cc0-1.0",
"regio... | Sakonii | null | null | null | 3 | 18 | ---
annotations_creators:
- no-annotation
language_creators:
- found
- other
language:
- ne
license:
- cc0-1.0
multilinguality:
- monolingual
source_datasets:
- extended|oscar
- extended|cc100
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: nepalitext-language-model-dataset
---
# Dataset Card for "nepalitext-language-model-dataset"
### Dataset Summary
"NepaliText" language modeling dataset is a collection of over 13 million Nepali text sequences (phrases/sentences/paragraphs) extracted by combining the datasets: [OSCAR](https://huggingface.co/datasets/oscar) , [cc100](https://huggingface.co/datasets/cc100) and a set of scraped Nepali articles on Wikipedia.
### Supported Tasks and Leaderboards
This dataset is intended to pre-train language models and word representations on Nepali Language.
### Languages
The data is focused on the Nepali language, but may contain instances of other languages as well.
## Dataset Structure
### Data Instances
An example:
```
{'text': 'घरेलु मैदानमा भएको च्याम्पियन्स लिगको दोस्रो लेगमा एथ्लेटिको मड्रिडले आर्सनललाई एक शून्यले हराउँदै समग्रमा दुई एकको अग्रताका साथ फाइनलमा प्रवेश गरेको हो ।\n'}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
### Data Splits
train|test|
----:|---:|
13141222|268189|
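Given the size of the corpus, streaming is a practical way to iterate over it. A minimal sketch with the `datasets` library:
```python
from datasets import load_dataset

# Stream the corpus instead of downloading all ~13M sequences up front
dataset = load_dataset(
    "Sakonii/nepalitext-language-model-dataset", split="train", streaming=True
)

# Peek at a few text sequences
for example in dataset.take(3):
    print(example["text"])
```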
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
Being extracted and scraped from a variety of internet sources, the data may contain personal and sensitive information. This must be considered before training deep learning models, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@Sakonii](https://github.com/Sakonii) for adding this dataset. |
SetFit/hate_speech_offensive | 2022-01-15T21:47:31.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 18 | # hate_speech_offensive
This dataset is a version of [hate_speech_offensive](https://huggingface.co/datasets/hate_speech_offensive), split into train and test sets.
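A minimal loading sketch (column names follow the usual SetFit dataset layout and should be checked against the printed schema):
```python
from datasets import load_dataset

dataset = load_dataset("SetFit/hate_speech_offensive")
print(dataset)          # shows the train/test splits and column names
print(dataset["train"][0])
```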
jakeazcona/short-text-labeled-emotion-classification | 2021-12-05T18:38:57.000Z | [
"region:us"
] | jakeazcona | null | null | null | 3 | 18 | Entry not found |
jhonparra18/spanish_billion_words_clean | 2022-01-27T04:27:24.000Z | [
"region:us"
] | jhonparra18 | null | null | null | 4 | 18 | Entry not found |
nickmuchi/financial-classification | 2023-01-27T23:44:03.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"size_categories:1K<n<10K",
"language:en",
"finance",
"region:us"
] | nickmuchi | null | null | null | 7 | 18 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
train-eval-index:
- config: sentences_50agree
- task: text-classification
- task_ids: multi_class_classification
- splits:
eval_split: train
- col_mapping:
sentence: text
label: target
size_categories:
- 1K<n<10K
tags:
- finance
---
## Dataset Creation
This [dataset](https://huggingface.co/datasets/nickmuchi/financial-classification) combines the Financial PhraseBank dataset with a financial text dataset from [Kaggle](https://www.kaggle.com/datasets/percyzheng/sentiment-classification-selflabel-dataset).
Given that the Financial PhraseBank dataset does not have a validation split, I thought this combination might help validate finance models, with the more recent Kaggle dataset also capturing the impact of COVID on financial earnings.
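A minimal loading sketch (split and column names are not documented above, so check the printed schema before relying on them):
```python
from datasets import load_dataset

# Load the combined dataset and inspect its schema
dataset = load_dataset("nickmuchi/financial-classification")
print(dataset)  # shows the available splits and column names

# Look at one example from the first available split
first_split = next(iter(dataset.values()))
print(first_split[0])
```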
qanastek/ELRC-Medical-V2 | 2022-10-24T17:15:17.000Z | [
"task_categories:translation",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"language:bg",
"language:cs",
"language:da",
"lan... | qanastek | null | @inproceedings{losch-etal-2018-european,
title = "European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management",
author = {L{\"o}sch, Andrea and
Mapelli, Val{\'e}rie and
Piperidis, Stelios and
Vasi{\c{l}}jevs, Andrejs and
Smal, Lilli and
Declerck, Thierry and
Schnur, Eileen and
Choukri, Khalid and
van Genabith, Josef},
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1213",
} | null | 7 | 18 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
- bg
- cs
- da
- de
- el
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
multilinguality:
- multilingual
pretty_name: ELRC-Medical-V2
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
task_ids:
- translation
---
# ELRC-Medical-V2 : European parallel corpus for healthcare machine translation
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://live.european-language-grid.eu/catalogue/project/2209
- **Repository:** https://github.com/qanastek/ELRC-Medical-V2/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
`ELRC-Medical-V2` is a parallel corpus for neural machine translation funded by the [European Commission](http://www.lr-coordination.eu/) and coordinated by the [German Research Center for Artificial Intelligence](https://www.dfki.de/web).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
In our case, the corpus consists of pairs of source and target sentences for 23 different languages of the European Union (EU), with English (EN) as the source language in every case.
**List of languages :** `Bulgarian (bg)`,`Czech (cs)`,`Danish (da)`,`German (de)`,`Greek (el)`,`Spanish (es)`,`Estonian (et)`,`Finnish (fi)`,`French (fr)`,`Irish (ga)`,`Croatian (hr)`,`Hungarian (hu)`,`Italian (it)`,`Lithuanian (lt)`,`Latvian (lv)`,`Maltese (mt)`,`Dutch (nl)`,`Polish (pl)`,`Portuguese (pt)`,`Romanian (ro)`,`Slovak (sk)`,`Slovenian (sl)`,`Swedish (sv)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset

NAME = "qanastek/ELRC-Medical-V2"

# Load every language pair (the dataset requires authentication)
dataset = load_dataset(NAME, use_auth_token=True)
print(dataset)

# Load one language pair and carve out a complementary 90/10 train/test split
dataset_train = load_dataset(NAME, "en-es", split='train[:90%]')
dataset_test = load_dataset(NAME, "en-es", split='train[90%:]')
print(dataset_train)
print(dataset_train[0])
print(dataset_test)
```
## Dataset Structure
### Data Instances
```plain
id,lang,source_text,target_text
1,en-bg,"TOC \o ""1-3"" \h \z \u Introduction 3","TOC \o ""1-3"" \h \z \u Въведение 3"
2,en-bg,The international humanitarian law and its principles are often not respected.,Международното хуманитарно право и неговите принципи често не се зачитат.
3,en-bg,"At policy level, progress was made on several important initiatives.",На равнище политики напредък е постигнат по няколко важни инициативи.
```
### Data Fields
**id** : The document identifier of type `Integer`.
**lang** : The pair of source and target language of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
### Data Splits
| Lang | # Docs | Avg. # Source Tokens | Avg. # Target Tokens |
|--------|-----------|------------------------|------------------------|
| bg | 13 149 | 23 | 24 |
| cs | 13 160 | 23 | 21 |
| da | 13 242 | 23 | 22 |
| de | 13 291 | 23 | 22 |
| el | 13 091 | 23 | 26 |
| es | 13 195 | 23 | 28 |
| et | 13 016 | 23 | 17 |
| fi | 12 942 | 23 | 16 |
| fr | 13 149 | 23 | 28 |
| ga | 412 | 12 | 12 |
| hr | 12 836 | 23 | 21 |
| hu | 13 025 | 23 | 21 |
| it | 13 059 | 23 | 25 |
| lt | 12 580 | 23 | 18 |
| lv | 13 044 | 23 | 19 |
| mt | 3 093 | 16 | 14 |
| nl | 13 191 | 23 | 25 |
| pl | 12 761 | 23 | 22 |
| pt | 13 148 | 23 | 26 |
| ro | 13 163 | 23 | 25 |
| sk | 12 926 | 23 | 20 |
| sl | 13 208 | 23 | 21 |
| sv | 13 099 | 23 | 21 |
|||||
| Total | 277 780 | 22.21 | 21.47 |
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://elrc-share.eu/repository/search/?q=mfsp%3A87ef9e5e8ac411ea913100155d026706e19a1a9f908b463c944490c36ba2f454&page=3).
### Source Data
#### Initial Data Collection and Normalization
The acquisition of bilingual data (from multilingual websites), normalization, cleaning, deduplication and identification of parallel documents were done with the [ILSP-FC tool](http://nlp.ilsp.gr/redmine/projects/ilsp-fc/wiki/Introduction). The [Maligna aligner](https://github.com/loomchild/maligna) was used for alignment of segments. Merging/filtering of segment pairs has also been applied.
#### Who are the source language producers?
All the data in this corpus was uploaded by [Vassilis Papavassiliou](mailto:vpapa@ilsp.gr) to [ELRC-Share](https://elrc-share.eu/repository/browse/bilingual-corpus-from-the-publications-office-of-the-eu-on-the-medical-domain-v2-en-fr/6b31b32e8ac411ea913100155d0267061547d9b3ec284584af19a2953baa8937/).
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__ELRC-Medical-V2__: Labrak Yanis, Dufour Richard
__Bilingual corpus from the Publications Office of the EU on the medical domain v.2 (EN-XX) Corpus__: [Vassilis Papavassiliou](mailto:vpapa@ilsp.gr) and [others](https://live.european-language-grid.eu/catalogue/project/2209).
### Licensing Information
<a rel="license" href="https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf"><img alt="Attribution 4.0 International (CC BY 4.0) License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf">Attribution 4.0 International (CC BY 4.0) License</a>.
### Citation Information
Please cite the following paper when using this dataset.
```latex
@inproceedings{losch-etal-2018-european,
    title = "European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management",
    author = {L{\"o}sch, Andrea and
      Mapelli, Val{\'e}rie and
      Piperidis, Stelios and
      Vasi{\c{l}}jevs, Andrejs and
      Smal, Lilli and
      Declerck, Thierry and
      Schnur, Eileen and
      Choukri, Khalid and
      van Genabith, Josef},
    booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
    month = may,
    year = "2018",
    address = "Miyazaki, Japan",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L18-1213",
}
```
|
sentence-transformers/msmarco-hard-negatives | 2022-08-18T16:04:34.000Z | [
"region:us"
] | sentence-transformers | null | null | null | 4 | 18 | # MS MARCO Passages Hard Negatives
[MS MARCO](https://microsoft.github.io/msmarco/) is a large-scale information retrieval corpus that was created based on real user search queries using the Bing search engine.
This dataset repository contains files that are helpful to train bi-encoder models e.g. using [sentence-transformers](https://www.sbert.net).
## Training Code
You can find here an example how these files can be used to train bi-encoders: [SBERT.net - MS MARCO - MarginMSE](https://www.sbert.net/examples/training/ms_marco/README.html#marginmse)
## cross-encoder-ms-marco-MiniLM-L-6-v2-scores.pkl.gz
This is a pickled dictionary in the format: `scores[qid][pid] -> cross_encoder_score`
It contains 160 million cross-encoder scores for (query, paragraph) pairs using the [cross-encoder/ms-marco-MiniLM-L-6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2) model.
## msmarco-hard-negatives.jsonl.gz
This is a jsonl file: Each line is a JSON object. It has the following format:
```
{"qid": 867436, "pos": [5238393], "neg": {"bm25": [...], ...}}
```
`qid` is the query-ID from MS MARCO, `pos` is a list with paragraph IDs for positive passages. `neg` is a dictionary where we mined hard negatives using different (mainly dense retrieval) systems.
It contains hard negatives mined from BM25 (using ElasticSearch) and the following dense models:
```
msmarco-distilbert-base-tas-b
msmarco-distilbert-base-v3
msmarco-MiniLM-L-6-v3
distilbert-margin_mse-cls-dot-v2
distilbert-margin_mse-cls-dot-v1
distilbert-margin_mse-mean-dot-v1
mpnet-margin_mse-mean-v1
co-condenser-margin_mse-cls-v1
distilbert-margin_mse-mnrl-mean-v1
distilbert-margin_mse-sym_mnrl-mean-v1
distilbert-margin_mse-sym_mnrl-mean-v2
co-condenser-margin_mse-sym_mnrl-mean-v1
```
From each system, 50 most similar paragraphs were mined for a given query.
|
vesteinn/icelandic-qa-NQiI | 2022-07-04T16:32:26.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:curated",
"language_creators:curated",
"multilinguality:monolingual",
"source_datasets:original",
"language:is",
"license:cc-by-sa-4.0",
"region:us"
] | vesteinn | \ | \ | null | 2 | 18 | ---
pretty_name: NQiI
annotations_creators:
- curated
language_creators:
- curated
language:
- is
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_id: nqii
---
# Natural Questions in Icelandic
|
McGill-NLP/feedbackQA | 2023-06-14T17:27:23.000Z | [
"license:apache-2.0",
"arxiv:2204.03025",
"region:us"
] | McGill-NLP | FeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users. It has two parts: the first part contains a conventional RQA dataset, whilst this repo contains the second part, which contains feedback(ratings and natural language explanations) for QA pairs. | null | 4 | 18 | ---
license: apache-2.0
---
# Dataset Card for FeedbackQA
[📄 Read](https://arxiv.org/pdf/2204.03025.pdf)<br>
[💾 Code](https://github.com/McGill-NLP/feedbackqa)<br>
[🔗 Webpage](https://mcgill-nlp.github.io/feedbackqa/)<br>
[💻 Demo](http://206.12.100.48:8080/)<br>
[🤗 Huggingface Dataset](https://huggingface.co/datasets/McGill-NLP/feedbackQA)<br>
[💬 Discussions](https://github.com/McGill-NLP/feedbackqa/discussions)
## Dataset Description
- **Homepage: https://mcgill-nlp.github.io/feedbackqa-data/**
- **Repository: https://github.com/McGill-NLP/feedbackqa-data/**
- **Paper:**
- **Leaderboard:**
- **Tasks: Question Answering**
### Dataset Summary
FeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users.
It has two parts: the first part contains a conventional RQA dataset,
whilst this repo contains the second part, which contains feedback(ratings and natural language explanations) for QA pairs.
### Languages
English
## Dataset Creation
For each question-answer pair, we collected multiple feedback annotations, each of which consists of a rating
(selected from excellent, good, could be improved, and bad) and a natural language explanation
elaborating on the strengths and/or weaknesses of the answer.
#### Initial Data Collection and Normalization
We scraped Covid-19-related content from official websites.
### Annotations
#### Who are the annotators?
Crowd-workers
### Licensing Information
Apache 2.0
### Contributions
[McGill-NLP](https://github.com/McGill-NLP)
| |
hazal/electronic-radiology-phd-thesis-trR | 2022-08-10T11:13:34.000Z | [
"language:tr",
"region:us"
] | hazal | null | null | null | 2 | 18 | ---
language:
- tr
--- |
Matthijs/snacks-detection | 2022-04-12T14:26:04.000Z | [
"task_categories:object-detection",
"license:cc-by-4.0",
"region:us"
] | Matthijs | null | null | null | 0 | 18 | ---
pretty_name: Snacks (Detection)
task_categories:
- object-detection
- computer-vision
license: cc-by-4.0
---
# Dataset Card for Snacks (Detection)
## Dataset Summary
This is a dataset of 20 different types of snack foods that accompanies the book [Machine Learning by Tutorials](https://www.raywenderlich.com/books/machine-learning-by-tutorials/v2.0).
The images were taken from the [Google Open Images dataset](https://storage.googleapis.com/openimages/web/index.html), release 2017_11.
## Dataset Structure
Included in the **data** folder are three CSV files with bounding box annotations for the images in the dataset, although not all images have annotations and some images have multiple annotations.
The columns in the CSV files are listed below, with a short loading sketch after the list:
- `image_id`: the filename of the image without the .jpg extension
- `x_min, x_max, y_min, y_max`: normalized bounding box coordinates, i.e. in the range [0, 1]
- `class_name`: the class that belongs to the bounding box
- `folder`: the class that belongs to the image as a whole, which is also the name of the folder that contains the image
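A minimal sketch for reading one annotation and mapping its normalized coordinates back to pixels (the CSV file name is hypothetical, and the images must be downloaded from the separate image repository noted below):
```python
import pandas as pd
from PIL import Image

# Hypothetical file name -- use one of the three CSVs shipped in the data folder
annotations = pd.read_csv("data/annotations.csv")
row = annotations.iloc[0]

# Images live in per-class folders, matching the `folder` column
image = Image.open(f"{row['folder']}/{row['image_id']}.jpg")

# Convert normalized [0, 1] coordinates back to pixel values
width, height = image.size
box = (row["x_min"] * width, row["y_min"] * height,
       row["x_max"] * width, row["y_max"] * height)
print(row["class_name"], box)
```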
The class names are:
```nohighlight
apple
banana
cake
candy
carrot
cookie
doughnut
grape
hot dog
ice cream
juice
muffin
orange
pineapple
popcorn
pretzel
salad
strawberry
waffle
watermelon
```
**Note:** The image files are not part of this repo but [can be found here](https://huggingface.co/datasets/Matthijs/snacks).
### Data Splits
Train, Test, Validation
## Licensing Information
Just like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license.
The images are listed as having a [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/) license.
The annotations are licensed by Google Inc. under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
aakanksha/udpos | 2022-04-27T19:21:57.000Z | [
"region:us"
] | aakanksha | Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. The annotation consists in a linguistically motivated word segmentation; a morphological layer comprising lemmas, universal part-of-speech tags, and standardized morphological features; and a syntactic layer focusing on syntactic relations between predicates, arguments and modifiers. | @inproceedings{nivre-etal-2020-universal,
title = "{U}niversal {D}ependencies v2: An Evergrowing Multilingual Treebank Collection",
author = "Nivre, Joakim and
de Marneffe, Marie-Catherine and
Ginter, Filip and
Haji{\v{c}}, Jan and
Manning, Christopher D. and
Pyysalo, Sampo and
Schuster, Sebastian and
Tyers, Francis and
Zeman, Daniel",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.497",
pages = "4034--4043",
abstract = "Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. The annotation consists in a linguistically motivated word segmentation; a morphological layer comprising lemmas, universal part-of-speech tags, and standardized morphological features; and a syntactic layer focusing on syntactic relations between predicates, arguments and modifiers. In this paper, we describe version 2 of the universal guidelines (UD v2), discuss the major changes from UD v1 to UD v2, and give an overview of the currently available treebanks for 90 languages.",
language = "English",
ISBN = "979-10-95546-34-4",
} | null | 0 | 18 | POS tagging on the Universal Dependencies dataset
|
peandrew/conceptnet_en_nomalized | 2022-05-08T03:11:02.000Z | [
"region:us"
] | peandrew | null | null | null | 1 | 18 | This is the English part of ConceptNet; we have removed unnecessary information. |
BeIR/quora | 2022-10-23T06:03:40.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 1 | 18 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
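A minimal loading sketch (assuming the Hub layout used by the BeIR organisation: `corpus` and `queries` configurations in the main repository, with relevance judgments in a companion `-qrels` repository):
```python
from datasets import load_dataset

# Corpus and queries are separate configurations of the main repository
corpus = load_dataset("BeIR/quora", "corpus", split="corpus")
queries = load_dataset("BeIR/quora", "queries", split="queries")

# Relevance judgments live in a companion "-qrels" repository
qrels = load_dataset("BeIR/quora-qrels", split="test")

print(corpus[0])
print(queries[0])
print(qrels[0])
```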
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
naver-clova-ix/synthdog-ja | 2022-07-22T06:43:47.000Z | [
"region:us"
] | naver-clova-ix | null | null | null | 1 | 18 | Entry not found |
BirdL/DallData | 2022-09-28T21:12:02.000Z | [
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | BirdL | null | null | null | 0 | 18 | ---
annotations_creators: []
language: []
language_creators: []
license:
- other
multilinguality: []
pretty_name: DALL-E Latent Space Mapping
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- unconditional-image-generation
task_ids: []
---
DallData is a non-exhaustive look into DALL-E Mega(1)'s unconditional image generation. This is under the [BirdL-AirL License.](https://huggingface.co/spaces/BirdL/license/)
(1)
```bibtext
@misc{Dayma_DALL·E_Mini_2021,
author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
doi = {10.5281/zenodo.5146400},
month = {7},
title = {DALL·E Mini},
url = {https://github.com/borisdayma/dalle-mini},
year = {2021}
}
``` |
truongpdd/vietnews-dataset | 2022-09-09T04:54:20.000Z | [
"region:us"
] | truongpdd | null | null | null | 0 | 18 | Entry not found |
ywchoi/pubmed_abstract_4 | 2022-09-13T01:04:18.000Z | [
"region:us"
] | ywchoi | null | null | null | 0 | 18 | Entry not found |
efederici/mt_nap_it | 2022-10-28T14:32:26.000Z | [
"task_categories:translation",
"size_categories:unknown",
"language:it",
"license:unknown",
"conditional-text-generation",
"region:us"
] | efederici | null | null | null | 1 | 18 | ---
language:
- it
license:
- unknown
size_categories:
- unknown
task_categories:
- translation
task_ids: []
pretty_name: mt_nap_it
tags:
- conditional-text-generation
---
# Dataset Card for mt_en_it
## Table of Contents
- [Dataset Card for mt_en_it](#dataset-card-for-mt-en-it)
- [Table of Contents](#table-of-contents)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
### Dataset Summary
This dataset comprises traditional Neapolitan songs from [napoligrafia](https://www.napoligrafia.it) translated into Italian.
### Languages
- italian-to-neapolitan
### Data Instances
A sample from the dataset.
```python
{
'url': "url",
'napoletano': "o, quacche ghiuorno, 'a frennesia mme piglia",
'italiano': "o, qualche giorno, la rabbia mi prende"
}
```
The text is provided without further preprocessing or tokenization.
### Data Fields
- `url`: source URL.
- `napoletano`: Neapolitan text.
- `italiano`: Italian text.
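A minimal loading sketch (the split name is an assumption):
```python
from datasets import load_dataset

# Load the Italian-Neapolitan pairs
dataset = load_dataset("efederici/mt_nap_it", split="train")

pair = dataset[0]
print(pair["italiano"], "->", pair["napoletano"])
```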
### Dataset Creation
The dataset was created by scraping [napoligrafia](https://www.napoligrafia.it) songs. |
michaljunczyk/pl-asr-bigos | 2023-09-23T15:17:04.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:other",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:other",
"mu... | michaljunczyk | BIGOS (Benchmark Intended Grouping of Open Speech) dataset goal is to simplify access to the openly available Polish speech corpora and
enable systematic benchmarking of open and commercial Polish ASR systems. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 0 | 18 | ---
annotations_creators:
- crowdsourced
- expert-generated
- other
- machine-generated
language:
- pl
language_creators:
- crowdsourced
- expert-generated
- other
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: pl-asr-bigos
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|librispeech_asr
- extended|common_voice
tags:
- benchmark
- polish
- asr
- speech
task_categories:
- automatic-speech-recognition
task_ids: []
extra_gated_prompt: |-
Original datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. Below are the links to the license terms and datasets the specific license type applies to:
* [Creative Commons 0](https://creativecommons.org/share-your-work/public-domain/cc0) which applies to [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0)
* [Creative Commons By Attribution Share Alike 4.0](https://creativecommons.org/licenses/by-sa/4.0/), which applies to [Clarin Cyfry](https://clarin-pl.eu/dspace/handle/11321/317), [Azon acoustic speech resources corpus](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy,53293/).
* [Creative Commons By Attribution 3.0](https://creativecommons.org/licenses/by/3.0/), which applies to [CLARIN Mobile database](https://clarin-pl.eu/dspace/handle/11321/237), [CLARIN Studio database](https://clarin-pl.eu/dspace/handle/11321/236), [PELCRA Spelling and Numbers Voice Database](http://pelcra.pl/new/snuv) and [FLEURS dataset](https://huggingface.co/datasets/google/fleurs)
* [Creative Commons By Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), which applies to [Multilingual Librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) and [Poly AI Minds 14](https://huggingface.co/datasets/PolyAI/minds14)
* [Proprietiary License of Munich AI Labs dataset](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset)
* Public domain mark, which applies to [PWR datasets](https://www.ii.pwr.edu.pl/~sas/ASR/)
To use selected dataset, you also need to fill in the access forms on the specific datasets pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0
extra_gated_fields:
I hereby confirm that I have read and accepted the license terms of datasets comprising BIGOS corpora: checkbox
I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset: checkbox
---
# Dataset Card for Polish ASR BIGOS corpora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:https://huggingface.co/datasets/michaljunczyk/pl-asr-bigos**
- **Repository: https://github.com/goodmike31/pl-asr-bigos-tools**
- **Paper:https://www.researchgate.net/publication/370983845_BIGOS_-_Benchmark_Intended_Grouping_of_Open_Speech_Corpora_for_Polish_Automatic_Speech_Recognition**
- **Leaderboard:https://huggingface.co/spaces/michaljunczyk/pl-asr-bigos-benchmark/**
- **Point of Contact:michal.junczyk@amu.edu.pl**
### Dataset Summary
The BIGOS (Benchmark Intended Grouping of Open Speech) corpora aim to simplify access to and use of publicly available ASR speech datasets for Polish.<br>
The initial release consists of a test split with 1900 recordings and original transcriptions extracted from 10 publicly available datasets.
### Supported Tasks and Leaderboards
The leaderboard with benchmarks of publicly available ASR systems supporting Polish is [under construction](https://huggingface.co/spaces/michaljunczyk/pl-asr-bigos-benchmark/).<br>
Evaluation results for 3 commercial and 5 freely available systems can be found in the [paper](https://www.researchgate.net/publication/370983845_BIGOS_-_Benchmark_Intended_Grouping_of_Open_Speech_Corpora_for_Polish_Automatic_Speech_Recognition).
### Languages
Polish
## Dataset Structure
The dataset consists of audio recordings in WAV format and corresponding metadata.<br>
Audio and metadata can be used in raw format (TSV) or via the Hugging Face datasets library.
### Data Instances
1900 audio files with original transcriptions are available in the "test" split.<br>
This constitutes 1.6% of the total available transcribed speech in the 10 source datasets considered in the initial release.
### Data Fields
Available fields (a short evaluation sketch follows the field lists):
* file_id - file identifier
* dataset_id - source dataset identifier
* audio - binary representation of audio file
* ref_original - original transcription of audio file
* hyp_whisper_cloud - ASR hypothesis (output) from Whisper Cloud system
* hyp_google_default - ASR hypothesis (output) from Google ASR system, default model
* hyp_azure_default - ASR hypothesis (output) from Azure ASR system, default model
* hyp_whisper_tiny - ASR hypothesis (output) from Whisper tiny model
* hyp_whisper_base - ASR hypothesis (output) from Whisper base model
* hyp_whisper_small - ASR hypothesis (output) from Whisper small model
* hyp_whisper_medium - ASR hypothesis (output) from Whisper medium model
* hyp_whisper_large - ASR hypothesis (output) from Whisper large (V2) model
<br><br>
Fields to be added in the next release:
* ref_spoken - manual transcription in a spoken format (without normalization)
* ref_written - manual transcription in a written format (with normalization)
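Using the reference and hypothesis fields listed above, a minimal WER evaluation sketch (the dataset is gated, so the license terms must be accepted on the Hub first; `jiwer` is one common choice for computing WER):
```python
from datasets import load_dataset
import jiwer

# Load the test split (requires accepting the license terms on the Hub first)
dataset = load_dataset("michaljunczyk/pl-asr-bigos", split="test")

# Word Error Rate of one system against the original transcriptions
wer = jiwer.wer(dataset["ref_original"], dataset["hyp_whisper_large"])
print(f"Whisper large WER vs. original references: {wer:.3f}")
```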
### Data Splits
Initial release contains only "test" split.<br>
"Dev" and "train" splits will be added in the next release.
## Dataset Creation
### Curation Rationale
[Polish ASR Speech Data Catalog](https://github.com/goodmike31/pl-asr-speech-data-survey) was used to identify suitable datasets which can be repurposed and included in the BIGOS corpora.<br>
The following mandatory criteria were considered:
* Dataset must be downloadable.
* The license must allow for free, noncommercial use.
* Transcriptions must be available and align with the recordings.
* The sampling rate of audio recordings must be at least 8 kHz.
* Audio encoding using a minimum of 16 bits per sample.
### Source Data
10 datasets that meet the criteria were chosen as sources for the BIGOS dataset.
* The Common Voice dataset (mozilla-common-voice-19)
* The Multilingual LibriSpeech (MLS) dataset (fair-mls-20)
* The Clarin Studio Corpus (clarin-pjatk-studio-15)
* The Clarin Mobile Corpus (clarin-pjatk-mobile-15)
* The Jerzy Sas PWR datasets from Politechnika Wrocławska (pwr-viu-unk, pwr-shortwords-unk, pwr-maleset-unk). More info [here](https://www.ii.pwr.edu.pl/)
* The Munich-AI Labs Speech corpus (mailabs-19)
* The AZON Read and Spontaneous Speech Corpora (pwr-azon-spont-20, pwr-azon-read-20) More info [here](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy)
#### Initial Data Collection and Normalization
Source text and audio files were extracted and encoded in a unified format.<br>
Dataset-specific transcription norms are preserved, including punctuation and casing. <br>
To strike a balance in the evaluation dataset and to facilitate the comparison of Word Error Rate (WER) scores across multiple datasets, 200 samples are randomly selected from each corpus. <br>
The only exception is ’pwr-azon-spont-20’, which contains significantly longer recordings and utterances; therefore, only 100 samples are selected. <br>
#### Who are the source language producers?
1. Clarin corpora - Polish Japanese Academy of Technology
2. Common Voice - Mozilla foundation
3. Multlingual librispeech - Facebook AI research lab
4. Jerzy Sas and AZON datasets - Politechnika Wrocławska
Please refer to the [paper](https://www.researchgate.net/publication/370983845_BIGOS_-_Benchmark_Intended_Grouping_of_Open_Speech_Corpora_for_Polish_Automatic_Speech_Recognition) for more details.
### Annotations
#### Annotation process
Current release contains original transcriptions.
Manual transcriptions are planned for subsequent releases.
#### Who are the annotators?
Depends on the source dataset.
### Personal and Sensitive Information
This corpus does not contain PII or Sensitive Information.
All IDs pf speakers are anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
To be updated.
### Discussion of Biases
To be updated.
### Other Known Limitations
The dataset in the initial release contains only a subset of recordings from original datasets.
## Additional Information
### Dataset Curators
Original authors of the source datasets - please refer to [source-data](#source-data) for details.
Michał Junczyk (michal.junczyk@amu.edu.pl) - curator of BIGOS corpora.
### Licensing Information
The BIGOS corpora is available under [Creative Commons By Attribution Share Alike 4.0 license.](https://creativecommons.org/licenses/by-sa/4.0/)
Original datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. Below are the links to the license terms and datasets the specific license type applies to:
* [Creative Commons 0](https://creativecommons.org/share-your-work/public-domain/cc0) which applies to [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0)
* [Creative Commons By Attribution Share Alike 4.0](https://creativecommons.org/licenses/by-sa/4.0/), which applies to [Clarin Cyfry](https://clarin-pl.eu/dspace/handle/11321/317), [Azon acoustic speech resources corpus](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy,53293/).
* [Creative Commons By Attribution 3.0](https://creativecommons.org/licenses/by/3.0/), which applies to [CLARIN Mobile database](https://clarin-pl.eu/dspace/handle/11321/237), [CLARIN Studio database](https://clarin-pl.eu/dspace/handle/11321/236), [PELCRA Spelling and Numbers Voice Database](http://pelcra.pl/new/snuv) and [FLEURS dataset](https://huggingface.co/datasets/google/fleurs)
* [Creative Commons By Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), which applies to [Multilingual Librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) and [Poly AI Minds 14](https://huggingface.co/datasets/PolyAI/minds14)
* [Proprietiary License of Munich AI Labs dataset](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset)
* Public domain mark, which applies to [PWR datasets](https://www.ii.pwr.edu.pl/~sas/ASR/)
### Citation Information
TODO
### Contributions
Thanks to [@goodmike31](https://github.com/goodmike31) for adding this dataset. |
Nadav/MiniScans | 2022-11-15T14:15:58.000Z | [
"region:us"
] | Nadav | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: evaluation
1: train
splits:
- name: test
num_bytes: 1655444336.229
num_examples: 15159
- name: train
num_bytes: 34770710847.12
num_examples: 300780
download_size: 38233031644
dataset_size: 36426155183.349
---
# Dataset Card for "MiniScans"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thennal/msc | 2022-12-08T06:49:31.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:ml",
"license:cc-by-sa-4.0",
"region:us"
] | thennal | null | null | null | 1 | 18 | ---
annotations_creators:
- crowdsourced
language:
- ml
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Swathanthra Malayalam Computing Malayalam Speech Corpus
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
dataset_info:
features:
- name: speechid
dtype: string
- name: speaker_id
dtype: string
- name: review_score
dtype: int64
- name: transcript
dtype: string
- name: category
dtype: string
- name: speaker_gender
dtype: string
- name: speaker_age
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
splits:
- name: train
num_bytes: 581998721.306
num_examples: 1541
download_size: 422643542
dataset_size: 581998721.306
---
# SMC Malayalam Speech Corpus
Malayalam Speech Corpus (MSC) is a repository of curated speech samples collected using the MSC web application, released by Swathanthra Malayalam Computing.
The official blog post and source data can be found at [https://blog.smc.org.in/malayalam-speech-corpus/](https://blog.smc.org.in/malayalam-speech-corpus/).
## Dataset Description
- **Homepage:** [https://blog.smc.org.in/malayalam-speech-corpus/](https://blog.smc.org.in/malayalam-speech-corpus/)
### Dataset Summary
The first version of Malayalam Speech Corpus contains 1541 speech samples from 75 contributors amounting to 1:38:16 hours of speech. It has 482 unique sentences, 1400 unique words, 553 unique syllables and 48 unique phonemes.
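A minimal loading sketch (most ASR models expect 16 kHz input, so the 48 kHz audio is resampled on the fly):
```python
from datasets import load_dataset, Audio

dataset = load_dataset("thennal/msc", split="train")

# Resample from 48 kHz to 16 kHz at access time
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

sample = dataset[0]
print(sample["transcript"])
print(sample["audio"]["array"].shape)
```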
|
rcds/swiss_legislation | 2023-07-20T07:36:07.000Z | [
"task_categories:text-classification",
"task_categories:translation",
"size_categories:100K<n<1M",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"region:us"
] | rcds | This dataset contains Swiss law articles | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 5 | 18 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
- translation
language:
- de
- fr
- it
pretty_name: Swiss Legislation
size_categories:
- 100K<n<1M
---
# Dataset Card for Swiss Legislation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Legislation is a multilingual, diachronic dataset of 36K Swiss laws. This dataset is part of a challenging Information Retrieval task.
### Supported Tasks and Leaderboards
### Languages
The total number of texts in the dataset is 35,698. The dataset is saved in _lexfind_v2.jsonl_ format.
Switzerland has four official languages: German, French, Italian and Romansh, with some additional English laws being represented. Laws are written by legal experts.
| Language | Subset | Number of Documents |
|------------|------------|----------------------|
| German | **de** | 18K |
| French | **fr** | 11K |
| Italian | **it** | 6K |
| Romansh | **rm** | 534 |
| English | **en** | 207 |
## Dataset Structure
### Data Fields
Each entry in the dataset is a dictionary with the following keys (a loading sketch follows the list):
- `canton`: the canton of origin of the legislation
- example: "ag"
- `language`: the language of the legislation
- example: "de"
- `uuid`: a unique identifier for the legislation
- example: "ec312f57-05fe-4552-ba50-8c9c269e0f3b"
- `title`: the title of the legislation
- example: "Gesetz über die Geoinformation im Kanton Aargau"
- `short`: a short description of the legislation
- example: "Kantonales Geoinformationsgesetz"
- `abbreviation`: an abbreviation for the legislation
- example: "KGeoIG"
- `sr_number`: a reference number for the legislation
- example: "740.100"
- `is_active`: whether the legislation is currently in force
- example: true
- `version_active_since`: the date since when the legislation's current version is active
- example: "2021-09-01"
- `family_active_since`: the date since when the legislation's current version's family is active
- example: "2011-05-24"
- `version_inactive_since`: the date since when the legislation's current version is inactive
- example: null
- `version_found_at`: the date the legislation's current version was found
- example: "2021-09-01"
- `pdf_url`: a link to the legislation's pdf
- example: "https://www.lexfind.ch/tol/1557/de"
- `html_url`: a link to the legislation's html
- example: "https://gesetzessammlungen.ag.ch/app/de/texts_of_law/740.100")_
- `pdf_content`: the legislation's pdf content
- example: "740.100 - Gesetz über..."
- `html_content`: the legislation's html content
- example: ""
- `changes`: a list of changes made to the legislation
- example: []
- `history`: a list of the legislation's history
- example: []
- `quotes`: a list of quotes from the legislation
- example: []
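A minimal loading sketch (the split name is an assumption; streaming avoids downloading the full corpus):
```python
from datasets import load_dataset

# Stream the laws and keep only German-language cantonal legislation
dataset = load_dataset("rcds/swiss_legislation", split="train", streaming=True)

for law in dataset:
    if law["language"] == "de" and law["canton"] != "ch":
        print(law["canton"], law["sr_number"], law["title"])
        break
```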
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
1. 'ch': Switzerland (Federal) - 15840
2. 'fr': Fribourg - 1633
3. 'be': Bern - 1344
4. 'vs': Valais - 1328
5. 'gr': Graubünden - 1205
6. 'ne': Neuchâtel - 1115
7. 'zh': Zurich - 974
8. 'bs': Basel-Stadt - 899
9. 'bl': Basel-Landschaft - 863
10. 'vd': Vaud - 870
11. 'ge': Geneva - 837
12. 'sg': St. Gallen - 764
13. 'ju': Jura - 804
14. 'zg': Zug - 632
15. 'ti': Ticino - 627
16. 'lu': Lucerne - 584
17. 'so': Solothurn - 547
18. 'ow': Obwalden - 513
19. 'ik': Interkantonal - 510
20. 'sh': Schaffhausen - 469
21. 'gl': Glarus - 467
22. 'tg': Thurgau - 453
23. 'sz': Schwyz - 423
24. 'ai': Appenzell Innerrhoden - 416
25. 'ag': Aargau - 483
26. 'ar': Appenzell Ausserrhoden - 330
27. 'nw': Nidwalden - 401
28. 'ur': Uri - 367
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed format (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Nadav/CaribbeanScans | 2023-02-21T01:28:57.000Z | [
"region:us"
] | Nadav | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': evaluation
'1': train
splits:
- name: train
num_bytes: 152948913099.784
num_examples: 1675172
- name: test
num_bytes: 9056919525.81
num_examples: 87721
download_size: 57344797328
dataset_size: 162005832625.594
---
# Dataset Card for "CaribbeanScans"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kastan/rlhf-qa-comparisons | 2023-02-27T19:31:09.000Z | [
"region:us"
] | kastan | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Chosen
dtype: string
- name: Rejected
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 172575
num_examples: 337
download_size: 58298
dataset_size: 172575
---
# Dataset Card for "rlhf-qa-comparisons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
boragokbakan/entity_disambiguation | 2023-03-10T19:29:56.000Z | [
"task_categories:question-answering",
"language:en",
"license:afl-3.0",
"entity disambiguation",
"disambiguation",
"ned",
"GENRE",
"BLINK",
"region:us"
] | boragokbakan | null | @inproceedings{decao2021autoregressive,
author = {Nicola {De Cao} and
Gautier Izacard and
Sebastian Riedel and
Fabio Petroni},
title = {Autoregressive Entity Retrieval},
booktitle = {9th International Conference on Learning Representations, {ICLR} 2021,
Virtual Event, Austria, May 3-7, 2021},
publisher = {OpenReview.net},
year = {2021},
url = {https://openreview.net/forum?id=5k8F6UU39V},
} | null | 2 | 18 | ---
license: afl-3.0
language:
- en
tags:
- entity disambiguation
- disambiguation
- ned
- GENRE
- BLINK
pretty_name: Entity Disambiguation
task_categories:
- question-answering
---
Entity Disambiguation datasets as provided in the [GENRE](https://github.com/facebookresearch/GENRE/blob/main/scripts_genre/download_all_datasets.sh) repo. The dataset can be used to train and evaluate entity disambiguators.
The datasets can be imported easily as follows:
```
from datasets import load_dataset
ds = load_dataset("boragokbakan/entity_disambiguation", "aida")
```
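For the larger configurations (see the note at the bottom of this card), a streaming sketch such as the one below avoids downloading everything up front; the `train` split name is an assumption:
```python
from datasets import load_dataset

# Stream the ~10GB BLINK config instead of downloading it in full;
# the "train" split name is assumed, not confirmed by this card.
ds = load_dataset("boragokbakan/entity_disambiguation", "blink", streaming=True)

for i, example in enumerate(ds["train"]):
    print(example)
    if i == 2:  # peek at the first few records only
        break
```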
Available dataset names are:
- `blink`
- `ace2004`
- `aida`
- `aquaint`
- `clueweb`
- `msnbc`
- `wiki`
**Note:** As the BLINK training set is very large in size (~10GB), it is advised to set `streaming=True` when calling `load_dataset`. |
Francesco/cotton-20xz5 | 2023-03-30T09:20:12.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': cotton
'1': G-arboreum
'2': G-barbadense
'3': G-herbaceum
'4': G-hirsitum
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: cotton-20xz5
tags:
- rf100
---
# Dataset Card for cotton-20xz5
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/cotton-20xz5
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
cotton-20xz5
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
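As an illustrative sketch (not part of the original card), a COCO `[x, y, width, height]` box can be converted to corner coordinates like this:
```python
def coco_to_xyxy(bbox):
    """Convert a COCO-style [x, y, width, height] box to [x1, y1, x2, y2] corners."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

print(coco_to_xyxy([302.0, 109.0, 73.0, 52.0]))  # first box from the example above
```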
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/cotton-20xz5
### Citation Information
```
@misc{ cotton-20xz5,
title = { cotton 20xz5 Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/cotton-20xz5 } },
url = { https://universe.roboflow.com/object-detection/cotton-20xz5 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
mstz/isolet | 2023-04-20T09:50:41.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"isolet",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_isolet_54,
author = {Cole,Ron & Fanty,Mark},
title = {{ISOLET}},
year = {1994},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C51G69}}
} | null | 0 | 18 | ---
language:
- en
tags:
- isolet
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Isolet
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- isolet
license: cc
---
# Isolet
The [Isolet dataset](https://archive-beta.ics.uci.edu/dataset/54/isolet) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|--------------------------|
| isolet | Multiclass classification | What letter was uttered? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/isolet", "isolet")["train"]
``` |
AlekseyKorshuk/roleplay-io | 2023-04-05T21:44:58.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 8 | 18 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
splits:
- name: train
num_bytes: 2495441
num_examples: 3146
download_size: 1543319
dataset_size: 2495441
---
# Dataset Card for "roleplay-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/liver | 2023-04-16T17:33:33.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"ilpd",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_ilpd_(indian_liver_patient_dataset)_225,
author = {Ramana,Bendi & Venkateswarlu,N.},
title = {{ILPD (Indian Liver Patient Dataset)}},
year = {2012},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5D02C}}
} | null | 1 | 18 | ---
language:
- en
tags:
- ilpd
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Liver
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- liver
license: cc
---
# ILPD
The [ILPD dataset](https://archive.ics.uci.edu/ml/datasets/ILPD) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------|
| liver | Binary classification | Does the patient have liver problems? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/liver")["train"]
``` |
tomekkorbak/shp_with_features_20k | 2023-04-14T09:02:38.000Z | [
"region:us"
] | tomekkorbak | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: post_id
dtype: string
- name: domain
dtype: string
- name: upvote_ratio
dtype: float64
- name: history
dtype: string
- name: c_root_id_A
dtype: string
- name: c_root_id_B
dtype: string
- name: created_at_utc_A
dtype: int64
- name: created_at_utc_B
dtype: int64
- name: score_A
dtype: int64
- name: score_B
dtype: int64
- name: human_ref_A
dtype: string
- name: human_ref_B
dtype: string
- name: labels
dtype: int64
- name: seconds_difference
dtype: float64
- name: score_ratio
dtype: float64
- name: helpfulness_A
dtype: float64
- name: helpfulness_B
dtype: float64
- name: specificity_A
dtype: float64
- name: specificity_B
dtype: float64
- name: intent_A
dtype: float64
- name: intent_B
dtype: float64
- name: factuality_A
dtype: float64
- name: factuality_B
dtype: float64
- name: easy-to-understand_A
dtype: float64
- name: easy-to-understand_B
dtype: float64
- name: relevance_A
dtype: float64
- name: relevance_B
dtype: float64
- name: readability_A
dtype: float64
- name: readability_B
dtype: float64
- name: enough-detail_A
dtype: float64
- name: enough-detail_B
dtype: float64
- name: biased:_A
dtype: float64
- name: biased:_B
dtype: float64
- name: fail-to-consider-individual-preferences_A
dtype: float64
- name: fail-to-consider-individual-preferences_B
dtype: float64
- name: repetetive_A
dtype: float64
- name: repetetive_B
dtype: float64
- name: fail-to-consider-context_A
dtype: float64
- name: fail-to-consider-context_B
dtype: float64
- name: too-long_A
dtype: float64
- name: too-long_B
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 20532157.0
num_examples: 9459
- name: test
num_bytes: 20532157.0
num_examples: 9459
download_size: 23638147
dataset_size: 41064314.0
---
# Dataset Card for "shp_with_features_20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cestwc/hdb0420 | 2023-04-20T04:50:47.000Z | [
"region:us"
] | cestwc | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: text
dtype: string
splits:
- name: '0420'
num_bytes: 334028
num_examples: 3110
- name: '0110'
num_bytes: 16067
num_examples: 110
- name: '0327'
num_bytes: 317961
num_examples: 3000
download_size: 318187
dataset_size: 668056
---
# Dataset Card for "hdb0420"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_medicine-neg | 2023-04-20T05:27:33.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 51318
num_examples: 173
download_size: 33920
dataset_size: 51318
---
# Dataset Card for "mmlu-college_medicine-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jlh/uci-shopper | 2023-05-03T21:08:59.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | jlh | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: Administrative
dtype: int64
- name: Administrative_Duration
dtype: float64
- name: Informational
dtype: int64
- name: Informational_Duration
dtype: float64
- name: ProductRelated
dtype: int64
- name: ProductRelated_Duration
dtype: float64
- name: BounceRates
dtype: float64
- name: ExitRates
dtype: float64
- name: PageValues
dtype: float64
- name: SpecialDay
dtype: float64
- name: Month
dtype: string
- name: OperatingSystems
dtype: int64
- name: Browser
dtype: int64
- name: Region
dtype: int64
- name: TrafficType
dtype: int64
- name: VisitorType
dtype: string
- name: Weekend
dtype: bool
- name: Revenue
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 1815486
num_examples: 12330
download_size: 425014
dataset_size: 1815486
license: cc-by-4.0
task_categories:
- tabular-classification
language:
- en
pretty_name: Online Shoppers Purchasing Intention Dataset
size_categories:
- 10K<n<100K
---
# Dataset Card for Online Shoppers Purchasing Intention Dataset
## Dataset Description
- **Homepage**: https://archive-beta.ics.uci.edu/dataset/468/online+shoppers+purchasing+intention+dataset
### Dataset Summary
This dataset is a reupload of the Online Shoppers Purchasing Intention Dataset from the [UCI Machine Learning Repository](https://archive-beta.ics.uci.edu/).
> **NOTE:** The information below is from the original dataset description from UCI's website.
>
> ### Overview
>
> Of the 12,330 sessions in the dataset, 84.5% (10,422) were negative class samples that did not end with shopping,
> and the rest (1908) were positive class samples ending with shopping.
>
> #### Additional Information
>
> The dataset consists of feature vectors belonging to 12,330 sessions. The dataset was formed so that
> each session would belong to a different user in a 1-year period to avoid any tendency to a specific campaign,
> special day, user profile, or period.
|
generative-newsai/news-unmasked | 2023-04-27T14:30:14.000Z | [
"task_categories:image-to-text",
"region:us"
] | generative-newsai | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: section
dtype: string
- name: headline
dtype: string
- name: image_id
dtype: string
splits:
- name: train
num_bytes: 5084636867.984
num_examples: 48988
- name: test
num_bytes: 1360809852.398
num_examples: 12247
download_size: 1331950856
dataset_size: 6445446720.382
task_categories:
- image-to-text
pretty_name: NewsUnmasked
---
# Dataset Card for "news-unmasked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kevinjesse/typebert | 2023-04-30T18:33:40.000Z | [
"region:us"
] | kevinjesse | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: input_ids
sequence: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 11927159712
num_examples: 2906228
- name: validation
num_bytes: 70371288
num_examples: 17147
- name: test
num_bytes: 70371288
num_examples: 17147
download_size: 851542645
dataset_size: 12067902288
---
# Dataset Card for "typebert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zetavg/zh-tw-wikipedia-dev | 2023-05-06T12:40:39.000Z | [
"region:us"
] | zetavg | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: pageid
dtype: int64
- name: html
dtype: string
- name: markdown
dtype: string
- name: coordinate
struct:
- name: globe
dtype: string
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: length
dtype: int64
- name: touched
dtype: string
- name: lastrevid
dtype: int64
- name: original_title
dtype: string
splits:
- name: train
num_bytes: 8657481.515956817
num_examples: 1000
download_size: 5008132
dataset_size: 8657481.515956817
---
A small subset of [`zetavg/zh-tw-wikipedia`](https://huggingface.co/datasets/zetavg/zh-tw-wikipedia) that contains only 1,000 randomly picked rows. For development usage. |
oyxy2019/THUCNewsText | 2023-05-10T03:05:21.000Z | [
"region:us"
] | oyxy2019 | null | null | null | 1 | 18 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': education
'1': entertainment
'2': fashion
'3': finance
'4': game
'5': politic
'6': society
'7': sport
'8': stock
'9': technology
splits:
- name: train
num_bytes: 126435258
num_examples: 50000
- name: validation
num_bytes: 12851939
num_examples: 5000
- name: test
num_bytes: 25321290
num_examples: 9890
download_size: 110495565
dataset_size: 164608487
---
# Dataset Card for "THUCNewsText"
This is a clone of [seamew/THUCNewsText](https://huggingface.co/datasets/seamew/THUCNewsText), created to work around Google Drive being inaccessible from within China (443 connection errors).
```python
from datasets import load_dataset
datasets = load_dataset("seamew/THUCNewsText")
datasets.push_to_hub("oyxy2019/THUCNewsText")
``` |
FreedomIntelligence/huatuo_consultation_qa | 2023-05-17T03:21:36.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"license:apache-2.0",
"medical",
"arxiv:2305.01526",
"region:us"
] | FreedomIntelligence | null | null | null | 8 | 18 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- medical
size_categories:
- 1M<n<10M
---
# Dataset Card for huatuo_consultation_qa
## Dataset Description
- **Homepage: https://www.huatuogpt.cn/**
- **Repository: https://github.com/FreedomIntelligence/HuatuoGPT**
- **Paper: https://arxiv.org/abs/2305.01526**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We collected data from a website for medical consultation, consisting of many online consultation records by medical experts. Each record is a QA pair: a patient raises a question and a medical doctor answers it. The basic information of doctors (including name, hospital organization, and department) was recorded.
We directly crawled patients' questions and doctors' answers as QA pairs, obtaining 32,708,346 pairs. We then removed the QA pairs containing special characters and removed the repeated pairs, leaving 25,341,578 QA pairs.
**Please note that for certain reasons we cannot directly provide the text data, so the answer part of our dataset is a URL. If you want to use the text data, you can refer to the other two parts of our open-source datasets ([huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa), [huatuo_knowledge_graph_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa)), or use the URLs for data collection.**
## Dataset Creation
### Source Data
....
## Citation
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ybelkada/oasst1-tiny-subset | 2023-05-11T14:07:03.000Z | [
"region:us"
] | ybelkada | null | null | null | 1 | 18 | ---
dataset_info:
features:
- name: messages
dtype: string
splits:
- name: train
num_bytes: 59104494.0
num_examples: 39663
- name: test
num_bytes: 6567166.0
num_examples: 4407
download_size: 38767143
dataset_size: 65671660.0
---
# Dataset Card for "oasst1-tiny-subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lighteval/synthetic_reasoning_natural | 2023-05-12T09:30:32.000Z | [
"region:us"
] | lighteval | null | null | null | 3 | 18 | Entry not found | 
Thaweewat/chatmed-5k-th | 2023-05-27T19:59:34.000Z | [
"size_categories:1K<n<10K",
"language:th",
"region:us"
] | Thaweewat | null | null | null | 0 | 18 | ---
language:
- th
size_categories:
- 1K<n<10K
--- |
whu9/arxiv_summarization_postprocess | 2023-06-03T04:49:04.000Z | [
"region:us"
] | whu9 | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: source
dtype: string
- name: summary
dtype: string
- name: source_num_tokens
dtype: int64
- name: summary_num_tokens
dtype: int64
splits:
- name: train
num_bytes: 6992115668
num_examples: 197465
- name: validation
num_bytes: 216277493
num_examples: 6435
- name: test
num_bytes: 216661725
num_examples: 6439
download_size: 3553348742
dataset_size: 7425054886
---
# Dataset Card for "arxiv_summarization_postprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
abokbot/wikipedia-first-paragraph | 2023-06-04T10:58:32.000Z | [
"language:en",
"wikipedia",
"region:us"
] | abokbot | null | null | null | 0 | 18 | ---
language:
- en
tags:
- wikipedia
---
# Dataset Description
This dataset contains the first paragraph of cleaned Wikipedia articles in English.
It was obtained by transforming the [Wikipedia](https://huggingface.co/datasets/wikipedia) "20220301.en" dataset as follows:
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20220301.en")["train"]
def get_first_paragraph(example):
example["text"] = example['text'].split('\n\n')[0]
return example
dataset = dataset.map(get_first_paragraph)
```
# Why use this dataset?
The size of the original English Wikipedia dataset is over 20GB. It takes 20 min to load it on a Google Colab notebook, and running computations on that dataset can be costly.
If you want to create a use case that mostly needs the information in the first paragraph of a Wikipedia article (which is the paragraph with the most important information), this 'wikipedia-first-paragraph' dataset is for you.
Its size is 1.39GB and it takes 5 min to load it on a Google Colab notebook.
# How to load dataset
You can load it by runnning:
```python
from datasets import load_dataset
load_dataset("abokbot/wikipedia-first-paragraph")
```
# Dataset Structure
An example looks as follows:
```
{
'id': '12',
'url': 'https://en.wikipedia.org/wiki/Anarchism',
'title': 'Anarchism',
'text': 'Anarchism is a political philosophy and movement that is sceptical of authority and rejects \
all involuntary, coercive forms of hierarchy. Anarchism calls for the abolition of the state, \
which it holds to be unnecessary, undesirable, and harmful. As a historically left-wing movement, \
placed on the farthest left of the political spectrum, it is usually described alongside communalism \
and libertarian Marxism as the libertarian wing (libertarian socialism) of the socialist movement, and \
has a strong historical association with anti-capitalism and socialism.'
}
``` |
Norquinal/claude_evol_instruct_210k | 2023-07-17T04:10:04.000Z | [
"region:us"
] | Norquinal | null | null | null | 12 | 18 | This dataset is the result of roughly 250k instruction/response pairs being generated by Claude, with instances of blatant alignment removed.
213,375 instructions remain.
This dataset is experimental in two ways:
1. From start to finish, it was generated entirely synthetically through Anthropic's Claude AI.
2. It was generated using a somewhat imperfect recreation of the evol-instruct method: 50k instructions were initially generated synthetically, then run through four epochs of evol-instruct.
alxfgh/ChEMBL_Drug_Instruction_Tuning | 2023-06-24T03:22:42.000Z | [
"task_categories:question-answering",
"language:en",
"region:us"
] | alxfgh | null | null | null | 1 | 18 | ---
task_categories:
- question-answering
language:
- en
pretty_name: ChEMBL Drug Instruction Tuning
---
# Dataset Card for ChEMBL Drug Instruction Tuning
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
PNLPhub/PEYMA | 2023-08-13T07:55:04.000Z | [
"license:apache-2.0",
"region:us"
] | PNLPhub | PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes. | @article{shahshahani2018peyma,
title={PEYMA: A Tagged Corpus for Persian Named Entities},
author={Mahsa Sadat Shahshahani and Mahdi Mohseni and Azadeh Shakery and Heshaam Faili},
year=2018,
journal={ArXiv},
volume={abs/1801.09936}
} | null | 0 | 18 | ---
license: apache-2.0
dataset_info:
config_name: PEYMA
features:
- name: tokens
sequence: string
- name: tags
sequence:
class_label:
names:
'0': O
'1': B_DAT
'2': B_LOC
'3': B_MON
'4': B_ORG
'5': B_PCT
'6': B_PER
'7': B_TIM
'8': I_DAT
'9': I_LOC
'10': I_MON
'11': I_ORG
'12': I_PCT
'13': I_PER
'14': I_TIM
splits:
- name: train
num_bytes: 4885030
num_examples: 8028
- name: test
num_bytes: 648919
num_examples: 1026
- name: validation
num_bytes: 535910
num_examples: 925
download_size: 0
dataset_size: 6069859
---
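As a hedged usage sketch based on the schema above (the config and split names are taken from the YAML, but loading behavior is not confirmed by this card):
```python
from datasets import load_dataset

ds = load_dataset("PNLPhub/PEYMA", "PEYMA", split="train")

# Map integer tag ids back to the class names declared in the schema above.
tag_names = ds.features["tags"].feature.names  # ['O', 'B_DAT', 'B_LOC', ...]
print(list(zip(ds[0]["tokens"], [tag_names[t] for t in ds[0]["tags"]])))
```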
|
eddsterxyz/Raiders-Of-The-Lost-Kek | 2023-06-25T19:36:37.000Z | [
"arxiv:2001.07487",
"region:us"
] | eddsterxyz | null | null | null | 0 | 18 | # Raiders Of The Lost Kek
The largest 4chan /pol/ dataset.
I extracted the post content and removed HTML nonsense and 4chan-specific artifacts
such as post-number replies in text, etc.
## There are a few sizes of datasets available
- 100kLines - first 100,000 lines of text from the dataset
- 300kLines - first 300,000 lines of text from the dataset
- 500kLines - first 500,000 lines of text from the dataset
Maybe at some point, once I have the compute, I'll upload the whole thing.
Link: https://arxiv.org/abs/2001.07487
gabeorlanski/bc-transcoder | 2023-07-18T16:22:39.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|transcoder",
"language:en",
"license:apache-2.0",
"code",
"arxiv:2302.01973",
"arxiv:2006.03511",
"region:... | gabeorlanski | The Transcoder dataset in BabelCode format. Currently supports translation from C++ and Python. | @article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishah and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@article{roziere2020unsupervised,
title={Unsupervised translation of programming languages},
author={Roziere, Baptiste and Lachaux, Marie-Anne and Chanussot, Lowik and Lample, Guillaume},
journal={Advances in Neural Information Processing Systems},
volume={33},
year={2020}
} | null | 2 | 18 | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
- translation
language:
- en
tags:
- code
pretty_name: BabelCode Transcoder
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|transcoder
---
# Dataset Card for BabelCode Transcoder
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/google-research/babelcode)
- **Paper:** [Measuring The Impact Of Programming Language Distribution](https://arxiv.org/abs/2302.01973)
### How To Use This Dataset
To use this dataset, you can either use the original [BabelCode Repo](https://github.com/google-research/babelcode), or you can use the [`bc_eval` Metric](https://huggingface.co/spaces/gabeorlanski/bc_eval).
### Dataset Summary
The [Transcoder](https://github.com/facebookresearch/CodeGen) dataset in BabelCode format. Currently supports translation from C++ and Python.
### Supported Tasks and Leaderboards
### Languages
BC-Transcoder supports:
* C++
* C#
* Dart
* Go
* Haskell
* Java
* Javascript
* Julia
* Kotlin
* Lua
* PHP
* Python
* R
* Rust
* Scala
* TypeScript
## Dataset Structure
```python
>>> from datasets import load_dataset
>>> load_dataset("gabeorlanski/bc-transcoder")
DatasetDict({
test: Dataset({
features: ['qid', 'title', 'language', 'signature', 'arguments', 'source_py', 'source_cpp', 'question_info'],
num_rows: 8384
})
})
```
### Data Fields
- `qid`: The question ID used for running tests.
- `title`: The title of the question.
- `language`: The programming language of the example.
- `signature`: The signature for the problem.
- `arguments`: The arguments of the problem.
- `source_py`: The source solution in Python.
- `source_cpp`: The source in C++.
- `question_info`: The dict of information used for executing predictions. It has the keys:
- `test_code`: The raw testing script used in the language. If you want to use this, replace `PLACEHOLDER_FN_NAME` (and `PLACEHOLDER_CLS_NAME` if needed) with the corresponding entry points. Next, replace `PLACEHOLDER_CODE_BODY` with the postprocessed prediction.
- `test_list`: The raw json line of the list of tests for the problem. To load them, use `json.loads`
- `test_case_ids`: The list of test case ids for the problem. These are used to determine if a prediction passes or not.
- `entry_fn_name`: The function's name to use an entry point.
- `entry_cls_name`: The class name to use an entry point.
- `commands`: The commands used to execute the prediction. Includes a `__FILENAME__` hole that is replaced with the filename.
- `timeouts`: The default timeouts for each command.
- `extension`: The extension for the prediction file.
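As a hedged sketch of that placeholder substitution (the helper name is made up; the keys are the ones documented above):
```python
def build_test_script(question_info: dict, prediction: str) -> str:
    """Splice a postprocessed prediction into the raw testing script."""
    script = question_info["test_code"]
    script = script.replace("PLACEHOLDER_FN_NAME", question_info["entry_fn_name"])
    if question_info.get("entry_cls_name"):
        script = script.replace("PLACEHOLDER_CLS_NAME", question_info["entry_cls_name"])
    return script.replace("PLACEHOLDER_CODE_BODY", prediction)
```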
**NOTE:** If you want to use a different function name (or class name for languages that require class names) for the prediction, you must update the `entry_fn_name` and `entry_cls_name` accordingly. For example, if you have the original question with `entry_fn_name` of `add`, but want to change it to `f`, you must update `ds["question_info"]["entry_fn_name"]` to `f`:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("gabeorlanski/bc-mbpp")['test']
>>> # The original entry_fn_name
>>> ds[0]['question_info']['entry_fn_name']
removeOcc
>>> # You MUST update the corresponding entry_fn_name
>>> ds[0]['question_info']['entry_fn_name'] = 'f'
>>> ds[0]['question_info']['entry_fn_name']
f
```
## Dataset Creation
See section 2 of the [BabelCode Paper](https://arxiv.org/abs/2302.01973) to learn more about how the datasets are translated.
For information on the original curation of the Transcoder Dataset, please see [Unsupervised Translation of Programming Languages](https://arxiv.org/pdf/2006.03511.pdf) by Roziere et. al.
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishah and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@article{roziere2020unsupervised,
title={Unsupervised translation of programming languages},
author={Roziere, Baptiste and Lachaux, Marie-Anne and Chanussot, Lowik and Lample, Guillaume},
journal={Advances in Neural Information Processing Systems},
volume={33},
year={2020}
}
``` |
joonhok-exo-ai/korean_law_open_data_precedents | 2023-07-05T08:43:35.000Z | [
"size_categories:10K<n<100K",
"language:ko",
"license:openrail",
"legal",
"region:us"
] | joonhok-exo-ai | null | null | null | 2 | 18 | ---
language:
- ko
tags:
- legal
size_categories:
- 10K<n<100K
license: openrail
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [김준호](mailto:joonhok@smartfitnow.com)
### Dataset Summary
The complete set of court precedents provided by the [Korean Ministry of Government Legislation's open law data center](https://open.law.go.kr/LSO/main.do).
## Dataset Structure
### Data Instances
An individual record looks like the example below.
The fields largely follow the output of the precedent full-text lookup API, except that the "법원종류코드" (court type code) and "사건종류코드" (case type code) fields were dropped, the documented "판시유형" field is named "판결유형" in actual responses (so the actual response key is used), and the "판례내용" field was replaced with "전문" (full text).
```
{
'판례정보일련번호': 101924
'사건명': '손해배상'
'사건번호': '85다카1594'
'선고일자': 19860722,
'선고': '선고'
'법원명': '대법원'
'사건종류명': '민사'
'판결유형': '판결'
'판시사항': '가. 미성년자가 부모의 개호를 받을 수 있는 경우, 손해로서의 개호인 비용 / 나. 호프만식계산법에 의한 일실이익 산정의 적부 다. 연별 호프만식계산법에 의하여 중간이자를 공제하는 경우, 단리연금 현가율이 20을 넘는 경우의 일실이익 산정방법'
'판결요지': '가. 신체의 부자유로 인하여 개호인의 조력을 받을 필요가 있는 경우에는 비록 피해자가 미성년자이고 그의 부모가 개호를 할 수 있는 형편에 있다 하더라도 반드시 그 부모의 개호를 받아야 한다고 단정할 수 없음은 물론, 가사 그 부모의 개호를 받게 된다고 하더라도 이로 인하여 피해자가 입는 손해는 특별한 사정이 없는 한 통상의 개호인 비용 전액이다. 나. 호프만식계산법에 의하여 중간이자를 공제하여 장래의 일실이익의 현가를 산정하는 것은 위법한 것이 아니다. 다. 연별 호프만식계산법에 의하여 중간이자를 공제하는 경우에 단리연금현가율이 20을 넘는 경우에는 그 단리연금현가율을 그대로 적용하여 그 현가를 산정하게 되면 현가로 받게 되는 금액의 이자가 매월 입게 되는 손해액보다 많게 되어 손해액보다 더 많은 금원을 배상하게 되는 불합리한 결과를 가져오게 되므로 그 단리연금현가율이 결과적으로 20을 넘는 경우에 있어서는 그 수치표상의 단리연금현가율이 얼마인지를 불문하고 모두 20을 적용 계산함으로써 피해자가 과잉배상을 받는 일이 없도록 하여야 한다.'
'참조조문': '가.나.다. 민법 제763조'
'참조판례': '나. 대법원 1981.9.22 선고 81다588 판결, 1985.10.22 선고 85다카819 판결 / 다. 대법원 1985.10.22 선고 85다카819 판결, 1986.3.25 선고 85다카2375 판결'
'판결유형': '판결'
'전문': '【원고, 피상고인】 (...이하 생략...)'
}
```
### Data Fields
Most fields need no special explanation, but note that the value of the "선고일자" (decision date) field is a number, not a string. Also, in some records the month and day are missing from "선고일자" and only the year remains, so the value has just four digits.
Also note that some fields, such as "사건명" (case name), may be empty.
## Dataset Creation
### Curation Rationale
Although the precedents in this dataset are also accessible through the open API, this dataset was created because:
1. iterating over the entire collection through the API is cumbersome,
2. parsing and preprocessing the API responses every time is tedious, and
3. some errors in the API response data could be cleaned up in advance.
### Source Data
#### Initial Data Collection and Normalization
The data was collected using the open law data center's precedent list API and precedent full-text API.
First, the list API was called to collect the precedent serial numbers (판례정보일련번호); then the full-text API was called with each serial number to collect the precedent data.
The full text can be requested in two formats, XML and HTML. To verify the completeness of the data and clean it, both formats were requested for every record and the two responses were compared, which revealed that for some records the values differ depending on the request format.
For example, when the precedent with serial number 152179 is requested in XML and HTML format, the "【원심판결】" (lower-court judgment) part of its "전문" (full text) comes back as follows.
When requested in XML format:
```
"1. 서울중앙지방법원 2009. 4. 3. 선고 2009고합167 판결(이하 ‘제1원심판결’이라고 한다) / 2. 서울중앙지방법원 2009. 5. 8. 선고 2009고합416 판결(이하 ‘제2원심판결’이라고 한다)"
```
When requested in HTML format:
```
서울중앙지방법원 2009. 4. 3. 선고 2009고합167 판결
```
There were several dozen records whose "【원심판결】" section differed by request format like this; for those, this dataset uses whichever version contains more information (the XML version in the example above).
Beyond that, a few records had errors in the data itself in both formats (broken statute hyperlink formatting, malformed anonymization markers, etc.); these were corrected by hand.
Finally, some records contained images; all images were omitted and only the text portions were kept.
Records whose text contained errors and was corrected by hand: 212537, 188351, 188019, 200567
Records containing images:
184135,
182916,
186027,
185375,
184151,
184597,
186156,
184655,
185123,
198440,
197577
## Additional Information
### Dataset Curators
김준호 ([LinkedIn](https://www.linkedin.com/in/joonho-kim/)): I built this dataset because I needed it myself while building an AI legal service.
### Contributions
If you find anything wrong in the data, please contact [joonhok@smartfitnow.com](mailto:joonhok@smartfitnow.com);
it will be verified and incorporated. |
carbon225/vndb_img | 2023-07-04T14:46:14.000Z | [
"task_categories:image-classification",
"size_categories:100K<n<1M",
"license:odbl",
"art",
"not-for-all-audiences",
"anime",
"visual-novel",
"nsfw",
"vndb",
"region:us"
] | carbon225 | null | null | null | 0 | 18 | ---
license: odbl
task_categories:
- image-classification
tags:
- art
- not-for-all-audiences
- anime
- visual-novel
- nsfw
- vndb
size_categories:
- 100K<n<1M
---
# Dataset Card for VNDB IMG
## Dataset Description
This is a 🤗 Datasets loader for the [vndb.org](https://vndb.org) image database dump.
It contains anime-style images flagged by users according to these categories:
* sexual content: safe/suggestive/explicit
* violence: tame/violent/brutal
## Loading Instructions
For licensing and "moral" reasons, the database dump has to be downloaded manually.
Download the vndb.org database dump from <https://vndb.org/d14>: get the "Near-complete database" file `vndb-db-latest.tar.zst`.
Then use `rsync` to download the 'Images' collection.
Create the following directory structure:
```
my/dataset/path
├── db
│ └── vndb-db-latest.tar.zst
└── vndb-img # this is the directory you downloaded with rsync
├── ch
├── cv
├── sf
├── st
└── ...
```
Inside `my/dataset/path/db` run
```
zstd -d vndb-db-latest.tar.zst
```
and
```
tar -xf vndb-db-latest.tar
```
The final directory structure should look like this:
```
my/dataset/path
├── db
│ ├── vndb-db-latest.tar
│ ├── vndb-db-latest.tar.zst
│ ├── db
│ └── ...
└── vndb-img
├── ch
├── cv
├── sf
├── st
└── ...
```
Finally, load the dataset
```python
datasets.load_dataset('carbon225/vndb_img', data_dir='my/dataset/path')
```
## Dataset Structure
The following fields are provided:
```python
{
'index': datasets.Value('int32'),
'id': datasets.Value('string'),
'width': datasets.Value('int32'),
'height': datasets.Value('int32'),
'c_votecount': datasets.Value('int32'),
'c_sexual_avg': datasets.Value('int32'),
'c_sexual_stddev': datasets.Value('int32'),
'c_violence_avg': datasets.Value('int32'),
'c_violence_stddev': datasets.Value('int32'),
'c_weight': datasets.Value('int32'),
'type': datasets.ClassLabel(names=['character', 'cover', 'screenshot_full', 'screenshot_thumb']),
'sexual_class': datasets.ClassLabel(names=['safe', 'suggestive', 'explicit']),
'violence_class': datasets.ClassLabel(names=['tame', 'violent', 'brutal']),
'file_name': datasets.Value('string'),
'full_path': datasets.Value('string'),
'image': datasets.Image(),
}
```
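As a hedged usage sketch built on the fields above (the `train` split name is a guess):
```python
import datasets

ds = datasets.load_dataset("carbon225/vndb_img", data_dir="my/dataset/path")

# Keep only images flagged safe and tame; the integer ids follow the
# ClassLabel name order above ('safe' == 0, 'tame' == 0).
safe = ds["train"].filter(
    lambda row: row["sexual_class"] == 0 and row["violence_class"] == 0
)
```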
## Supported Tasks
With a few modifications the data can be used for:
* image classification of NSFW material
* image generation/super-resolution/...
* ...
## Considerations for Using the Data
The images are ***hardcore***, to say the least. I recommend not looking.
## Licensing Information
Using this dataset requires the user to download data manually from vndb.org.
All information on VNDB is made available under the Open Database License.
Any rights in individual contents of the database are licensed under the Database Contents License.
With the following exceptions:
* Anime data is obtained from the AniDB.net UDP API and is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0.
* Images, visual novel descriptions and character descriptions are gathered from various online sources and may be subject to separate license conditions. |
ssbuild/alpaca_baize | 2023-07-09T06:18:38.000Z | [
"license:apache-2.0",
"region:us"
] | ssbuild | null | null | null | 2 | 18 | ---
license: apache-2.0
---
|
vietgpt/legal_document_vi | 2023-07-10T07:38:37.000Z | [
"region:us"
] | vietgpt | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: subject
dtype: string
- name: meta
struct:
- name: effective_date
dtype: string
- name: issuing_agency
dtype: string
- name: promulgation_date
dtype: string
- name: sign_number
dtype: string
- name: signer
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7629124128
num_examples: 424187
download_size: 2641487859
dataset_size: 7629124128
---
# Dataset Card for "legal_document_vi1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TrainingDataPro/makeup-detection-dataset | 2023-09-19T19:35:55.000Z | [
"task_categories:image-to-image",
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] | TrainingDataPro | The dataset consists of photos featuring the same individuals captured in two
distinct scenarios - *with and without makeup*. The dataset contains a diverse
range of individuals with various *ages, ethnicities and genders*. The images
themselves would be of high quality, ensuring clarity and detail for each
subject.
In photos with makeup, it is applied **to only specific parts** of the face,
such as *eyes, lips, or skin*.
In photos without makeup, individuals have a bare face with no visible
cosmetics or beauty enhancements. These images would provide a clear contrast
to the makeup images, allowing for significant visual analysis. | @InProceedings{huggingface:dataset,
title = {makeup-detection-dataset},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 18 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-to-image
- image-classification
tags:
- code
dataset_info:
features:
- name: no_makeup
dtype: image
- name: with_makeup
dtype: image
- name: part
dtype: string
- name: gender
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
splits:
- name: train
num_bytes: 25845965
num_examples: 26
download_size: 25248180
dataset_size: 25845965
---
# Makeup Detection Dataset
The dataset consists of photos featuring the same individuals captured in two distinct scenarios - *with and without makeup*. The dataset contains a diverse range of individuals with various *ages, ethnicities and genders*. The images themselves would be of high quality, ensuring clarity and detail for each subject.
In photos with makeup, it is applied **to only specific parts** of the face, such as *eyes, lips, or skin*.
In photos without makeup, individuals have a bare face with no visible cosmetics or beauty enhancements. These images would provide a clear contrast to the makeup images, allowing for significant visual analysis.
### The dataset's possible applications:
- facial recognition
- beauty consultations and personalized recommendations
- augmented reality and filters in photography apps
- social media and influencer marketing
- dermatology and skincare

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=makeup-detection-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
- **no_makeup**: includes images of people *without* makeup
- **with_makeup**: includes images of people *wearing makeup*. People are the same as in the previous folder, photos are identified by the same name
- **.csv** file: contains information about people in the dataset
### File with the extension .csv
includes the following information for each set of media files:
- **no_makeup**: link to the photo of a person without makeup,
- **with_makeup**: link to the photo of the person with makeup,
- **part**: body part of makeup's application,
- **gender**: gender of the person,
- **age**: age of the person,
- **country**: country of the person
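A hedged sketch of reading that file with pandas (the file name is hypothetical):
```python
import pandas as pd

df = pd.read_csv("makeup.csv")  # hypothetical file name
print(df.groupby(["part", "gender"]).size())  # breakdown by application area and gender
```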
# Images for makeup detection might be collected in accordance with your requirements.
## [TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=makeup-detection-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
jxie/bbbp | 2023-08-04T22:25:59.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train_0
num_bytes: 112140
num_examples: 1631
- name: val_0
num_bytes: 18772
num_examples: 204
- name: test_0
num_bytes: 15004
num_examples: 204
- name: train_1
num_bytes: 112140
num_examples: 1631
- name: val_1
num_bytes: 18772
num_examples: 204
- name: test_1
num_bytes: 15004
num_examples: 204
- name: train_2
num_bytes: 112140
num_examples: 1631
- name: val_2
num_bytes: 18772
num_examples: 204
- name: test_2
num_bytes: 15004
num_examples: 204
download_size: 218838
dataset_size: 437748
---
# Dataset Card for "bbbp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChristophSchuhmann/LAION-Aesthetics-mix | 2023-08-08T14:47:02.000Z | [
"license:apache-2.0",
"region:us"
] | ChristophSchuhmann | null | null | null | 0 | 18 | ---
license: apache-2.0
---
|
Venkatesh4342/indian-augmented-NER | 2023-08-11T11:19:41.000Z | [
"license:apache-2.0",
"region:us"
] | Venkatesh4342 | null | null | null | 0 | 18 | ---
license: apache-2.0
---
|
thomasavare/waste-classification-audio-deepl | 2023-08-30T00:43:39.000Z | [
"region:us"
] | thomasavare | null | null | null | 0 | 18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: speaker
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: Class
dtype: string
- name: Class_index
dtype: float64
splits:
- name: train
num_bytes: 397554225.0
num_examples: 500
download_size: 300753479
dataset_size: 397554225.0
---
# Dataset Card for "waste-classification-audio-deepl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sarahpann/AMPS | 2023-08-20T20:27:43.000Z | [
"region:us"
] | sarahpann | null | null | null | 0 | 18 | Entry not found |
Suchinthana/si-wikipedia | 2023-09-07T06:37:25.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:si",
"license:cc-by-sa-4.0",
"region:us"
] | Suchinthana | null | null | null | 0 | 18 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 135560965
num_examples: 22574
download_size: 52870930
dataset_size: 135560965
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
language:
- si
pretty_name: wikiped
size_categories:
- 10K<n<100K
--- |
yair-elboher/render-heb-oscar | 2023-08-29T08:29:11.000Z | [
"region:us"
] | yair-elboher | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: num_patches
dtype: int64
splits:
- name: train
num_bytes: 86429.0
num_examples: 9
- name: validation
num_bytes: 48578.0
num_examples: 4
download_size: 161145
dataset_size: 135007.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "render-heb-oscar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JasiekKaczmarczyk/maestro-quantized | 2023-08-24T07:19:03.000Z | [
"region:us"
] | JasiekKaczmarczyk | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart_bin
sequence: int16
length: 128
- name: duration_bin
sequence: int16
length: 128
- name: velocity_bin
sequence: int16
length: 128
splits:
- name: train
num_bytes: 48324585
num_examples: 43727
- name: validation
num_bytes: 5451233
num_examples: 4929
- name: test
num_bytes: 6294739
num_examples: 5695
download_size: 14057918
dataset_size: 60070557
---
# Dataset Card for "maestro-quantized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PericlesSavio/contratacao | 2023-09-19T14:48:05.000Z | [
"region:us"
] | PericlesSavio | null | null | null | 0 | 18 | Entry not found |
explodinggradients/WikiEval | 2023-09-18T15:12:16.000Z | [
"region:us"
] | explodinggradients | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: answer
dtype: string
- name: question
dtype: string
- name: context_v1
sequence: string
- name: context_v2
sequence: string
- name: ungrounded_answer
dtype: string
- name: source
dtype: string
- name: poor_answer
dtype: string
splits:
- name: train
num_bytes: 548755
num_examples: 50
download_size: 354738
dataset_size: 548755
---
# WikiEval
Dataset for correlation analysis of the different metrics proposed in [Ragas](https://github.com/explodinggradients/ragas)
This dataset was generated from 50 pages from Wikipedia with edits post 2022.
## Column description
* question: a question that can be answered from the given Wikipedia page (source).
* source: The source Wikipedia page from which the question and context are generated.
* grounded_answer: answer grounded on context_v1
* ungrounded_answer: answer generated without context_v1
* poor_answer: answer with poor relevancy compared to grounded_answer and ungrounded_answer
* context_v1: Ideal context to answer the given question
* context_v2: context that contains redundant information compared to context_v1 |
afg1/epmc-oa-subset | 2023-09-18T22:51:36.000Z | [
"license:other",
"region:us"
] | afg1 | null | null | null | 0 | 18 | ---
license: other
---
# Europe PubMedCentral Open Access subset
This is the open access subset of Europe PMC, as of 25/08/2023. The source xml updates weekly, so this is just a snapshot for now.
To read more about the open access subset, you can go here: https://europepmc.org/downloads/openaccess
In short, there should be about 5.5 million articles in the open access subset, for each of which the full text of the article is available.
This dataset consists of ~975 parquet files, each containing the full text of all articles in a range of PMC IDs. Older collections contain fewer articles and are smaller.
The total dataset is ~42GB of compressed parquet. I have no clue how big it will be decompressed, but my guess is big.
The full text was extracted from the XML dumps located here: https://europepmc.org/ftp/oa/
The XML is in a standardised format: JATS. I took an implementation of a JATS parser from here: and modified it to extract only a few fields.
Each parquet will have the following:
| pmcid | pmid | authors | title | publication_date | keywords | abstract | main_text |
|--------|------|---------|-------|------------------|----------|----------|-----------|
| str | str |list[str]| str | str |list[str] | str | str |
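As a hedged sketch (the shard file name is hypothetical), individual shards can be read with pandas:
```python
import pandas as pd

# Shard file name is hypothetical; the subset ships as ~975 parquet files.
df = pd.read_parquet("shard_0001.parquet", columns=["pmcid", "title", "abstract"])
print(df.head())
```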
|
adirik/fashion_image_caption-100 | 2023-08-29T10:41:48.000Z | [
"region:us"
] | adirik | null | null | null | 0 | 18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 22842342.0
num_examples: 100
download_size: 22823708
dataset_size: 22842342.0
---
# Dataset Card for "fashion_image_caption-100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vikp/evol_instruct_v2_filtered_109k | 2023-08-29T19:49:45.000Z | [
"region:us"
] | vikp | null | null | null | 1 | 18 | ---
dataset_info:
features:
- name: idx
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: rendered
dtype: string
- name: quality_prob
dtype: float64
- name: learning_prob
dtype: float64
splits:
- name: train
num_bytes: 512830593.9343947
num_examples: 109797
download_size: 252022478
dataset_size: 512830593.9343947
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "evol_instruct_v2_filtered_109k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aqubed/kub_tickets_small | 2023-09-04T23:08:41.000Z | [
"region:us"
] | aqubed | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: number
dtype: int64
- name: title
dtype: string
- name: state
dtype: string
- name: created_at
dtype: string
- name: updated_at
dtype: string
- name: closed_at
dtype: string
- name: assignees
sequence: string
- name: labels
sequence: string
- name: reporter
dtype: string
- name: comments
list:
- name: body
dtype: string
- name: created_at
dtype: string
- name: events
list:
- name: author
dtype: string
- name: created_at
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 5967498
num_examples: 1099
download_size: 1380020
dataset_size: 5967498
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "kub_tickets_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sabbir2023/Ar | 2023-09-10T08:07:35.000Z | [
"region:us"
] | Sabbir2023 | null | null | null | 0 | 18 | Entry not found |
quocanh34/test_result_large_synthesis_data_ver2 | 2023-09-10T15:26:56.000Z | [
"region:us"
] | quocanh34 | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: id
dtype: string
- name: w2v2_baseline_transcription
dtype: string
- name: w2v_baseline_norm
dtype: string
splits:
- name: train
num_bytes: 208073
num_examples: 1299
download_size: 109270
dataset_size: 208073
---
# Dataset Card for "test_result_large_synthesis_data_ver2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mrm8488/FloCo | 2023-09-10T23:24:25.000Z | [
"task_categories:image-to-image",
"size_categories:10K<n<100K",
"code",
"region:us"
] | mrm8488 | null | null | null | 2 | 18 | ---
dataset_info:
- config_name: test
features:
- name: image
dtype: image
- name: code_caption
dtype: string
splits:
- name: train
num_bytes: 142134244.412
num_examples: 1188
download_size: 124563800
dataset_size: 142134244.412
- config_name: train
features:
- name: image
dtype: image
- name: code_caption
dtype: string
splits:
- name: train
num_bytes: 946697073.77
num_examples: 10102
download_size: 853815350
dataset_size: 946697073.77
- config_name: validation
features:
- name: image
dtype: image
- name: code_caption
dtype: string
splits:
- name: train
num_bytes: 95790792
num_examples: 594
download_size: 73916515
dataset_size: 95790792
configs:
- config_name: test
data_files:
- split: train
path: test/train-*
- config_name: train
data_files:
- split: train
path: train/train-*
- config_name: validation
data_files:
- split: train
path: validation/train-*
task_categories:
- image-to-image
tags:
- code
pretty_name: FloCo
size_categories:
- 10K<n<100K
---
# FloCo Dataset
From: https://vl2g.github.io/projects/floco/
We introduce a new large-scale dataset called "FloCo" for Flowchart images to Python Codes conversion. It contains 11,884 paired flowchart-code samples. Please refer to the paper for more details regarding statistics and dataset construction.
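A hedged loading sketch (note the config layout in the YAML above: each named config exposes a single `train` split):
```python
from datasets import load_dataset

# Each config ("train", "test", "validation") carries one "train" split.
floco_train = load_dataset("mrm8488/FloCo", "train", split="train")
print(floco_train[0]["code_caption"])
```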
```
@inproceedings{shukla2023floco,
author = "Shukla, Shreya and
Gatti, Prajwal and
Kumar, Yogesh and
Yadav, Vikash and
Mishra, Anand",
title = "Towards Making Flowchart Images Machine Interpretable",
booktitle = "ICDAR",
year = "2023",
}
``` |
maximegmd/medqa_alpaca_format | 2023-09-12T11:27:26.000Z | [
"region:us"
] | maximegmd | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: solution
dtype: string
splits:
- name: test
num_bytes: 1184018
num_examples: 1273
- name: train
num_bytes: 9249332
num_examples: 10178
download_size: 5933919
dataset_size: 10433350
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
# Dataset Card for "medqa_alpaca_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Gryphe/CoEdit-Alpaca | 2023-09-14T11:28:44.000Z | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] | Gryphe | null | null | null | 2 | 18 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
---
An Alpaca instruction conversion of [Grammarly's CoEdIT](https://huggingface.co/datasets/grammarly/coedit) dataset. |
yuchenlin/rex-instruct | 2023-09-13T22:10:24.000Z | [
"region:us"
] | yuchenlin | null | null | null | 0 | 18 | Entry not found |
neilmiaowang/cdpcli-llama | 2023-09-20T17:46:37.000Z | [
"region:us"
] | neilmiaowang | null | null | null | 0 | 18 | Entry not found |