id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
bigbio/cpi | 2023-01-06T03:46:05.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The compound-protein relationship (CPI) dataset consists of 2,613 sentences from abstracts containing annotations of proteins, small molecules, and their relationships | @article{doring2020automated,
title={Automated recognition of functional compound-protein relationships in literature},
author={D{\"o}ring, Kersten and Qaseem, Ammar and Becer, Michael and Li, Jianyu and Mishra, Pankaj and Gao, Mingjie and Kirchner, Pascal and Sauter, Florian and Telukunta, Kiran K and Moumbock, Aur{\'e}lien FA and others},
journal={Plos one},
volume={15},
number={3},
pages={e0220925},
year={2020},
publisher={Public Library of Science San Francisco, CA USA}
} | 1 | 15 | 2023-01-06T03:44:03 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: ISC
pretty_name: CPI
homepage: https://github.com/KerstenDoering/CPI-Pipeline
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
---
# Dataset Card for CPI
## Dataset Description
- **Homepage:** https://github.com/KerstenDoering/CPI-Pipeline
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER, NED, RE
The compound-protein relationship (CPI) dataset consists of 2,613 sentences
from abstracts containing annotations of proteins, small molecules, and their
relationships.
## Citation Information
```
@article{doring2020automated,
title={Automated recognition of functional compound-protein relationships in literature},
author={D{\"o}ring, Kersten and Qaseem, Ammar and Becer, Michael and Li, Jianyu and Mishra, Pankaj and Gao, Mingjie and Kirchner, Pascal and Sauter, Florian and Telukunta, Kiran K and Moumbock, Aur{\'e}lien FA and others},
journal={Plos one},
volume={15},
number={3},
pages={e0220925},
year={2020},
publisher={Public Library of Science San Francisco, CA USA}
}
```
| 1,216 | [
[
-0.0170135498046875,
-0.00974273681640625,
0.016815185546875,
-0.0014715194702148438,
-0.019622802734375,
-0.035888671875,
-0.006465911865234375,
-0.01187896728515625,
0.007843017578125,
0.0287322998046875,
-0.03448486328125,
-0.038665771484375,
-0.0433959960937... |
bstds/job_titles | 2023-02-14T19:34:23.000Z | [
"region:us"
] | bstds | null | null | 0 | 15 | 2023-02-14T19:31:04 | ---
dataset_info:
features:
- name: id
dtype: string
- name: name
dtype: string
splits:
- name: train
num_bytes: 2451067
num_examples: 73380
download_size: 1258591
dataset_size: 2451067
---
# Dataset Card for "job_titles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Normalized dataset of 70k job titles | 422 | [
[
-0.00830078125,
-0.005245208740234375,
0.00494384765625,
0.001983642578125,
-0.015289306640625,
-0.02001953125,
0.00420379638671875,
-0.0112152099609375,
0.04779052734375,
0.060211181640625,
-0.038360595703125,
-0.072998046875,
-0.054718017578125,
0.00172519... |
koutch/stackoverflow_python | 2023-03-27T15:22:32.000Z | [
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | koutch | null | null | 15 | 15 | 2023-02-20T09:44:08 | ---
dataset_info:
features:
- name: title
dtype: string
- name: question_id
dtype: int64
- name: question_body
dtype: string
- name: question_score
dtype: int64
- name: question_date
dtype: string
- name: answer_id
dtype: int64
- name: answer_body
dtype: string
- name: answer_score
dtype: int64
- name: answer_date
dtype: string
- name: tags
sequence: string
splits:
- name: train
num_bytes: 2142466142
num_examples: 987122
download_size: 829547986
dataset_size: 2142466142
task_categories:
- question-answering
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for "stackoverflow_python"
### Dataset Summary
This dataset comes originally from [kaggle](https://www.kaggle.com/stackoverflow/pythonquestions).
It was originally split into three tables (CSV files: Questions, Answers, and Tags),
which have been merged into a single table. Each row corresponds to a (question, answer)
pair and its associated tags.
The dataset contains all questions asked between August 2, 2008 and October 19, 2016.
### Supported Tasks and Leaderboards
This might be useful for open-domain question-answering tasks.
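Because each row pairs one question with one answer, a question with several answers spans multiple rows. A minimal sketch (pure Python, with made-up sample rows using the field names from the schema above) of keeping only the top-scoring answer per question:

```python
# Illustrative rows mirroring the dataset schema; real rows come from
# load_dataset("koutch/stackoverflow_python") and have more fields.
rows = [
    {"question_id": 1, "answer_id": 10, "answer_score": 3, "answer_body": "Use a list."},
    {"question_id": 1, "answer_id": 11, "answer_score": 7, "answer_body": "Use a dict."},
    {"question_id": 2, "answer_id": 12, "answer_score": 1, "answer_body": "Try numpy."},
]

# Keep only the highest-scoring answer for each question.
best = {}
for row in rows:
    qid = row["question_id"]
    if qid not in best or row["answer_score"] > best[qid]["answer_score"]:
        best[qid] = row

print(best[1]["answer_body"])  # -> Use a dict.
```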
## Additional information
### License
All Stack Overflow user contributions are licensed under CC-BY-SA 3.0 with attribution required. | 1,327 | [
[
-0.03485107421875,
-0.07171630859375,
0.006847381591796875,
0.00742340087890625,
0.006622314453125,
-0.005710601806640625,
0.002288818359375,
-0.0147247314453125,
0.01343536376953125,
0.05126953125,
-0.045654296875,
-0.0304718017578125,
-0.0111236572265625,
... |
HuggingFaceH4/helpful_instructions | 2023-03-27T22:25:58.000Z | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"instruct",
"human-feedback",
"region:us"
] | HuggingFaceH4 | Helpful Instructions is a dataset of (prompt, completion) pairs that are derived from a variety of public datasets. As the name suggests, it focuses on instructions that are "helpful", i.e. the kind of questions or tasks a human user might instruct an AI assistant to perform. | """
_DESCRIPTION = | 7 | 15 | 2023-03-03T10:08:01 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- instruct
- human-feedback
pretty_name: Helpful Instructions
dataset_info:
- config_name: self_instruct
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 24378246
num_examples: 82612
download_size: 12589487
dataset_size: 24378246
- config_name: super_natural_instructions
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 43352923
num_examples: 50000
download_size: 22605900
dataset_size: 43352923
- config_name: prompt_source
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 59843768
num_examples: 52657
download_size: 23607134
dataset_size: 59843768
- config_name: all
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 127574937
num_examples: 185269
download_size: 58901460
dataset_size: 127574937
---
# Dataset Card for Helpful Instructions
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Lewis Tunstall
### Dataset Summary
Helpful Instructions is a dataset of `(instruction, completion)` pairs that are derived from public datasets. As the name suggests, it focuses on instructions that are "helpful", i.e. the kind of questions or tasks a human user might instruct an AI assistant to perform. You can load the dataset as follows:
```python
from datasets import load_dataset
# Load all subsets
helpful_instructions = load_dataset("HuggingFaceH4/helpful_instructions", name="all")
# Load a single subset
helpful_instructions_subset = load_dataset("HuggingFaceH4/helpful_instructions", name="self_instruct")
```
### Supported Tasks and Leaderboards
This dataset can be used to fine-tune pretrained language models to follow instructions.
### Changelog
* March 5, 2023: `v1.0.0` release, with subsets from `HuggingFaceH4/self_instruct` (`self_instruct`, `super_natural_instructions`, `prompt_source`) | 2,621 | [
[
-0.0200347900390625,
-0.04791259765625,
0.0178680419921875,
0.0232696533203125,
-0.0160369873046875,
-0.0297088623046875,
-0.0177001953125,
0.002788543701171875,
0.022918701171875,
0.033050537109375,
-0.0675048828125,
-0.052886962890625,
-0.034271240234375,
... |
bbaaaa/iwslt14-de-en | 2023-04-04T02:05:40.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:translation",
"source_datasets:original",
"language:de",
"language:en",
"license:cc-by-nc-nd-4.0",
"region:us"
] | bbaaaa | The IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system across all directions including English, German, Dutch, Italian and Romanian. As unofficial task, conventional bilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean. | @inproceedings{cettolo-etal-2017-overview,
title = "Overview of the {IWSLT} 2017 Evaluation Campaign",
author = {Cettolo, Mauro and
Federico, Marcello and
Bentivogli, Luisa and
Niehues, Jan and
St{\"u}ker, Sebastian and
Sudoh, Katsuhito and
Yoshino, Koichiro and
Federmann, Christian},
booktitle = "Proceedings of the 14th International Conference on Spoken Language Translation",
month = dec # " 14-15",
year = "2017",
address = "Tokyo, Japan",
publisher = "International Workshop on Spoken Language Translation",
url = "https://aclanthology.org/2017.iwslt-1.1",
pages = "2--14",
} | 0 | 15 | 2023-03-07T07:09:44 | ---
annotations_creators:
- crowdsourced
language:
- de
- en
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: IWSLT 2014
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: iwslt-2014
---
# Dataset Card for IWSLT 2014
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2014](https://sites.google.com/site/iwsltevaluation2014)
dataset_info:
- config_name: de-en
features:
- name: translation
languages:
- de
- en
splits:
- name: train
num_examples: 171721
- name: test
num_examples: 4698
- name: validation
num_examples: 887
| 750 | [
[
-0.045257568359375,
-0.00966644287109375,
0.0124969482421875,
0.046630859375,
-0.03448486328125,
0.01207733154296875,
0.0037288665771484375,
-0.01062774658203125,
-0.00861358642578125,
0.0302581787109375,
-0.072265625,
-0.048583984375,
-0.04815673828125,
0.0... |
katarinagresova/Genomic_Benchmarks_human_enhancers_ensembl | 2023-03-13T19:36:04.000Z | [
"region:us"
] | katarinagresova | null | null | 2 | 15 | 2023-03-13T19:35:47 | ---
dataset_info:
features:
- name: seq
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 34821392
num_examples: 123872
- name: test
num_bytes: 8668172
num_examples: 30970
download_size: 4077057
dataset_size: 43489564
---
# Dataset Card for "Genomic_Benchmarks_human_enhancers_ensembl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 482 | [
[
-0.052703857421875,
-0.023040771484375,
0.0006346702575683594,
0.005523681640625,
-0.00225830078125,
0.0269622802734375,
0.0206451416015625,
-0.01512908935546875,
0.051177978515625,
0.031951904296875,
-0.04620361328125,
-0.04290771484375,
-0.037384033203125,
... |
Dahoas/rl-prompt-dataset | 2023-03-17T14:08:30.000Z | [
"region:us"
] | Dahoas | null | null | 2 | 15 | 2023-03-17T13:57:19 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 331075688.0
num_examples: 201417
- name: test
num_bytes: 7649255
num_examples: 5103
download_size: 206459232
dataset_size: 338724943.0
---
# Dataset Card for "rl-prompt-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 543 | [
[
-0.045806884765625,
-0.0298309326171875,
0.01593017578125,
0.01470184326171875,
-0.01288604736328125,
0.01007080078125,
0.01519775390625,
-0.004436492919921875,
0.0555419921875,
0.028350830078125,
-0.08551025390625,
-0.050201416015625,
-0.0285186767578125,
0... |
bigbio/ggponc2 | 2023-04-05T01:15:05.000Z | [
"multilinguality:monolingual",
"language:de",
"region:us"
] | bigbio | The GGPONC project aims to provide a freely distributable corpus of German medical text for NLP researchers.
Clinical guidelines are particularly suitable to create such corpora, as they contain no protected health information
(PHI), which distinguishes them from other kinds of medical text.
The second version of the corpus (GGPONC 2.0) consists of 30 German oncology guidelines with 1.87 million tokens.
It has been completely manually annotated on the entity level by 7 medical students using the INCEpTION platform over a
time frame of 6 months in more than 1200 hours of work. This makes GGPONC 2.0 the largest annotated, freely
distributable corpus of German medical text at the moment.
Annotated entities are Findings (Diagnosis / Pathology, Other Finding), Substances (Clinical Drug, Nutrients / Body
Substances, External Substances) and Procedures (Therapeutic, Diagnostic), as well as Specifications for these entities.
In total, annotators have created more than 200000 entity annotations. In addition, fragment relationships have been
annotated to explicitly indicate elliptical coordinated noun phrases, a common phenomenon in German text. | @inproceedings{borchert-etal-2022-ggponc,
title = "{GGPONC} 2.0 - The {G}erman Clinical Guideline Corpus for Oncology: Curation Workflow, Annotation Policy, Baseline {NER} Taggers",
author = "Borchert, Florian and
Lohr, Christina and
Modersohn, Luise and
Witt, Jonas and
Langer, Thomas and
Follmann, Markus and
Gietzelt, Matthias and
Arnrich, Bert and
Hahn, Udo and
Schapranow, Matthieu-P.",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.389",
pages = "3650--3660",
} | 4 | 15 | 2023-04-01T16:49:04 | ---
language:
- de
bigbio_language:
- German
multilinguality: monolingual
pretty_name: GGPONC2
homepage: https://www.leitlinienprogramm-onkologie.de/projekte/ggponc-english/
bigbio_pubmed: false
bigbio_public: false
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for GGPONC2
## Dataset Description
- **Homepage:** https://www.leitlinienprogramm-onkologie.de/projekte/ggponc-english/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER
The GGPONC project aims to provide a freely distributable corpus of German medical text for NLP researchers.
Clinical guidelines are particularly suitable to create such corpora, as they contain no protected health information
(PHI), which distinguishes them from other kinds of medical text.
The second version of the corpus (GGPONC 2.0) consists of 30 German oncology guidelines with 1.87 million tokens.
It has been completely manually annotated on the entity level by 7 medical students using the INCEpTION platform over a
time frame of 6 months in more than 1200 hours of work. This makes GGPONC 2.0 the largest annotated, freely
distributable corpus of German medical text at the moment.
Annotated entities are Findings (Diagnosis / Pathology, Other Finding), Substances (Clinical Drug, Nutrients / Body
Substances, External Substances) and Procedures (Therapeutic, Diagnostic), as well as Specifications for these entities.
In total, annotators have created more than 200000 entity annotations. In addition, fragment relationships have been
annotated to explicitly indicate elliptical coordinated noun phrases, a common phenomenon in German text.
## Citation Information
```
@inproceedings{borchert-etal-2022-ggponc,
title = "{GGPONC} 2.0 - The {G}erman Clinical Guideline Corpus for Oncology: Curation Workflow, Annotation Policy, Baseline {NER} Taggers",
author = "Borchert, Florian and
Lohr, Christina and
Modersohn, Luise and
Witt, Jonas and
Langer, Thomas and
Follmann, Markus and
Gietzelt, Matthias and
Arnrich, Bert and
Hahn, Udo and
Schapranow, Matthieu-P.",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.389",
pages = "3650--3660",
}
```
| 2,434 | [
[
-0.02923583984375,
-0.041656494140625,
0.04132080078125,
0.0033740997314453125,
-0.0295562744140625,
-0.039306640625,
-0.041229248046875,
-0.046051025390625,
0.00946807861328125,
0.045806884765625,
-0.0172271728515625,
-0.06646728515625,
-0.06402587890625,
0... |
relbert/analogy_questions_private | 2023-04-02T15:07:46.000Z | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | relbert | [Analogy Question](https://aclanthology.org/2021.acl-long.280/) | @inproceedings{ushio-etal-2021-bert,
title = "{BERT} is to {NLP} what {A}lex{N}et is to {CV}: Can Pre-Trained Language Models Identify Analogies?",
author = "Ushio, Asahi and
Espinosa Anke, Luis and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.280",
doi = "10.18653/v1/2021.acl-long.280",
pages = "3609--3624",
abstract = "Analogies play a central role in human commonsense reasoning. The ability to recognize analogies such as {``}eye is to seeing what ear is to hearing{''}, sometimes referred to as analogical proportions, shape how we structure knowledge and understand language. Surprisingly, however, the task of identifying such analogies has not yet received much attention in the language model era. In this paper, we analyze the capabilities of transformer-based language models on this unsupervised task, using benchmarks obtained from educational settings, as well as more commonly used datasets. We find that off-the-shelf language models can identify analogies to a certain extent, but struggle with abstract and complex relations, and results are highly sensitive to model architecture and hyperparameters. Overall the best results were obtained with GPT-2 and RoBERTa, while configurations using BERT were not able to outperform word embedding models. Our results raise important questions for future work about how, and to what extent, pre-trained language models capture knowledge about abstract semantic relations.",
} | 0 | 15 | 2023-04-01T21:13:15 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: Analogy Question
---
# Dataset Card for "relbert/analogy_questions"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/2021.acl-long.280/](https://aclanthology.org/2021.acl-long.280/)
- **Dataset:** Analogy Questions
### Dataset Summary
This dataset contains 5 different word analogy questions used in [Analogy Language Model](https://aclanthology.org/2021.acl-long.280/).
- original analogy questions
| name | Size (valid/test) | Num of choice | Num of relation group | Original Reference |
|-----------|------------------:|--------------:|----------------------:|:--------------------------------------------------------------------------:|
| `sat_full`| -/374 | 5 | 2 | [Turney (2005)](https://arxiv.org/pdf/cs/0508053.pdf) |
| `sat` | 37/337 | 5 | 2 | [Turney (2005)](https://arxiv.org/pdf/cs/0508053.pdf) |
| `u2` | 24/228 | 5,4,3 | 9 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
| `u4` | 48/432 | 5,4,3 | 5 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
| `google` | 50/500 | 4 | 2 | [Mikolov et al., (2013)](https://www.aclweb.org/anthology/N13-1090.pdf) |
| `bats` | 199/1799 | 4 | 3 | [Gladkova et al., (2016)](https://www.aclweb.org/anthology/N18-2017.pdf) |
- extra analogy questions
| name | Size (valid/test) | Num of choice (valid/test) | Num of relation group (valid/test) | Original Reference |
|:------------------------------------|:--------------------|:-----------------------------|:-------------------------------------|:-----------------------------------------------------------------------------------------------------------------------|
| `semeval2012_relational_similarity` | 79/- | 3/- | 79/- | [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) |
| `t_rex_relational_similarity` | 496/183 | 74/48 | 60/19 | [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) |
| `conceptnet_relational_similarity` | 1112/1192 | 19/17 | 18/16 | [relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity) |
| `nell_relational_similarity` | 400/600 | 5/7 | 4/6 | [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) |
| `scan` | 178/1616 | 3,36,136,10,45,78,15,21,55,120,153,91,28/3,36,136,10,45,78,15,21,55,120,153,91,28 | 2/2 | [relbert/scientific_and_creative_analogy](https://huggingface.co/datasets/relbert/scientific_and_creative_analogy) |
## Dataset Structure
### Data Instances
An example of `test` looks as follows.
```
{
"stem": ["raphael", "painter"],
"answer": 2,
"choice": [["andersen", "plato"],
["reading", "berkshire"],
["marx", "philosopher"],
["tolstoi", "edison"]]
}
```
The `stem` is the query word pair, `choice` contains the candidate word pairs,
and `answer` is the index of the correct candidate, starting from `0`.
All data is lowercased except the Google dataset.
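The instance structure above can be worked with directly as a Python dict. A minimal sketch (using the sample instance shown above; `is_correct` is an illustrative helper, not part of the dataset) of resolving the gold candidate and scoring a prediction:

```python
# Sample instance from the `test` split, as shown above.
instance = {
    "stem": ["raphael", "painter"],
    "answer": 2,
    "choice": [["andersen", "plato"],
               ["reading", "berkshire"],
               ["marx", "philosopher"],
               ["tolstoi", "edison"]],
}

# `answer` is a 0-based index into `choice`.
gold = instance["choice"][instance["answer"]]
print(gold)  # -> ['marx', 'philosopher']

# A predicted choice index is correct iff it matches the gold index.
def is_correct(pred_index, inst):
    return pred_index == inst["answer"]
```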
### Citation Information
```
@inproceedings{ushio-etal-2021-bert-is,
title ={{BERT} is to {NLP} what {A}lex{N}et is to {CV}: {C}an {P}re-{T}rained {L}anguage {M}odels {I}dentify {A}nalogies?},
author={Ushio, Asahi and
Espinosa-Anke, Luis and
Schockaert, Steven and
Camacho-Collados, Jose},
booktitle={Proceedings of the {ACL}-{IJCNLP} 2021 Main Conference},
year={2021},
publisher={Association for Computational Linguistics}
}
```
### LICENSE
The LICENSE of all the resources is [CC-BY-NC-4.0](./LICENSE). Thus, they are freely available for academic purposes or individual research, but restricted for commercial use.
| 4,795 | [
[
-0.04730224609375,
-0.06317138671875,
0.0240020751953125,
0.0045013427734375,
-0.0201568603515625,
-0.017059326171875,
-0.003864288330078125,
-0.026702880859375,
0.054595947265625,
0.022705078125,
-0.047149658203125,
-0.0433349609375,
-0.0220489501953125,
0.... |
pythainlp/thailaw | 2023-05-21T14:34:49.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:th",
"license:cc0-1.0",
"legal",
"region:us"
] | pythainlp | null | null | 3 | 15 | 2023-04-03T16:23:09 | ---
dataset_info:
features:
- name: sysid
dtype: string
- name: title
dtype: string
- name: txt
dtype: string
splits:
- name: train
num_bytes: 825923852
num_examples: 42755
download_size: 190585391
dataset_size: 825923852
license: cc0-1.0
task_categories:
- text-generation
language:
- th
tags:
- legal
size_categories:
- 10K<n<100K
---
# Dataset Card for "thailaw"
## English
Thai Law Dataset (Act of Parliament)
- Data source from Office of the Council of State, Thailand. [https://www.krisdika.go.th/](https://www.krisdika.go.th/)
- This dataset is part of the [PyThaiNLP](https://github.com/PyThaiNLP/) project.
- License: the dataset is in the public domain.
Download: [https://github.com/PyThaiNLP/thai-law/releases](https://github.com/PyThaiNLP/thai-law/releases)
This hub is based on [Thailaw v0.2](https://github.com/PyThaiNLP/thai-law/releases/tag/v0.2).
## Thai
คลังข้อมูลกฎหมายไทย (พระราชบัญญัติ)
- ข้อมูลเก็บรวบรวมมาจากเว็บไซต์สำนักงานคณะกรรมการกฤษฎีกา [https://www.krisdika.go.th/](https://www.krisdika.go.th/)
- โครงการนี้เป็นส่วนหนึ่งในแผนพัฒนา [PyThaiNLP](https://github.com/PyThaiNLP/)
- ข้อมูลที่รวบรวมในคลังข้อความนี้เป็นสาธารณสมบัติ (public domain) ตามพ.ร.บ.ลิขสิทธิ์ พ.ศ. 2537 มาตรา 7 (สิ่งต่อไปนี้ไม่ถือว่าเป็นงานอันมีลิขสิทธิ์ตามพระราชบัญญัตินี้ (1) ข่าวประจำวัน และข้อเท็จจริงต่างๆ ที่มีลักษณะเป็นเพียงข่าวสารอันมิใช่งานในแผนกวรรณคดี แผนกวิทยาศาสตร์ หรือแผนกศิลปะ [...] (3) ระเบียบ ข้อบังคับ ประกาศ คำสั่ง คำชี้แจง และหนังสือตอบโต้ของกระทรวง ทบวง กรม หรือหน่วยงานอื่นใดของรัฐหรือของท้องถิ่น [...])
ดาวน์โหลดได้ที่ [https://github.com/PyThaiNLP/thai-law/releases](https://github.com/PyThaiNLP/thai-law/releases)
Dataset size: 42,755 rows
GitHub: [https://github.com/PyThaiNLP/thai-law/releases/tag/v0.2](https://github.com/PyThaiNLP/thai-law/releases/tag/v0.2) | 1,992 | [
[
-0.00872039794921875,
-0.0198516845703125,
0.01445770263671875,
0.0304107666015625,
-0.0694580078125,
-0.0177459716796875,
-0.005390167236328125,
-0.004116058349609375,
0.019317626953125,
0.055938720703125,
-0.002864837646484375,
-0.04449462890625,
-0.0191955566... |
liuyanchen1015/MULTI_VALUE_sst2_comparative_than | 2023-04-03T19:43:53.000Z | [
"region:us"
] | liuyanchen1015 | null | null | 0 | 15 | 2023-04-03T19:43:48 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 3000
num_examples: 19
- name: test
num_bytes: 5884
num_examples: 38
- name: train
num_bytes: 70824
num_examples: 631
download_size: 34685
dataset_size: 79708
---
# Dataset Card for "MULTI_VALUE_sst2_comparative_than"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 580 | [
[
-0.0258026123046875,
-0.004238128662109375,
0.01340484619140625,
-0.00021135807037353516,
-0.02655029296875,
0.0222320556640625,
0.013397216796875,
-0.0131072998046875,
0.050994873046875,
0.00727081298828125,
-0.042388916015625,
-0.038665771484375,
-0.0478820800... |
climatebert/climate_commitments_actions | 2023-04-18T16:12:44.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | climatebert | null | null | 1 | 15 | 2023-04-11T13:11:49 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: ClimateCommitmentsActions
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'no'
'1': 'yes'
splits:
- name: train
num_bytes: 492077
num_examples: 1000
- name: test
num_bytes: 174265
num_examples: 320
download_size: 373387
dataset_size: 666342
---
# Dataset Card for climate_commitments_actions
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for identifying climate-related paragraphs about climate commitments and actions in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given climate-related paragraph is about climate commitments and actions or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 0
}
```
### Data Fields
- text: a climate-related paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> not talking about climate commitments and actions, 1 -> talking about climate commitments and actions)
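The integer labels map to the class names declared in the `dataset_info` block above (`0` -> `'no'`, `1` -> `'yes'`). A minimal sketch of decoding a label, using a shortened version of the sample instance:

```python
# Class names as declared in the dataset_info metadata above.
label_names = {0: "no", 1: "yes"}

# Shortened illustrative example; real examples come from the dataset itself.
example = {
    "text": "Scope 3: Optional scope that includes indirect emissions ...",
    "label": 0,
}

# A label of 0 means the paragraph is not about climate commitments and actions.
print(label_names[example["label"]])  # -> no
```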
### Data Splits
The dataset is split into:
- train: 1,000
- test: 320
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. | 4,511 | [
[
-0.015716552734375,
-0.02294921875,
0.019805908203125,
0.00970458984375,
-0.0234222412109375,
-0.006694793701171875,
-0.01522064208984375,
-0.040130615234375,
0.0222625732421875,
0.03350830078125,
-0.04736328125,
-0.055419921875,
-0.04425048828125,
-0.000929... |
hackathon-somos-nlp-2023/winogrande_train_s_spanish | 2023-04-14T19:40:59.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:es",
"license:gpl-3.0",
"region:us"
] | hackathon-somos-nlp-2023 | null | null | 3 | 15 | 2023-04-13T17:56:35 | ---
license: gpl-3.0
task_categories:
- text-classification
language:
- es
pretty_name: Winogrande in Spanish
size_categories:
- n<1K
---
This is the Spanish version of Winogrande Small (640 instances) for training only.
The translation was done manually by a group of experts. The dataset will still be improved in the future.
We also acknowledge Somos-NLP for this achievement.
mstz/wine_origin | 2023-04-16T18:06:09.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"wine_origin",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_wine_origin_database_generator_(version_2)_108,
author = {Breiman,L. & Stone,C.J.},
title = {{Waveform Database Generator (Version 2)}},
year = {1988},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C56014}}
} | 0 | 15 | 2023-04-14T16:22:09 | ---
language:
- en
tags:
- wine_origin
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Wine Origin
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- wine_origin
- wine_origin_0
- wine_origin_1
- wine_origin_2
license: cc
---
# Wine Origin
The [Wine Origin dataset](https://archive-beta.ics.uci.edu/dataset/109/wine) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| wine_origin | Multiclass classification.| |
| wine_origin_0 | Binary classification. | Is the instance of class 0? |
| wine_origin_1 | Binary classification. | Is the instance of class 1? |
| wine_origin_2 | Binary classification. | Is the instance of class 2? |
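The three binary configurations are one-vs-rest views of the multiclass labels. Conceptually (a minimal sketch with a hypothetical helper, not part of the dataset loader):

```python
def to_binary(label: int, target_class: int) -> int:
    """One-vs-rest relabeling: 1 if the instance belongs to target_class, else 0."""
    return int(label == target_class)

# wine_origin_1 asks: "Is the instance of class 1?"
multiclass_labels = [0, 1, 2, 1]
binary_labels = [to_binary(y, target_class=1) for y in multiclass_labels]
```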
ChristophSchuhmann/wikipedia-en-nov22-1-sentence-level | 2023-04-19T06:01:38.000Z | [
"region:us"
] | ChristophSchuhmann | null | null | 2 | 15 | 2023-04-19T05:31:35 | Entry not found | 15 |
bjoernp/tagesschau-2018-2023 | 2023-04-27T09:04:08.000Z | [
"size_categories:10K<n<100K",
"language:de",
"region:us"
] | bjoernp | null | null | 4 | 15 | 2023-04-27T07:49:50 | ---
dataset_info:
features:
- name: date
dtype: string
- name: headline
dtype: string
- name: short_headline
dtype: string
- name: short_text
dtype: string
- name: article
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 107545823
num_examples: 21847
download_size: 63956047
dataset_size: 107545823
language:
- de
size_categories:
- 10K<n<100K
---
# Tagesschau Archive Article Dataset
A scrape of Tagesschau.de articles from 01.01.2018 to 26.04.2023. Find all source code in [github.com/bjoernpl/tagesschau](https://github.com/bjoernpl/tagesschau).
## Dataset Information
CSV structure:
| Field | Description |
| --- | --- |
| `date` | Date of the article |
| `headline` | Title of the article |
| `short_headline` | A short headline / Context |
| `short_text` | A brief summary of the article |
| `article` | The full text of the article |
| `link` | The link to the article on tagesschau.de |
Size:
The final dataset (2018-today) contains 225202 articles from 1942 days. Of these articles only
21848 are unique (Tagesschau often keeps articles in circulation for ~1 month). The total download
size is ~65MB.
Cleaning:
- Duplicate articles are removed
- Articles with empty text are removed
- Articles with empty short_texts are removed
- Articles, headlines and short_headlines are stripped of leading and trailing whitespace
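A minimal sketch of these cleaning steps on in-memory records (the actual implementation lives in `clean.py`; the helper name here is hypothetical):

```python
def clean_articles(rows):
    """Apply the cleaning steps listed above to a list of article dicts."""
    seen = set()
    cleaned = []
    for row in rows:
        # Strip leading/trailing whitespace from article, headline, short_headline.
        for key in ("article", "headline", "short_headline"):
            row[key] = row[key].strip()
        # Drop articles with empty text or empty short_text.
        if not row["article"] or not row["short_text"]:
            continue
        # Remove duplicate articles (Tagesschau recirculates them for ~1 month).
        if row["article"] in seen:
            continue
        seen.add(row["article"])
        cleaned.append(row)
    return cleaned
```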
More details in [`clean.py`](https://github.com/bjoernpl/tagesschau/blob/main/clean.py).
Snit/french-conversation | 2023-04-29T06:41:29.000Z | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:fr",
"license:cc-by-4.0",
"french",
"region:us"
] | Snit | null | null | 4 | 15 | 2023-04-29T05:51:26 | ---
license: cc-by-4.0
language:
- fr
tags:
- french
task_categories:
- conversational
size_categories:
- 1K<n<10K
---
+15 hours of speech data from TTS and text file recording.
+9k utterances from various sources: novels, parliamentary debates, and professional language.
ChristophSchuhmann/1-sentence-level-gutenberg-en_arxiv_pubmed_soda | 2023-04-30T09:30:25.000Z | [
"region:us"
] | ChristophSchuhmann | null | null | 0 | 15 | 2023-04-30T09:07:33 | Entry not found | 15 |
mehnaazasad/arxiv_astro_co_ga | 2023-05-10T02:47:29.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"arxiv:1905.00075",
"region:us"
] | mehnaazasad | null | null | 0 | 15 | 2023-05-10T01:54:30 | ---
license: mit
task_categories:
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
# Dataset Card for `arxiv_astro_co_ga`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a dataset consisting of titles and abstracts for all Cosmology and Galaxy Astrophysics arXiv articles to date (99,659 papers).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
```
{'title': 'Probing cluster formation under extreme conditions: massive star clusters in blue compact galaxies',
'abstract': ' The numerous and massive young star clusters in blue compact galaxies (BCGs) are used to investigate the properties of their hosts. We test whether BCGs follow claimed relations between cluster populations and their hosts, such as the the fraction of the total luminosity contributed by the clusters as function of the mean star formation rate density; the $V$ band luminosity of the brightest youngest cluster as related to the mean host star formation rate; and the cluster formation efficiency (i.e., the fraction of star formation happening in star clusters) versus the density of the SFR. We find that BCGs follow the trends, supporting a scenario where cluster formation and environmental properties of the host are correlated. They occupy, in all the diagrams, the regions of higher SFRs, as expected by the extreme nature of the starbursts operating in these systems. We find that the star clusters contribute almost to the 20 % of the UV luminosity of the hosts. We suggest that the BCG starburst environment has most likely favoured the compression and collapse of the giant molecular clouds, enhancing the local star formation efficiency, so that massive clusters have been formed. The estimated cluster formation efficiency supports this scenario. BCGs have a cluster formation efficiency comparable to luminous IR galaxies and spiral starburst nuclei (the averaged value is about 35 %) which is much higher than the 8 - 10 % reported for quiescent spirals and dwarf star-forming galaxies. '
}
```
### Data Fields
- `title`: Title of the paper
- `abstract`: The abstract of the paper
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for these splits.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 79,727 |
| Validation | 9966 |
| Test | 9966 |
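The split sizes are consistent with an 80/10/10 partition in which validation and test share the remainder equally; a sketch of the arithmetic, assuming that scheme:

```python
def split_sizes(n_total: int, train_frac: float = 0.8):
    """Return (train, validation, test) sizes for an 80/10/10 split."""
    n_train = round(n_total * train_frac)
    remainder = n_total - n_train
    n_valid = remainder // 2
    n_test = remainder - n_valid
    return n_train, n_valid, n_test

split_sizes(99_659)  # -> (79727, 9966, 9966), matching the table above
```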
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The original dataset from which this subset was constructed can be found here: [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv).
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Various authors.
### Annotations
This dataset contains no annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No author information included in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The original data is maintained by arXiv; huge thanks to the team for building and maintaining that dataset.
### Licensing Information
The arxiv_astro_co_ga dataset version 1.0.0 is released under the [MIT License](https://mitsloan.mit.edu/licensing).
### Citation Information
```
@misc{clement2019arxiv,
title={On the Use of ArXiv as a Dataset},
author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
year={2019},
eprint={1905.00075},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
[More Information Needed]
lexlms/legal_lama | 2023-07-24T13:13:15.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:cc-by-nc-sa-4.0",
"... | lexlms | LegalLAMA: Legal LAnguage Model Analysis (LAMA) (LAMA) dataset. | @inproceedings{chalkidis-garneau-etal-2023-lexlms,
title = {{LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development}},
author = "Chalkidis*, Ilias and
Garneau*, Nicolas and
Goanta, Catalina and
Katz, Daniel Martin and
Søgaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
    month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/xxx",
} | 6 | 15 | 2023-05-10T16:07:14 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- text-generation
- fill-mask
task_ids:
- masked-language-modeling
pretty_name: LegalLAMA
tags:
- legal
- law
---
# Dataset Card for "LegalLAMA"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Specifications](#supported-tasks-and-leaderboards)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lexlms
- **Repository:** https://github.com/coastalcph/lexlms
- **Paper:** https://arxiv.org/abs/2305.07507
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
LegalLAMA is a diverse probing benchmark suite comprising 8 sub-tasks that aims to assess the legal knowledge that PLMs acquire during pre-training.
### Dataset Specifications
| Corpus | Corpus alias | Examples | Avg. Tokens | Labels |
|--------------------------------------|----------------------|-----------|-------------|--------|
| Criminal Code Sections (Canada) | `canadian_sections` | 321 | 72 | 144 |
| Legal Terminology (EU) | `cjeu_term` | 2,127 | 164 | 23 |
| Contractual Section Titles (US) | `contract_sections` | 1,527 | 85 | 20 |
| Contract Types (US) | `contract_types` | 1,089 | 150 | 15 |
| ECHR Articles (CoE) | `ecthr_articles` | 5,072 | 69 | 13 |
| Legal Terminology (CoE) | `ecthr_terms` | 6,803 | 97 | 250 |
| Crime Charges (US) | `us_crimes` | 4,518 | 118 | 59 |
| Legal Terminology (US) | `us_terms` | 5,829 | 308 | 7 |
### Usage
Load a specific sub-corpus, given the corpus alias, as presented above.
```python
from datasets import load_dataset
dataset = load_dataset('lexlms/legal_lama', name='ecthr_terms')
```
### Citation
[*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://aclanthology.org/2023.acl-long.865/)
```
@inproceedings{chalkidis-etal-2023-lexfiles,
title = "{L}e{XF}iles and {L}egal{LAMA}: Facilitating {E}nglish Multinational Legal Language Model Development",
author = "Chalkidis, Ilias and
Garneau, Nicolas and
Goanta, Catalina and
Katz, Daniel and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.865",
pages = "15513--15535",
}
```
VMware/open-instruct-v1-oasst-dolly-hhrlhf | 2023-07-13T14:21:14.000Z | [
"language:en",
"region:us"
] | VMware | null | null | 15 | 15 | 2023-05-10T23:36:12 | ---
language: en
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: alpaca_prompt
dtype: string
- name: response
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 60252132
num_examples: 62971
download_size: 33232110
dataset_size: 60252132
---
# Dataset Card for "open-instruct-v1-oasst-dolly-hhrlhf"
This dataset is a combination of:
1. Filtered subset of [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
2. train split of [Mosaic-dolly-hhrlhf](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) (consists of [Databricks' dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset and a filtered subset of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf)).
## Dataset
The dataset consists of 3 columns:
1. instruction: The natural language instruction without any prompt templates (we extracted them out of the alpaca-format in Mosaic-dolly-hhrlhf)
2. alpaca_prompt: Alpaca prompt template versions of instruction
3. response: The response to the instruction
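The `alpaca_prompt` column wraps `instruction` in a prompt template; the standard Alpaca template looks roughly like the sketch below (an illustration — the exact template used in the source datasets may differ):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def to_alpaca_prompt(instruction: str) -> str:
    """Wrap a bare instruction in the Alpaca-style prompt template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)
```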
## License
- It is usable for commercial purposes so long as you follow the terms of the license.
- Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
- Wikipedia (various pages) - https://www.wikipedia.org/
- Copyright © Wikipedia editors and contributors.
- Databricks (https://www.databricks.com)
- Copyright © Databricks
- Mosaic ML (https://www.mosaicml.com/)
- Copyright © Mosaic ML
- VMware
- Copyright © VMware
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lighteval/summarization | 2023-05-12T08:52:49.000Z | [
"region:us"
] | lighteval | Scenario for single document text summarization.
Currently supports the following datasets:
1. XSum (https://arxiv.org/pdf/1808.08745.pdf)
2. CNN/DailyMail non-anonymized (https://arxiv.org/pdf/1704.04368.pdf)
Task prompt structure
Summarize the given document.
Document: {tok_1 ... tok_n}
Summary: {tok_1 ... tok_m}
Example from XSum dataset
Document: {Part of the Broad Road was closed to traffic on Sunday at about 18:00 GMT.
The three adults and three children have been taken to Altnagelvin Hospital
with non life-threatening injuries. The Fire Service, Northern Ireland Ambulance Service
and police attended the crash. The Broad Road has since been reopened.}
Summary: {Three adults and three children have been taken to hospital following a crash involving
a tractor and a campervan in Limavady, County Londonderry} | null | 2 | 15 | 2023-05-12T08:33:56 | Entry not found | 15 |
Nan-Do/code-search-net-java | 2023-05-15T00:57:06.000Z | [
"task_categories:text2text-generation",
"task_categories:summarization",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"code",
"java",
"CodeSearchNet",
"summary",
"region:us"
] | Nan-Do | null | null | 3 | 15 | 2023-05-13T02:03:07 | ---
dataset_info:
features:
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
- name: partition
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1595060592
num_examples: 495953
download_size: 440273784
dataset_size: 1595060592
license: apache-2.0
task_categories:
- text2text-generation
- summarization
- text-generation
language:
- en
tags:
- code
- java
- CodeSearchNet
- summary
pretty_name: Java CodeSearchNet with Summaries
---
# Dataset Card for "code-search-net-java"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-Java
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the Java portion of CodeSearchNet, annotated with a summary column.
The CodeSearchNet dataset includes open-source functions with accompanying comments, collected from GitHub.
The summary is a short description of what the function does.
### Languages
The dataset's comments are in English and the functions are coded in Java
### Data Splits
Train, test, validation labels are included in the dataset as a column.
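Since the split labels live in a `partition` column rather than in separate dataset splits, the subsets can be materialized by grouping on that column; a minimal sketch on plain dicts (with the `datasets` library you would use `Dataset.filter` instead):

```python
from collections import defaultdict

def group_by_partition(rows):
    """Group examples by their 'partition' value (e.g. train/valid/test)."""
    splits = defaultdict(list)
    for row in rows:
        splits[row["partition"]].append(row)
    return dict(splits)
```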
## Dataset Creation
May of 2023
### Curation Rationale
This dataset can be used to generate instructional (or many other interesting) datasets that are useful to train LLMs.
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This datasets include a summary column including a short description of the function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to ensure there are no repetitions and/or meaningless summaries (some may still be present in the dataset).
### Licensing Information
Apache 2.0
tasksource/nlgraph | 2023-05-23T07:36:04.000Z | [
"region:us"
] | tasksource | null | null | 0 | 15 | 2023-05-23T07:34:04 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: difficulty
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 6593495
num_examples: 5022
- name: test
num_bytes: 1270263
num_examples: 1000
download_size: 1448275
dataset_size: 7863758
---
# Dataset Card for "nlgraph"
```bib
@article{wang2023can,
title={Can Language Models Solve Graph Problems in Natural Language?},
author={Wang, Heng and Feng, Shangbin and He, Tianxing and Tan, Zhaoxuan and Han, Xiaochuang and Tsvetkov, Yulia},
journal={arXiv preprint arXiv:2305.10037},
year={2023}
}
```
Brand24/mms | 2023-08-23T21:49:55.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:mixed",
"multilinguality:multi-lingual",
"size_categories:1M<n<10M",
"language:ar",
"language:bg",
"language:bs",
"language:cs",
"language:de",
"language:el",
"language:en",
"language:es",
"la... | Brand24 | This work presents the most extensive open massively multi-lingual corpus of datasets for training sentiment models.
The corpus consists of 79 manually selected from over 350 datasets reported in the scientific literature based on strict quality criteria and covers 25 languages.
Datasets can be queried using several linguistic and functional features.
In addition, we present a multi-faceted sentiment classification benchmark summarizing hundreds of experiments conducted on different base models, training objectives, dataset collections, and fine-tuning strategies. | @misc{augustyniak2023massively,
title={Massively Multilingual Corpus of Sentiment Datasets and Multi-faceted Sentiment Classification Benchmark},
author={Łukasz Augustyniak and Szymon Woźniak and Marcin Gruza and Piotr Gramacki and Krzysztof Rajda and Mikołaj Morzy and Tomasz Kajdanowicz},
year={2023},
eprint={2306.07902},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 2 | 15 | 2023-05-24T12:07:06 | ---
annotations_creators:
- mixed
language:
- ar
- bg
- bs
- cs
- de
- el
- en
- es
- fa
- fr
- he
- hi
- hr
- hu
- it
- ja
- lv
- pl
- pt
- ru
- sk
- sl
- sq
- sr
- sv
- th
- ur
- zh
license:
- other
multilinguality:
- multi-lingual
size_categories:
- 1M<n<10M
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Massive-Multilingual-Sentiment
---
# Massive Multilingual Sentiment Corpora (MMS)
## Corpora Summary
Despite impressive advancements in multilingual corpora collection and model training, developing large-scale deployments of multilingual models still presents a significant challenge. This is particularly true for language tasks that are culture-dependent. One such example is the area of multilingual sentiment analysis, where affective markers can be subtle and deeply ensconced in culture.
This work presents the most extensive open massively multilingual corpus of datasets for training sentiment models. The corpus consists of 79 datasets, manually selected from over 350 datasets reported in the scientific literature based on strict quality criteria, and covers 27 languages. Datasets can be queried using several linguistic and functional features. In addition, we present a multi-faceted sentiment classification benchmark summarizing hundreds of experiments conducted on different base models, training objectives, dataset collections, and fine-tuning strategies.
More about the dataset here: [https://brand24-ai.github.io/mms_benchmark](https://brand24-ai.github.io/mms_benchmark).
## General licenses information
This is a library of the open-sourced datasets that we gathered. We provide citations or links to the sources of these datasets. It is essential to mention that these datasets may have different licenses, and we encourage everybody to check the permissions of each dataset separately. This is critical because, for example, not all datasets are available for commercial purposes. Checking ensures that proper consent and permissions are obtained for the use and curation of the data, respecting the rights and privacy of the individuals whose data is included in the datasets. Please cite our library and the authors of each dataset you want to use.
## Usage
```python
import datasets
# whole dataset will be downloaded and cached
mms_dataset = datasets.load_dataset("Brand24/mms")
# filter only texts in Polish
pl = mms_dataset.filter(lambda row: row['language'] == 'pl')
```
## Corpora statistics
### Per language
| language | label_name | count |
|:-----------|:-------------|--------:|
| ar | negative | 138899 |
| ar | neutral | 192774 |
| ar | positive | 600402 |
| bg | negative | 13930 |
| bg | neutral | 28657 |
| bg | positive | 19563 |
| bs | negative | 11974 |
| bs | neutral | 11145 |
| bs | positive | 13064 |
| cs | negative | 39674 |
| cs | neutral | 59200 |
| cs | positive | 97413 |
| de | negative | 104667 |
| de | neutral | 100071 |
| de | positive | 111149 |
| el | negative | 230 |
| el | neutral | 38 |
| el | positive | 232 |
| en | negative | 304939 |
| en | neutral | 290823 |
| en | positive | 1734724 |
| es | negative | 108733 |
| es | neutral | 122493 |
| es | positive | 187486 |
| fa | negative | 1602 |
| fa | neutral | 5091 |
| fa | positive | 6832 |
| fr | negative | 84187 |
| fr | neutral | 43245 |
| fr | positive | 83199 |
| he | negative | 2279 |
| he | neutral | 243 |
| he | positive | 6097 |
| hi | negative | 4992 |
| hi | neutral | 6392 |
| hi | positive | 5615 |
| hr | negative | 19757 |
| hr | neutral | 19470 |
| hr | positive | 38367 |
| hu | negative | 8974 |
| hu | neutral | 17621 |
| hu | positive | 30087 |
| it | negative | 4043 |
| it | neutral | 4193 |
| it | positive | 3829 |
| ja | negative | 83982 |
| ja | neutral | 41979 |
| ja | positive | 83819 |
| lv | negative | 1378 |
| lv | neutral | 2618 |
| lv | positive | 1794 |
| pl | negative | 77422 |
| pl | neutral | 62074 |
| pl | positive | 97192 |
| pt | negative | 56827 |
| pt | neutral | 55165 |
| pt | positive | 45842 |
| ru | negative | 31770 |
| ru | neutral | 48106 |
| ru | positive | 31054 |
| sk | negative | 14431 |
| sk | neutral | 12842 |
| sk | positive | 29350 |
| sl | negative | 33694 |
| sl | neutral | 50553 |
| sl | positive | 29296 |
| sq | negative | 6889 |
| sq | neutral | 14757 |
| sq | positive | 22638 |
| sr | negative | 25089 |
| sr | neutral | 32283 |
| sr | positive | 18996 |
| sv | negative | 16266 |
| sv | neutral | 13342 |
| sv | positive | 11738 |
| th | negative | 9326 |
| th | neutral | 28616 |
| th | positive | 34377 |
| ur | negative | 5239 |
| ur | neutral | 8585 |
| ur | positive | 5836 |
| zh | negative | 117967 |
| zh | neutral | 69016 |
| zh | positive | 144719 |
## Dataset Structure
### Linguistic Typology
The field of language typology focuses on studying the similarities and differences among languages. These differences can be categorized into phonological (sounds), syntactic (structures), lexical (vocabulary), and theoretical aspects. Linguistic typology analyzes the current state of languages, contrasting with genealogical linguistics, which examines historical relationships between languages.
Genealogical linguistics studies language families and genera. A language family consists of languages that share a common ancestral language, while genera are branches within a language family. The Indo-European family, for example, includes genera such as Slavic, Romance, Germanic, and Indic. Over 7000 languages are categorized into approximately 150 language families, with Indo-European, Sino-Tibetan, Turkic, Afro-Asiatic, Nilo-Saharan, Niger-Congo, and Eskimo-Aleut being some of the largest families.
Within linguistic typology, languages are described using various linguistic features. Our work focuses on sentiment classification and selects ten relevant features:
- `text`: The feature text represents the actual text of the sentiment dataset. It is of type string and contains the text samples or sentences for sentiment analysis.
- `label`: The feature label corresponds to the sentiment labels of the text samples. It is of type ClassLabel and has three possible values: negative, neutral, and positive. These labels indicate the sentiment or emotional polarity associated with the text.
- `original_dataset`: The feature original_dataset refers to the name or identifier of the original dataset from which the text samples were extracted. It is of type string and provides information about the source dataset.
- `domain`: The feature domain represents the domain or topic of the sentiment dataset. It is of type string and provides context regarding the subject matter of the text samples.
- `language`: The feature language indicates the language of the text samples in the sentiment dataset. It is of type string and specifies the language in which the text is written.
- `Family`: The feature Family represents the language family to which a specific language belongs. It is of type string and provides information about the broader categorization of languages into language families.
- `Genus`: The feature Genus corresponds to the genus or branch within a language family. It is of type string and indicates the specific subgrouping of languages within a language family.
- `Definite article`: Half of the languages do not use the definite article, which signals uniqueness or definiteness of a concept.
- `Indefinite article`: Half of the languages do not use the indefinite article, with some languages using a separate article or the numeral "one."
- `Number of cases`: Languages vary greatly in the number of morphological cases used.
- `Order of subject, verb, and object`: Different languages have different word orderings, with variations like SOV, SVO, VSO, VOS, OVS, and OSV.
- `Negative morphemes`: Negative morphemes indicate clausal negation in declarative sentences.
- `Polar questions`: Questions with yes/no answers, which can be formed using question particles, interrogative morphology, or intonation.
- `Position of the negative morpheme`: The position of the negative morpheme can vary in relation to subjects and objects.
- `Prefixing vs. suffixing`: Languages differ in their use of prefixes and suffixes in inflectional morphology.
- `Coding of nominal plurals`: Plurals can be expressed through morphological changes or the use of plurality indicator morphemes.
- `Grammatical genders`: Languages vary in the number of grammatical genders used, or may not use the concept at all.
These language features are available as filtering options in our library. Users can download specific facets of the collection, such as datasets in Slavic languages with interrogative word order for polar questions or datasets from the Afro-Asiatic language family without morphological case-making.
### Usage
Code example for loading the corpus and filtering for Slavic languages in which polar questions are formed using interrogative word order:
```python
import datasets
mms_dataset = datasets.load_dataset("Brand24/mms")
slavic = mms_dataset.filter(lambda row: row["Genus"] == "Slavic" and row["Polar questions"] == "interrogative word order")
```
Filtering sentiment datasets from the Afro-Asiatic language family without morphological case-making
```python
afro_asiatic = mms_dataset.filter(lambda row: row["Family"] == "Afro-Asiatic" and row["Number of cases"] == "no morphological case-making")
```
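The filter predicates above operate row by row, so the same logic can be checked on plain dictionaries before running a full dataset pass. A minimal sketch (the toy rows below are hypothetical, mirroring the schema described earlier):

```python
from collections import Counter

# Hypothetical rows mirroring the documented schema (text fields omitted).
rows = [
    {"language": "pl", "Genus": "Slavic", "label": "positive"},
    {"language": "cs", "Genus": "Slavic", "label": "negative"},
    {"language": "ar", "Family": "Afro-Asiatic", "label": "neutral"},
]

# Same predicate shape as the `filter` calls above, applied to plain dicts.
slavic = [r for r in rows if r.get("Genus") == "Slavic"]
label_counts = Counter(r["label"] for r in slavic)
print(label_counts)  # per-label counts within the filtered subset
```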
## Dataset Creation
### Who are the source language producers?
The data comes from multiple papers and covers a large variety of languages. For the specific dataset information, please check out the companion paper.
### Annotations
As with the data producers, please check the papers that propose the specific datasets you are interested in.
#### Annotation process
We describe the annotation process of the dataset we created internally for this corpus.
## Considerations for Using the Data
### Social Impact and Limitations
The corpus is intended to bring more sentiment-annotated data to a wide variety of languages. Its aim is to make large amounts of data available for lower-resource languages in order to facilitate the training of state-of-the-art ML models for sentiment analysis.
## Additional Information
### Dataset Curators
The corpus was put together by
- [@laugustyniak](https://www.linkedin.com/in/lukaszaugustyniak/)
- [@swozniak](https://www.linkedin.com/in/wscode/)
- [@mgruza](https://www.linkedin.com/in/marcin-gruza-276b2512b/)
- [@pgramacki](https://www.linkedin.com/in/piotrgramacki/)
- [@krajda](https://www.linkedin.com/in/krzysztof-rajda/)
- [@mmorzy](https://www.linkedin.com/in/mikolajmorzy/)
- [@tkajdanowicz](https://www.linkedin.com/in/kajdanowicz/)
### Licensing Information
These data are released under the following licensing scheme.
We do not own any text from which these data and datasets have been extracted.
We license the actual packaging of these data under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
This work is published from Poland.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself with detailed contact data such as an address, telephone number, or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material claimed to be infringing and the information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
### The main corpus citation
```bibtex
@misc{augustyniak2023massively,
title={Massively Multilingual Corpus of Sentiment Datasets and Multi-faceted Sentiment Classification Benchmark},
author={Łukasz Augustyniak and Szymon Woźniak and Marcin Gruza and Piotr Gramacki and Krzysztof Rajda and Mikołaj Morzy and Tomasz Kajdanowicz},
year={2023},
eprint={2306.07902},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### All datasets in corpus
[https://brand24-ai.github.io/mms_benchmark/citations.html](https://brand24-ai.github.io/mms_benchmark/citations.html)
## Acknowledgements
- BRAND24 - https://brand24.com
- CLARIN-PL-Biz - https://clarin.biz
tasksource/tasksource-instruct-v0 | 2023-06-12T15:14:23.000Z | [
"task_categories:text2text-generation",
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:zero-shot-classification",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"... | tasksource | null | null | 16 | 15 | 2023-05-24T14:14:56 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 2839591299.0
num_examples: 4894553
- name: test
num_bytes: 97972920.0
num_examples: 151829
- name: validation
num_bytes: 96766748.0
num_examples: 148634
download_size: 1631162334
dataset_size: 3034330967.0
license: apache-2.0
task_categories:
- text2text-generation
- conversational
- text-generation
- text-classification
- token-classification
- zero-shot-classification
language:
- en
tags:
- instructions
- instruction-tuning
- instruction-finetuning
- flan
- promptsource
- tasksource
pretty_name: tasksource-instruct
size_categories:
- 1M<n<10M
---
# Dataset Card for "tasksource-instruct-v0" (TSI)
Multi-task instruction-tuning data recast from 485 of the [tasksource](https://github.com/sileod/tasksource) datasets.
Dataset size is capped at 30k examples per task to foster task diversity.
```python
!pip install tasksource pandit
import tasksource, pandit

# List instruct-compatible tasks, excluding MMLU
df = tasksource.list_tasks(instruct=True).sieve(id=lambda x: 'mmlu' not in x)
for task in df.id:
    dataset = tasksource.load_task(task, instruct=True, max_rows=30_000, max_rows_eval=200)
```
https://github.com/sileod/tasksource
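The 30k-per-task cap mentioned above can be sketched in plain Python; `cap_per_task` below is a hypothetical helper, not part of the tasksource API:

```python
def cap_per_task(rows, max_rows=30_000):
    """Yield rows, keeping at most `max_rows` examples per task."""
    seen = {}
    for row in rows:
        n = seen.get(row["task"], 0)
        if n < max_rows:
            seen[row["task"]] = n + 1
            yield row

# Toy check: 5 'nli' rows capped at 3, plus 2 'qa' rows kept in full.
rows = [{"task": "nli"}] * 5 + [{"task": "qa"}] * 2
capped = list(cap_per_task(rows, max_rows=3))
print(len(capped))  # 5
```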
## How it differs from flan-v2
TSI is HuggingFace-centric and based on tasksource, a curated collection of HF datasets. It can be scaled to many more examples.
tasksource focuses on discriminative tasks (Classification/TokenClassification/MultipleChoice), and its coverage of discriminative tasks is greater than Flan's.
List of tasks [here](https://github.com/sileod/tasksource/blob/main/tasks.md). Examples of tasks not in Flan V2 include Dynasent (adversarial sentiment analysis), Dynahate (adversarial hate speech detection), discriminative bAbI, epistemic logic, RuleTaker, veridicality, discourse relation prediction, and dozens of interesting natural language inference datasets...
TSI answers are mostly short answers to multiple-choice questions, but they target a wide array of problems.
TSI is reasoning-intensive, while some flan tasks are not necessarily well-specified (e.g., generating a hypothesis based on a premise for NLI).
We explicitly mention that answers should not have explanations, to prevent biasing models toward short answers when using other instruction datasets.
`flan-v2` and `tasksource-instruct` can be combined to improve the reasoning capabilities of LLMs.
## Contact and citation:
damien.sileo@inria.fr
https://arxiv.org/abs/2301.05948
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` | 2,852 | [
[
-0.0138397216796875,
-0.048980712890625,
0.0236358642578125,
0.011444091796875,
0.01085662841796875,
-0.0164947509765625,
-0.028228759765625,
-0.0194091796875,
-0.008148193359375,
0.034027099609375,
-0.0692138671875,
-0.031402587890625,
-0.04046630859375,
0.... |
xmj2002/Chinese_modern_classical | 2023-05-30T06:26:32.000Z | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"region:us"
] | xmj2002 | null | null | 5 | 15 | 2023-05-28T02:14:34 | ---
dataset_info:
features:
- name: info
dtype: string
- name: modern
dtype: string
- name: classical
dtype: string
splits:
- name: train
num_bytes: 209412286
num_examples: 972467
download_size: 123454543
dataset_size: 209412286
license: apache-2.0
task_categories:
- translation
language:
- zh
size_categories:
- 100K<n<1M
---
# Dataset Card for "Chinese_modern_classical"
The data comes from [NiuTrans/Classical-Modern: a comprehensive Classical Chinese (ancient text)–Modern Chinese parallel corpus (github.com)](https://github.com/NiuTrans/Classical-Modern).
Because some of the Classical Chinese passages in the original data have no modern translation, this dataset only includes the [bilingual data](https://github.com/NiuTrans/Classical-Modern/tree/main/双语数据).
Someman/hindi-summarization | 2023-05-30T12:55:13.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:hi",
"license:mit",
"region:us"
] | Someman | null | null | 0 | 15 | 2023-05-30T12:39:11 | ---
license: mit
task_categories:
- summarization
language: hi
original_source: >-
https://www.kaggle.com/datasets/disisbig/hindi-text-short-and-large-summarization-corpus
dataset_info:
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: article
dtype: string
splits:
- name: train
num_bytes: 410722079.5542422
num_examples: 55226
- name: test
num_bytes: 102684238.44575782
num_examples: 13807
- name: valid
num_bytes: 128376473
num_examples: 17265
download_size: 150571314
dataset_size: 641782791
pretty_name: hindi summarization
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- Homepage: https://www.kaggle.com/datasets/disisbig/hindi-text-short-and-large-summarization-corpus?select=test.csv
### Dataset Summary
Hindi Text Short and Large Summarization Corpus is a collection of ~180k articles with their headlines and summaries, collected from Hindi news websites.
This is a first-of-its-kind dataset in Hindi that can be used to benchmark models for text summarization in Hindi. It does not contain the articles included in the Hindi Text Short Summarization Corpus, which is being released in parallel with this dataset.
The dataset retains the original punctuation, numbers, etc. in the articles.
### Languages
The language is Hindi.
### Licensing Information
MIT
### Citation Information
https://www.kaggle.com/datasets/disisbig/hindi-text-short-and-large-summarization-corpus?select=test.csv
### Contributions
tianyang/repobench-r | 2023-06-17T03:06:46.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:code",
"license:cc-by-nc-nd-4.0",
"arxiv:2306.03091",
"region:us"
] | tianyang | RepoBench is a dataset that benchmarks repository-level code auto-completion systems.
RepoBench-R denotes RepoBench for Retrieval, which is a sub-task of RepoBench,
aiming to evaluate the ability of code auto-completion systems to retrieve
relevant code snippets for next-line code completion. | @misc{liu2023repobench,
title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
author={Tianyang Liu and Canwen Xu and Julian McAuley},
year={2023},
eprint={2306.03091},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 15 | 2023-06-06T00:52:55 | ---
language_creators:
- found
language:
- code
license:
- cc-by-nc-nd-4.0
multilinguality:
- multilingual
pretty_name: RepoBench-Retrieval
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# Dataset Card for RepoBench-R
## Dataset Description
- **Homepage:** https://github.com/Leolty/repobench
- **Paper:** https://arxiv.org/abs/2306.03091
## Dataset Summary
**RepoBench-R (Retrieval)** is a subtask of **RepoBench**([GitHub](https://github.com/Leolty/repobench), [arXiv](https://arxiv.org/abs/2306.03091)), targeting the retrieval component of a repository-level auto-completion system, focusing on retrieving the most relevant code snippet from a project repository for next-line
code prediction.
## Settings
- `cff`: short for cross_file_first, indicating that the cross-file module in the next line is used for the first time in the current file.
- `cfr`: short for cross_file_random, indicating that the cross-file module in the next line is not used for the first time in the current file.
## Supported Tasks
The dataset has 4 subsets:
- `python_cff`: python dataset with `cff` setting.
- `python_cfr`: python dataset with `cfr` setting.
- `java_cff`: java dataset with `cff` setting.
- `java_cfr`: java dataset with `cfr` setting.
Each subset has 4 splits:
- `train_easy`: training set with easy difficulty, where the number of code snippets in the context \\(k\\) satisfies \\( 5 \leq k < 10 \\).
- `train_hard`: training set with hard difficulty, where the number of code snippets in the context \\(k\\) satisfies \\( k \geq 10 \\).
- `test_easy`: testing set with easy difficulty.
- `test_hard`: testing set with hard difficulty.
## Loading Data
For example, if you want to load the `test` `cross_file_first` `python` dataset with `easy` difficulty, you can use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("tianyang/repobench-r", "python_cff", split="test_easy")
```
> Note: The `split` argument is optional. If not provided, the entire dataset (train and test data at both easy and hard levels) will be loaded.
## Dataset Structure
```json
{
"repo_name": "repository name of the data point",
"file_path": "path/to/file",
"context": [
"snippet 1",
"snippet 2",
// ...
"snippet k"
],
"import_statement": "all import statements in the file",
"gold_snippet_idex": 2, // the index of the gold snippet in the context list, 0~k-1
"code": "the code for next-line prediction",
"next_line": "the next line of the code"
}
```
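Given the fields above, a retriever can be scored by how often it selects the gold snippet. A minimal accuracy sketch (not the official evaluation code; `retrieve` stands for any function mapping a context list and code prefix to a snippet index, and the toy examples below are hypothetical):

```python
def retrieval_accuracy(examples, retrieve):
    """Fraction of examples where the retriever returns the gold snippet index."""
    hits = sum(
        retrieve(ex["context"], ex["code"]) == ex["gold_snippet_index"]
        for ex in examples
    )
    return hits / len(examples)

# Toy examples; a trivial baseline always picks snippet 0.
examples = [
    {"context": ["a", "b", "c"], "code": "...", "gold_snippet_index": 0},
    {"context": ["x", "y"], "code": "...", "gold_snippet_index": 1},
]
acc = retrieval_accuracy(examples, lambda ctx, code: 0)
print(acc)  # 0.5
```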
## Licensing Information
CC BY-NC-ND 4.0
## Citation Information
```bibtex
@misc{liu2023repobench,
title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
author={Tianyang Liu and Canwen Xu and Julian McAuley},
year={2023},
eprint={2306.03091},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contributions
Thanks to [@Leolty](https://github.com/Leolty) for adding this dataset.
monadical-labs/minecraft-preview | 2023-06-15T17:08:49.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"minecraft",
"region:us"
] | monadical-labs | null | null | 0 | 15 | 2023-06-06T20:00:41 | ---
license: openrail
language:
- en
tags:
- minecraft
pretty_name: Minecraft Preview Data Set
size_categories:
- 1K<n<10K
---
## Overview
The Minecraft Character Dataset was used to fine-tune the Stable Diffusion [Minecraft Character Preview](https://huggingface.co/monadical-labs/minecraft-preview) model.
It currently consists of 1022 images of forward-facing and rear-facing 3D renders of various Minecraft character skins.
## Contact Information
You can contact me at: Cory Spencer \<cory@monadical.com\>
[](https://monadical.com/) | 564 | [
[
-0.046844482421875,
-0.0572509765625,
0.03277587890625,
0.0178985595703125,
0.003879547119140625,
0.017852783203125,
0.01323699951171875,
-0.0200042724609375,
0.023193359375,
0.0628662109375,
-0.0811767578125,
-0.053314208984375,
-0.013214111328125,
0.005592... |
Kamaljp/medium_articles | 2023-06-11T09:48:58.000Z | [
"region:us"
] | Kamaljp | null | null | 0 | 15 | 2023-06-11T09:06:37 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: authors
dtype: string
- name: timestamp
dtype: string
- name: tags
dtype: string
splits:
- name: train
num_bytes: 1044746687
num_examples: 192368
download_size: 601519297
dataset_size: 1044746687
---
# Dataset Card for "medium_articles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
d0rj/rudetoxifier_data | 2023-06-21T08:14:11.000Z | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ru",
"license:mit",
"toxicity",
"style-transfer",
"arxiv:2105.09052",
"region:us"
] | d0rj | null | null | 0 | 15 | 2023-06-19T18:14:39 | ---
dataset_info:
features:
- name: text
dtype: string
- name: toxic
dtype: float64
splits:
- name: train
num_bytes: 27459998
num_examples: 163187
- name: test
num_bytes: 1762288
num_examples: 10000
download_size: 16406619
dataset_size: 29222286
license: mit
task_categories:
- text-classification
- text2text-generation
language:
- ru
multilinguality:
- monolingual
tags:
- toxicity
- style-transfer
pretty_name: RuDetoxifier data
size_categories:
- 100K<n<1M
source_datasets:
- original
paperswithcode_id: methods-for-detoxification-of-texts-for-the
---
# rudetoxifier_data
## Dataset Description
- **Homepage:** https://github.com/s-nlp/rudetoxifier
- **Repository:** https://github.com/s-nlp/rudetoxifier
- **Paper:** [Methods for Detoxification of Texts for the Russian Language](https://arxiv.org/abs/2105.09052)
- **Point of Contact:** [Daryna Dementieva](mailto:daryna.dementieva@skoltech.ru)
A Hugging Face copy of the dataset from the GitHub repository.
causal-lm/auto_cot | 2023-07-13T14:22:28.000Z | [
"language:en",
"region:us"
] | causal-lm | null | null | 1 | 15 | 2023-06-24T14:21:53 | ---
language: en
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2595134.777381559
num_examples: 5223
- name: validation
num_bytes: 312612.39847328246
num_examples: 593
download_size: 1444820
dataset_size: 2907747.1758548412
---
# Dataset Card for "auto_cot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jjzha/sayfullina | 2023-09-07T12:13:23.000Z | [
"language:en",
"license:unknown",
"region:us"
] | jjzha | null | null | 0 | 15 | 2023-07-04T13:42:41 | ---
license: unknown
language: en
---
This is the soft-skill dataset created by:
```
@inproceedings{sayfullina2018learning,
title={Learning representations for soft skill matching},
author={Sayfullina, Luiza and Malmi, Eric and Kannala, Juho},
booktitle={Analysis of Images, Social Networks and Texts: 7th International Conference, AIST 2018, Moscow, Russia, July 5--7, 2018, Revised Selected Papers 7},
pages={141--152},
year={2018},
organization={Springer}
}
```
There are no document delimiters. Data is split by user `jjzha`.
Number of samples (sentences):
- train: 3705
- dev: 1855
- test: 1851
Sources:
- Adzuna (UK)
Type of tags:
- B-SOFT
- I-SOFT
- O
Sample:
```
{
"idx": 1853,
"tokens": ["and", "sensitive", "when", "deal", "with", "customer", "be", "enthusiastic", "always", "eager", "to", "learn", "and", "develop", "knowledge", "and", "skill"],
"tags_skill": ["O", "O", "O", "O", "O", "O", "O", "B-SOFT", "I-SOFT", "I-SOFT", "I-SOFT", "I-SOFT", "O", "O", "O", "O", "O"]
}
```
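A small helper (a sketch, not part of the original release) can turn BIO tags like those in the sample into contiguous soft-skill spans:

```python
def extract_spans(tokens, tags):
    """Collect contiguous B-SOFT/I-SOFT runs as space-joined span strings."""
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B-SOFT":
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I-SOFT" and current:
            current.append(tok)
        else:
            if current:
                spans.append(" ".join(current))
                current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["be", "enthusiastic", "always", "eager", "to", "learn"]
tags = ["O", "B-SOFT", "I-SOFT", "I-SOFT", "I-SOFT", "I-SOFT"]
print(extract_spans(tokens, tags))  # ['enthusiastic always eager to learn']
```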
carbon225/vndb_img | 2023-07-04T14:46:14.000Z | [
"task_categories:image-classification",
"size_categories:100K<n<1M",
"license:odbl",
"art",
"not-for-all-audiences",
"anime",
"visual-novel",
"nsfw",
"vndb",
"region:us"
] | carbon225 | null | null | 0 | 15 | 2023-07-04T14:12:10 | ---
license: odbl
task_categories:
- image-classification
tags:
- art
- not-for-all-audiences
- anime
- visual-novel
- nsfw
- vndb
size_categories:
- 100K<n<1M
---
# Dataset Card for VNDB IMG
## Dataset Description
This is a 🤗 Datasets loader for the [vndb.org](https://vndb.org) image database dump.
It contains anime-style images flagged by users according to these categories:
* sexual content: safe/suggestive/explicit
* violence: tame/violent/brutal
## Loading Instructions
For licensing and "moral" reasons, the database dump has to be downloaded manually.
Please download the vndb.org database dump manually from <https://vndb.org/d14>.
Download the "Near-complete database" `vndb-db-latest.tar.zst` file.
Use `rsync` to download the 'Images' collection.
Create the following directory structure:
```
my/dataset/path
├── db
│ └── vndb-db-latest.tar.zst
└── vndb-img # this is the directory you downloaded with rsync
├── ch
├── cv
├── sf
├── st
└── ...
```
Inside `my/dataset/path/db` run
```
zstd -d vndb-db-latest.tar.zst
```
and
```
tar -xf vndb-db-latest.tar
```
The final directory structure should look like this:
```
my/dataset/path
├── db
│ ├── vndb-db-latest.tar
│ ├── vndb-db-latest.tar.zst
│ ├── db
│ └── ...
└── vndb-img
├── ch
├── cv
├── sf
├── st
└── ...
```
Finally, load the dataset
```python
datasets.load_dataset('carbon225/vndb_img', data_dir='my/dataset/path')
```
## Dataset Structure
The following fields are provided:
```python
{
'index': datasets.Value('int32'),
'id': datasets.Value('string'),
'width': datasets.Value('int32'),
'height': datasets.Value('int32'),
'c_votecount': datasets.Value('int32'),
'c_sexual_avg': datasets.Value('int32'),
'c_sexual_stddev': datasets.Value('int32'),
'c_violence_avg': datasets.Value('int32'),
'c_violence_stddev': datasets.Value('int32'),
'c_weight': datasets.Value('int32'),
'type': datasets.ClassLabel(names=['character', 'cover', 'screenshot_full', 'screenshot_thumb']),
'sexual_class': datasets.ClassLabel(names=['safe', 'suggestive', 'explicit']),
'violence_class': datasets.ClassLabel(names=['tame', 'violent', 'brutal']),
'file_name': datasets.Value('string'),
'full_path': datasets.Value('string'),
'image': datasets.Image(),
}
```
## Supported Tasks
With a few modifications the data can be used for:
* image classification of NSFW material
* image generation/super-resolution/...
* ...
## Considerations for Using the Data
The images are ***hardcore***, to say the least. I recommend not looking.
## Licensing Information
Using this dataset requires the user to download data manually from vndb.org.
All information on VNDB is made available under the Open Database License.
Any rights in individual contents of the database are licensed under the Database Contents License.
With the following exceptions:
* Anime data is obtained from the AniDB.net UDP API and is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0.
* Images, visual novel descriptions and character descriptions are gathered from various online sources and may be subject to separate license conditions.
rcds/MultiLegalNeg | 2023-10-25T17:59:53.000Z | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"license:cc-by-nd-4.0",
"legal",
"arxiv:2306.02069",
"arxiv:2309.08695",
"region:us"
] | rcds | null | null | 0 | 15 | 2023-07-10T16:16:08 | ---
license: cc-by-nd-4.0
viewer: true
task_categories:
- token-classification
tags:
- legal
pretty_name: Multilingual Negation Scope Resolution
size_categories:
- 1K<n<10K
---
# Dataset Card for MultiLegalNeg
### Dataset Summary
This dataset consists of German, French, and Italian court documents annotated for negation cues and negation scopes. It also includes reformatted versions of ConanDoyle-neg ([Morante and Blanco, 2012](https://aclanthology.org/S12-1035/)), SFU Review ([Konstantinova et al. 2012](http://www.lrec-conf.org/proceedings/lrec2012/pdf/533_Paper.pdf)), BioScope ([Szarvas et al. 2008](https://aclanthology.org/W08-0606/)), and Dalloux ([Dalloux et al. 2020](https://clementdalloux.fr/?page_id=28)).
### Languages
| Language | Subset | Number of sentences | Negated sentences |
|----------------------|-----------------|----------------------|-------------------|
| French | **fr** | 1059 | 382 |
| Italian | **it** | 1001 | 418 |
| German(Germany) | **de(DE)** | 1068 | 1098 |
| German (Switzerland) | **de(CH)** | 206 | 208 |
| English | **SFU Review** | 17672 | 3528 |
| English | **BioScope** | 14700 | 2095 |
| English | **ConanDoyle-neg**| 5714 | 5714 |
| French | **Dalloux** | 11032 | 1817 |
## Dataset Structure
### Data Fields
- text (string): full sentence
- spans (list): list of annotated cues and scopes
- start (int): offset of the beginning of the annotation
- end (int): offset of the end of the annotation
- token_start(int): id of the first token in the annotation
- token_end(int): id of the last token in the annotation
- label (string): CUE or SCOPE
- tokens (list): list of tokens in the sentence
- text (string): token text
- start (int): offset of the first character
- end (int): offset of the last character
- id (int): token id
- ws (boolean): indicates if the token is followed by a white space
### Data Splits
For each subset a train (70%), test (20%), and validation (10%) split is available.
#### How to use this dataset
To load all data use `'all_all'`, or specify which subset to load as the second argument. The available configurations are
`'de', 'fr', 'it', 'swiss', 'fr_dalloux', 'fr_all', 'en_bioscope', 'en_sherlock', 'en_sfu', 'en_all', 'all_all'`.
```
from datasets import load_dataset
dataset = load_dataset("rcds/MultiLegalNeg", "all_all")
dataset
```
```
DatasetDict({
train: Dataset({
features: ['text', 'spans', 'tokens'],
num_rows: 26440
})
test: Dataset({
features: ['text', 'spans', 'tokens'],
num_rows: 7593
})
validation: Dataset({
features: ['text', 'spans', 'tokens'],
num_rows: 4053
})
})
```
### Source Data
| Subset | Source |
|-------------------|----------------------|
| **fr** | [Niklaus et al. 2021](https://aclanthology.org/2021.nllp-1.3/), [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069) |
| **it** | [Niklaus et al. 2021](https://aclanthology.org/2021.nllp-1.3/), [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069) |
| **de(DE)** | [Glaser et al. 2021](https://www.scitepress.org/Link.aspx?doi=10.5220/0010246308120821) |
| **de(CH)** | [Niklaus et al. 2021](https://aclanthology.org/2021.nllp-1.3/) |
| **SFU Review** | [Konstantinova et al. 2012](http://www.lrec-conf.org/proceedings/lrec2012/pdf/533_Paper.pdf) |
| **BioScope** | [Szarvas et al. 2008](https://aclanthology.org/W08-0606/) |
| **ConanDoyle-neg**| [Morante and Blanco. 2012](https://aclanthology.org/S12-1035/) |
| **Dalloux** | [Dalloux et al. 2020](https://clementdalloux.fr/?page_id=28) |
### Annotations
The data is annotated for negation cues and their scopes. Annotation guidelines are available [here](https://github.com/RamonaChristen/Multilingual_Negation_Scope_Resolution_on_Legal_Data/blob/main/Annotation_Guidelines.pdf)
#### Annotation process
Each language was annotated by one native-speaking annotator, following strict annotation guidelines.
### Citation Information
Please cite the following preprint:
```
@misc{christen2023resolving,
title={Resolving Legalese: A Multilingual Exploration of Negation Scope Resolution in Legal Documents},
author={Ramona Christen and Anastassia Shaitarova and Matthias Stürmer and Joel Niklaus},
year={2023},
eprint={2309.08695},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
TrainingDataPro/body-measurements-dataset | 2023-09-14T16:57:44.000Z | [
"task_categories:image-classification",
"task_categories:image-to-image",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] | TrainingDataPro | The dataset consists of a compilation of people's photos along with their
corresponding body measurements. It is designed to provide information and
insights into the physical appearances and body characteristics of individuals.
The dataset includes a diverse range of subjects representing different age
groups, genders, and ethnicities.
The photos are captured in a standardized manner, depicting individuals in a
front and side positions.
The images aim to capture the subjects' physical appearance using appropriate
lighting and angles that showcase their body proportions accurately.
The dataset serves various purposes, including:
- research projects
- body measurement analysis
- fashion or apparel industry applications
- fitness and wellness studies
- anthropometric studies for ergonomic design in various fields | @InProceedings{huggingface:dataset,
title = {body-measurements-dataset},
author = {TrainingDataPro},
year = {2023}
} | 2 | 15 | 2023-07-10T20:22:26 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
- image-to-image
tags:
- code
dataset_info:
features:
- name: front_img
dtype: image
- name: selfie_img
dtype: image
- name: side_img
dtype: image
- name: arm_circumference_cm
dtype: string
- name: arm_length_cm
dtype: string
- name: back_build_cm
dtype: string
- name: calf_circumference_cm
dtype: string
- name: chest_circumference_cm
dtype: string
- name: crotch_height_cm
dtype: string
- name: front_build_cm
dtype: string
- name: hips_circumference_cm
dtype: string
- name: leg_length_cm
dtype: string
- name: neck_circumference_cm
dtype: string
- name: neck_pelvis_length_front_cm
dtype: string
- name: neck_waist_length_back_cm
dtype: string
- name: neck_waist_length_front_cm
dtype: string
- name: pelvis_circumference_cm
dtype: string
- name: shoulder_length_cm
dtype: string
- name: shoulder_width_cm
dtype: string
- name: thigh_circumference_cm
dtype: string
- name: under_chest_circumference_cm
dtype: string
- name: upper_arm_length_cm
dtype: string
- name: waist_circumference_cm
dtype: string
- name: height
dtype: string
- name: weight
dtype: string
- name: age
dtype: string
- name: gender
dtype: string
- name: race
dtype: string
- name: profession
dtype: string
- name: arm_circumference
dtype: image
- name: arm_length
dtype: image
- name: back_build
dtype: image
- name: calf_circumference
dtype: image
- name: chest_circumference
dtype: image
- name: crotch_height
dtype: image
- name: front_build
dtype: image
- name: hips_circumference
dtype: image
- name: leg_length
dtype: image
- name: neck_circumference
dtype: image
- name: neck_pelvis_length_front
dtype: image
- name: neck_waist_length_back
dtype: image
- name: neck_waist_length_front
dtype: image
- name: pelvis_circumference
dtype: image
- name: shoulder_length
dtype: image
- name: shoulder_width
dtype: image
- name: thigh_circumference
dtype: image
- name: under_chest_circumference
dtype: image
- name: upper_arm_length
dtype: image
- name: waist_circumference
dtype: image
splits:
- name: train
num_bytes: 86120
num_examples: 21
download_size: 68560913
dataset_size: 86120
---
# Body Measurements Dataset
The dataset consists of a compilation of people's photos along with their corresponding body measurements. It is designed to provide information and insights into the physical appearances and body characteristics of individuals.
The dataset includes a diverse range of subjects representing different **age groups, genders, and ethnicities**.
The photos are captured in a standardized manner, depicting individuals in **front** and **side** positions.
The images aim to capture the subjects' physical appearance using appropriate *lighting and angles* that showcase their body proportions accurately.
The dataset serves various purposes, including:
- research projects
- body measurement analysis
- fashion or apparel industry applications
- fitness and wellness studies
- anthropometric studies for ergonomic design in various fields
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=body-measurements-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
### Folders
- **files**: includes folders with photos and measurements of people
- **proofs**: contains subfolders, corresponding to the original photos in `files` folder and includes additional photos of people taking measurements
- **.pdf** file: includes information about photos in `proofs` folder
### "Files" folder includes 3 images of a person and json file with measurements:
- **selfie** - the person is looking at the camera; the face, neck and shoulders are clearly seen,
- **front photo** - the person stands in front of the camera, all body parts are clearly seen,
- **side photo** - the person turned sideways to the camera, all body parts are clearly seen
- **json file** - includes 22 measurements (*weight, height, hips circumference, leg length etc.*) and 4 additional characteristics (**age, gender, race, profession**) of a person, depicted in photos in the subfolder
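A minimal sketch of reading one such per-person JSON file. The record below is synthetic (a real file contains all 22 measurements plus the 4 characteristics), and the field names follow the schema listed in this card's metadata; note that all values are stored as strings:

```python
import json

# Synthetic stand-in for one measurements JSON file; real files hold
# 22 measurements plus age, gender, race, and profession.
record = json.loads("""
{
  "height": "172",
  "weight": "65",
  "waist_circumference_cm": "74",
  "hips_circumference_cm": "96",
  "leg_length_cm": "88",
  "age": "29",
  "gender": "female",
  "race": "caucasian",
  "profession": "designer"
}
""")

# Values are strings, so convert measurements to numbers before use.
waist_cm = float(record["waist_circumference_cm"])
print(waist_cm)  # 74.0
```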
### File with the extension .csv
includes the following information for each media file:
- **selfie**: link to the selfie,
- **front**: link to the front photo,
- **side**: link to the side photo,
- **measurements**: link to the json file with measurements
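An illustrative sketch of iterating over that index file with the standard library; the rows below are hypothetical stand-ins for the real links:

```python
import csv
import io

# Synthetic example of the .csv index described above.
sample = """selfie,front,side,measurements
files/p01/selfie.jpg,files/p01/front.jpg,files/p01/side.jpg,files/p01/measurements.json
files/p02/selfie.jpg,files/p02/front.jpg,files/p02/side.jpg,files/p02/measurements.json
"""

rows = list(csv.DictReader(io.StringIO(sample)))
print(len(rows))                 # 2
print(rows[0]["measurements"])   # files/p01/measurements.json
```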
# Body measurements can be collected in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=body-measurements-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 5,477 | [
[
-0.03265380859375,
-0.0158538818359375,
0.00948333740234375,
0.00534820556640625,
-0.0288848876953125,
-0.005580902099609375,
0.0084228515625,
-0.032684326171875,
0.04168701171875,
0.038909912109375,
-0.061065673828125,
-0.06005859375,
-0.0273284912109375,
0... |
TinyPixel/lima_1 | 2023-07-11T17:11:00.000Z | [
"region:us"
] | TinyPixel | null | null | 0 | 15 | 2023-07-11T17:04:29 | ---
dataset_info:
features:
- name: human
dtype: string
- name: gpt
dtype: string
splits:
- name: train
num_bytes: 2887450
num_examples: 1030
download_size: 1701721
dataset_size: 2887450
---
# Dataset Card for "lima_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 381 | [
[
-0.0382080078125,
-0.025726318359375,
0.018829345703125,
0.041961669921875,
-0.039947509765625,
-0.0205230712890625,
0.038116455078125,
-0.006084442138671875,
0.07421875,
0.04339599609375,
-0.06536865234375,
-0.06414794921875,
-0.0626220703125,
-0.0152740478... |
Delius/ChineseWebNovel | 2023-07-14T07:30:07.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:apache-2.0",
"region:us"
] | Delius | null | null | 6 | 15 | 2023-07-12T10:47:19 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
size_categories:
- 1K<n<10K
---
Chinese Web Novel Dataset
Summarized by Claude, with the order converted for a novel text-extension task.
WARNING!! Please be aware of the context length!!! | 261 | [
[
-0.01216888427734375,
-0.046234130859375,
0.00814056396484375,
0.0452880859375,
-0.0479736328125,
-0.06427001953125,
-0.0118255615234375,
-0.03717041015625,
0.0101470947265625,
0.056976318359375,
-0.03448486328125,
-0.015838623046875,
-0.039398193359375,
-0.... |
shlomihod/civil-comments-wilds | 2023-07-28T17:27:14.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc0-1.0",
"toxicity",
"arxiv:2012.07421",
"arxiv:1903.04561",
"arxiv:1808.07231",
"arxiv:1911.08731",
"arxiv:2211.09110",
"region:us"
] | shlomihod | In this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others. | @inproceedings{wilds2021,
title = {{WILDS}: A Benchmark of in-the-Wild Distribution Shifts},
author = {Pang Wei Koh and Shiori Sagawa and Henrik Marklund and Sang Michael Xie and Marvin Zhang and
Akshay Balsubramani and Weihua Hu and Michihiro Yasunaga and Richard Lanas Phillips and Irena Gao and
Tony Lee and Etienne David and Ian Stavness and Wei Guo and Berton A. Earnshaw and Imran S. Haque and
Sara Beery and Jure Leskovec and Anshul Kundaje and Emma Pierson and Sergey Levine and Chelsea Finn
and Percy Liang},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2021}
}
@inproceedings{borkan2019nuanced,
title={Nuanced metrics for measuring unintended bias with real data for text classification},
author={Borkan, Daniel and Dixon, Lucas and Sorensen, Jeffrey and Thain, Nithum and Vasserman, Lucy},
booktitle={Companion Proceedings of The 2019 World Wide Web Conference},
pages={491--500},
year={2019}
}
@article{DBLP:journals/corr/abs-2211-09110,
author = {Percy Liang and
Rishi Bommasani and
Tony Lee and
Dimitris Tsipras and
Dilara Soylu and
Michihiro Yasunaga and
Yian Zhang and
Deepak Narayanan and
Yuhuai Wu and
Ananya Kumar and
Benjamin Newman and
Binhang Yuan and
Bobby Yan and
Ce Zhang and
Christian Cosgrove and
Christopher D. Manning and
Christopher R{\'{e}} and
Diana Acosta{-}Navas and
Drew A. Hudson and
Eric Zelikman and
Esin Durmus and
Faisal Ladhak and
Frieda Rong and
Hongyu Ren and
Huaxiu Yao and
Jue Wang and
Keshav Santhanam and
Laurel J. Orr and
Lucia Zheng and
Mert Y{\"{u}}ksekg{\"{o}}n{\"{u}}l and
Mirac Suzgun and
Nathan Kim and
Neel Guha and
Niladri S. Chatterji and
Omar Khattab and
Peter Henderson and
Qian Huang and
Ryan Chi and
Sang Michael Xie and
Shibani Santurkar and
Surya Ganguli and
Tatsunori Hashimoto and
Thomas Icard and
Tianyi Zhang and
Vishrav Chaudhary and
William Wang and
Xuechen Li and
Yifan Mai and
Yuhui Zhang and
Yuta Koreeda},
title = {Holistic Evaluation of Language Models},
journal = {CoRR},
volume = {abs/2211.09110},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2211.09110},
doi = {10.48550/arXiv.2211.09110},
eprinttype = {arXiv},
eprint = {2211.09110},
timestamp = {Wed, 23 Nov 2022 18:03:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2211-09110.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 0 | 15 | 2023-07-23T03:43:31 | ---
license: cc0-1.0
task_categories:
- text-classification
language:
- en
tags:
- toxicity
pretty_name: CivilComments WILDS
size_categories:
- 100K<n<1M
---
# Dataset Card for CivilComments WILDS
## Dataset Description
- **Homepage:** https://wilds.stanford.edu/datasets/#civilcomments
- **Repository:**
- **Paper:** https://arxiv.org/abs/2012.07421 | https://arxiv.org/abs/1903.04561
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary

Automatic review of user-generated text—e.g., detecting toxic comments—is an important tool for moderating the sheer volume of text written on the Internet. Unfortunately, prior work has shown that such toxicity classifiers pick up on biases in the training data and spuriously associate toxicity with the mention of certain demographics ([Park et al., 2018](https://arxiv.org/abs/1808.07231); [Dixon et al., 2018](https://research.google/pubs/pub46743/)). These types of spurious correlations can significantly degrade model performance on particular subpopulations ([Sagawa et al.,2020](https://arxiv.org/abs/1911.08731)).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset is in the public domain and is distributed under [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
### Citation Information
@inproceedings{wilds2021,
title = {{WILDS}: A Benchmark of in-the-Wild Distribution Shifts},
author = {Pang Wei Koh and Shiori Sagawa and Henrik Marklund and Sang Michael Xie and Marvin Zhang and
Akshay Balsubramani and Weihua Hu and Michihiro Yasunaga and Richard Lanas Phillips and Irena Gao and
Tony Lee and Etienne David and Ian Stavness and Wei Guo and Berton A. Earnshaw and Imran S. Haque and
Sara Beery and Jure Leskovec and Anshul Kundaje and Emma Pierson and Sergey Levine and Chelsea Finn
and Percy Liang},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2021}
}
@inproceedings{borkan2019nuanced,
title={Nuanced metrics for measuring unintended bias with real data for text classification},
author={Borkan, Daniel and Dixon, Lucas and Sorensen, Jeffrey and Thain, Nithum and Vasserman, Lucy},
booktitle={Companion Proceedings of The 2019 World Wide Web Conference},
pages={491--500},
year={2019}
}
@article{DBLP:journals/corr/abs-2211-09110,
author = {Percy Liang and
Rishi Bommasani and
Tony Lee and
Dimitris Tsipras and
Dilara Soylu and
Michihiro Yasunaga and
Yian Zhang and
Deepak Narayanan and
Yuhuai Wu and
Ananya Kumar and
Benjamin Newman and
Binhang Yuan and
Bobby Yan and
Ce Zhang and
Christian Cosgrove and
Christopher D. Manning and
Christopher R{\'{e}} and
Diana Acosta{-}Navas and
Drew A. Hudson and
Eric Zelikman and
Esin Durmus and
Faisal Ladhak and
Frieda Rong and
Hongyu Ren and
Huaxiu Yao and
Jue Wang and
Keshav Santhanam and
Laurel J. Orr and
Lucia Zheng and
Mert Y{\"{u}}ksekg{\"{o}}n{\"{u}}l and
Mirac Suzgun and
Nathan Kim and
Neel Guha and
Niladri S. Chatterji and
Omar Khattab and
Peter Henderson and
Qian Huang and
Ryan Chi and
Sang Michael Xie and
Shibani Santurkar and
Surya Ganguli and
Tatsunori Hashimoto and
Thomas Icard and
Tianyi Zhang and
Vishrav Chaudhary and
William Wang and
Xuechen Li and
Yifan Mai and
Yuhui Zhang and
Yuta Koreeda},
title = {Holistic Evaluation of Language Models},
journal = {CoRR},
volume = {abs/2211.09110},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2211.09110},
doi = {10.48550/arXiv.2211.09110},
eprinttype = {arXiv},
eprint = {2211.09110},
timestamp = {Wed, 23 Nov 2022 18:03:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2211-09110.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
### Contributions
[More Information Needed] | 5,710 | [
[
-0.03466796875,
-0.03521728515625,
0.01251220703125,
0.0228118896484375,
-0.016693115234375,
-0.0108184814453125,
-0.0250244140625,
-0.041778564453125,
0.00919342041015625,
0.026214599609375,
-0.034210205078125,
-0.0567626953125,
-0.04638671875,
0.0158843994... |
Ali-C137/Hindawi-Books-dataset | 2023-08-03T20:05:42.000Z | [
"task_categories:text-generation",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:ar",
"license:cc-by-nc-4.0",
"region:us"
] | Ali-C137 | null | null | 4 | 15 | 2023-07-29T16:12:01 | ---
dataset_info:
features:
- name: BookLink
dtype: string
- name: BookName
dtype: string
- name: AuthorName
dtype: string
- name: AboutBook
dtype: string
- name: ChapterLink
dtype: string
- name: ChapterName
dtype: string
- name: ChapterText
dtype: string
- name: AboutAuthor
dtype: string
splits:
- name: train
num_bytes: 1364861563
num_examples: 49821
download_size: 494678002
dataset_size: 1364861563
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-nc-4.0
task_categories:
- text-generation
- summarization
language:
- ar
pretty_name: Hindawi
size_categories:
- 10K<n<100K
---
# Dataset Card for "Hindawi Books Dataset"
**Hindawi Books Dataset is a large collection of more than 3000 books written in Modern Standard Arabic.**
## Dataset Description
Hindawi Books Dataset offers a rich and diverse collection of literary works, covering various topics and genres, all written in Modern Standard Arabic. The dataset includes information about each book, such as the title, author name, book abstract, and a link to access the complete text online. Additionally, the dataset contains chapter details, including the chapter name and text, providing insights into the content of each book.
## Dataset Details
- **Homepage:** [https://huggingface.co/datasets/Ali-C137/Hindawi-Books-dataset](https://huggingface.co/datasets/Ali-C137/Hindawi-Books-dataset)
- **Author:** Elfilali Ali
- **Email:** ali.elfilali00@gmail.com, alielfilali0909@gmail.com
- **GitHub Profile:** [https://github.com/alielfilali01](https://github.com/alielfilali01)
- **LinkedIn Profile:** [https://www.linkedin.com/in/alielfilali01/](https://www.linkedin.com/in/alielfilali01/)
## Dataset Size
The Hindawi Books Dataset contains over 3000 books, making it a substantial resource for research and NLP model development. The dataset size on disk is approximately 476 MB, and it comprises more than 120 million tokens.
## Potential Use Cases
Researchers and NLP enthusiasts can utilize the Hindawi Books Dataset for various applications, including:
- **Language Model Training:** The dataset is ideal for training Large Language Models (LLMs) specifically tailored to Arabic text.
- **Text Generation:** Developers can leverage the dataset to generate new stories, poems, or other literary works in Modern Standard Arabic.
- **Text Summarization:** Researchers can explore long text summarization tasks by using the book text and abstract as targets.
## Dataset Access
The Hindawi Books Dataset is publicly available for academic and non-commercial research purposes only. [Hindawi Foundation](https://www.hindawi.org/) has granted permission to scrape and publish the data as a dataset on HuggingFace Hub for non-commercial and research only use.
We kindly request users to respect copyright and intellectual property rights and acknowledge Hindawi Foundation's contribution in any research or academic publications utilizing this dataset. Users are also reminded not to distribute the dataset to any third parties.
## Citation
Please use the following citation when referencing the Hindawi Books Dataset:
```
@dataset{elfilali2023hindawi,
title = {Hindawi Books Dataset},
author = {Elfilali Ali},
howpublished = {Dataset},
url = {https://huggingface.co/datasets/Ali-C137/Hindawi-Books-dataset},
year = {2023},
}
```
## Feedback and Discussion
We encourage researchers and users of the Hindawi Books Dataset to provide feedback, report any potential mistakes, or discuss concerns and risks related to the dataset in the discussion window on the dataset card in the Hugging Face Hub. Your valuable feedback will help us improve and enhance the dataset for the NLP community.
| 3,769 | [
[
-0.031494140625,
-0.00807952880859375,
-0.0106353759765625,
0.0238037109375,
-0.018310546875,
-0.00923919677734375,
-0.006404876708984375,
-0.050811767578125,
0.015533447265625,
0.034088134765625,
-0.0650634765625,
-0.0626220703125,
-0.0360107421875,
0.02879... |
kaxap/pg-gpt4SQL-sql-instructions-1k | 2023-07-30T01:33:39.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | kaxap | null | null | 2 | 15 | 2023-07-30T01:25:53 | ---
license: cc-by-nc-4.0
---
The dataset is constructed by taking the first 1,000 rows of the train split of the [pg-wikiSQL](https://huggingface.co/datasets/kaxap/pg-wikiSQL) dataset and asking GPT-4 to transform each query and question to be more complex using various aggregate functions.
Resulting SQL statements were adapted for Postgres syntax and conventions.
Each SQL statement, including `CREATE TABLE` statements were syntax checked with [pgsanity](https://github.com/markdrago/pgsanity).
The `total_tokens` column indicates the OpenAI API usage for the datapoint generation. | 584 | [
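As a minimal sketch, the `total_tokens` column can be summed to estimate the total API usage for a split; the rows below are synthetic examples, not actual datapoints:

```python
# Synthetic stand-ins for dataset rows; only total_tokens is used here.
rows = [
    {"question": "How many employees earn above the average salary?",
     "sql": ("SELECT COUNT(*) FROM employees "
             "WHERE salary > (SELECT AVG(salary) FROM employees);"),
     "total_tokens": 412},
    {"question": "What is the highest score per team?",
     "sql": "SELECT team, MAX(score) FROM results GROUP BY team;",
     "total_tokens": 388},
]

# Total OpenAI API usage across the rows.
total_usage = sum(r["total_tokens"] for r in rows)
print(total_usage)  # 800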
[
-0.04766845703125,
-0.04742431640625,
0.0282745361328125,
-0.006595611572265625,
-0.0232086181640625,
-0.029815673828125,
0.01389312744140625,
-0.0084075927734375,
0.025848388671875,
0.044464111328125,
-0.042572021484375,
-0.02886962890625,
-0.0283050537109375,
... |
qmeeus/slurp | 2023-08-01T11:27:35.000Z | [
"region:us"
] | qmeeus | null | null | 1 | 15 | 2023-08-01T11:14:25 | ---
dataset_info:
features:
- name: slurp_id
dtype: int64
- name: sentence
dtype: string
- name: annotation
dtype: string
- name: intent
dtype:
class_label:
names:
'0': addcontact
'1': alarm_query
'2': alarm_remove
'3': alarm_set
'4': audio_volume_down
'5': audio_volume_mute
'6': audio_volume_other
'7': audio_volume_up
'8': calendar_query
'9': calendar_remove
'10': calendar_set
'11': cleaning
'12': coffee
'13': convert
'14': cooking_query
'15': cooking_recipe
'16': createoradd
'17': currency
'18': datetime_convert
'19': datetime_query
'20': definition
'21': email_addcontact
'22': email_query
'23': email_querycontact
'24': email_sendemail
'25': events
'26': factoid
'27': game
'28': general_affirm
'29': general_commandstop
'30': general_confirm
'31': general_dontcare
'32': general_explain
'33': general_greet
'34': general_joke
'35': general_negate
'36': general_praise
'37': general_quirky
'38': general_repeat
'39': greet
'40': hue_lightdim
'41': hue_lightoff
'42': hue_lightup
'43': iot_cleaning
'44': iot_coffee
'45': iot_hue_lightchange
'46': iot_hue_lightdim
'47': iot_hue_lightoff
'48': iot_hue_lighton
'49': iot_hue_lightup
'50': iot_wemo_off
'51': iot_wemo_on
'52': joke
'53': likeness
'54': lists_createoradd
'55': lists_query
'56': lists_remove
'57': locations
'58': music
'59': music_dislikeness
'60': music_likeness
'61': music_query
'62': music_settings
'63': news_query
'64': play_audiobook
'65': play_game
'66': play_music
'67': play_podcasts
'68': play_radio
'69': podcasts
'70': post
'71': qa_currency
'72': qa_definition
'73': qa_factoid
'74': qa_maths
'75': qa_stock
'76': query
'77': querycontact
'78': quirky
'79': radio
'80': recommendation_events
'81': recommendation_locations
'82': recommendation_movies
'83': remove
'84': sendemail
'85': set
'86': settings
'87': social_post
'88': social_query
'89': takeaway_order
'90': takeaway_query
'91': ticket
'92': traffic
'93': transport_query
'94': transport_taxi
'95': transport_ticket
'96': transport_traffic
'97': volume_other
'98': weather_query
'99': wemo_off
'100': wemo_on
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 2920956911.136
num_examples: 50628
- name: devel
num_bytes: 477355969.9
num_examples: 8690
- name: test
num_bytes: 709706969.726
num_examples: 13078
- name: train_synthetic
num_bytes: 2571103452.542
num_examples: 69253
download_size: 6753580307
dataset_size: 6679123303.304
---
# Dataset Card for "slurp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,657 | [
[
-0.0318603515625,
-0.018707275390625,
0.01274871826171875,
0.00569915771484375,
-0.019195556640625,
0.004268646240234375,
0.0192718505859375,
-0.0216522216796875,
0.07086181640625,
0.044708251953125,
-0.049652099609375,
-0.043853759765625,
-0.056640625,
-0.0... |
DynamicSuperb/NoiseSNRLevelPrediction_VCTK_MUSAN-Gaussian | 2023-11-02T09:16:16.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 15 | 2023-08-11T09:13:37 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 13813232997
num_examples: 26865
download_size: 3420927873
dataset_size: 13813232997
---
# Dataset Card for "NoiseSNRLevelPredictiongaussian_VCTKMusan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 620 | [
[
-0.0301361083984375,
-0.0178985595703125,
0.00579071044921875,
0.036529541015625,
-0.01983642578125,
-0.006320953369140625,
0.012542724609375,
-0.01003265380859375,
0.04327392578125,
0.024993896484375,
-0.0723876953125,
-0.064453125,
-0.0474853515625,
-0.028... |
bdpc/rvl_cdip_n_mp | 2023-08-11T09:58:24.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | bdpc | The RVL-CDIP-N (Ryerson Vision Lab Complex Document Information Processing) dataset consists of newly gathered documents in 16 classes
There are 991 documents for testing purposes. There were 10 documents from the original dataset that could not be retrieved based on the metadata or were out-of-scope (language). | @inproceedings{larson2022evaluating,
title={Evaluating Out-of-Distribution Performance on Document Image Classifiers},
author={Larson, Stefan and Lim, Gordon and Ai, Yutong and Kuang, David and Leach, Kevin},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022}
}
@inproceedings{bdpc,
title = {Beyond Document Page Classification},
author = {Anonymous},
booktitle = {Under Review},
year = {2023}
} | 0 | 15 | 2023-08-11T09:24:28 | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: id
dtype: string
- name: file
dtype: binary
- name: labels
dtype:
class_label:
names:
'0': letter
'1': form
'2': email
'3': handwritten
'4': advertisement
'5': scientific report
'6': scientific publication
'7': specification
'8': file folder
'9': news article
'10': budget
'11': invoice
'12': presentation
'13': questionnaire
'14': resume
'15': memo
splits:
- name: test
num_bytes: 1349159996
num_examples: 991
download_size: 0
dataset_size: 1349159996
---
# Dataset Card for RVL-CDIP-N_MultiPage
## Extension
The data loader provides support for loading RVL-CDIP-N in its extended multipage format.
Big kudos to the original authors (first in CITATION) for collecting the RVL-CDIP-N dataset.
We stand on the shoulders of giants :)
## Required installation
```bash
pip3 install pypdf2 pdf2image
sudo apt-get install poppler-utils
``` | 1,107 | [
[
-0.0599365234375,
-0.007373809814453125,
-0.004573822021484375,
0.041900634765625,
-0.015899658203125,
0.00319671630859375,
0.0113677978515625,
0.008148193359375,
0.01052093505859375,
0.05126953125,
-0.0283203125,
-0.01486968994140625,
-0.049224853515625,
0.... |
augtoma/usmle_step_2 | 2023-08-11T21:25:09.000Z | [
"region:us"
] | augtoma | null | null | 0 | 15 | 2023-08-11T21:24:57 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: F
dtype: string
- name: G
dtype: string
- name: answer
dtype: string
- name: answer_idx
dtype: string
splits:
- name: test
num_bytes: 133267
num_examples: 109
download_size: 80679
dataset_size: 133267
---
# Dataset Card for "usmle_self_eval_step2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 787 | [
[
-0.0175933837890625,
-0.0209808349609375,
0.021514892578125,
0.020111083984375,
-0.0120086669921875,
0.007080078125,
0.034759521484375,
0.0037078857421875,
0.0298004150390625,
0.03619384765625,
-0.051025390625,
-0.052764892578125,
-0.033599853515625,
-0.0075... |
FreedomIntelligence/sharegpt-arabic | 2023-08-13T15:46:24.000Z | [
"license:apache-2.0",
"region:us"
] | FreedomIntelligence | null | null | 1 | 15 | 2023-08-13T08:58:29 | ---
license: apache-2.0
---
Arabic ShareGPT data translated by gpt-3.5-turbo.
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | 204 | [
[
-0.04864501953125,
-0.02838134765625,
0.015899658203125,
0.0277557373046875,
-0.0260009765625,
0.00905609130859375,
-0.00864410400390625,
-0.0357666015625,
0.006023406982421875,
0.0078582763671875,
-0.05096435546875,
-0.0445556640625,
-0.054412841796875,
0.0... |
Pretam/hi-kn | 2023-08-17T17:36:26.000Z | [
"region:us"
] | Pretam | null | null | 0 | 15 | 2023-08-17T12:56:03 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
asas-ai/okapi_ar_arc | 2023-08-22T17:23:44.000Z | [
"region:us"
] | asas-ai | null | null | 0 | 15 | 2023-08-19T16:44:05 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dim/linux_man_pages_tldr_summarized | 2023-08-31T19:56:32.000Z | [
"region:us"
] | dim | null | null | 0 | 15 | 2023-08-31T19:51:37 | ---
dataset_info:
features:
- name: Command
dtype: string
- name: Text
dtype: string
- name: Summary
dtype: string
splits:
- name: train
num_bytes: 3006835
num_examples: 481
download_size: 1308915
dataset_size: 3006835
---
# Dataset Card for "linux_man_pages_tldr_summarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 444 | [
[
-0.04681396484375,
-0.0162811279296875,
0.0257568359375,
0.0027923583984375,
-0.022918701171875,
0.010406494140625,
0.007472991943359375,
0.00959014892578125,
0.0611572265625,
0.03143310546875,
-0.048736572265625,
-0.05682373046875,
-0.027099609375,
-0.01388... |
yangwang825/audioset | 2023-09-18T11:19:55.000Z | [
"task_categories:audio-classification",
"size_categories:100M<n<1B",
"audioset",
"region:us"
] | yangwang825 | null | null | 0 | 15 | 2023-09-02T12:56:33 | ---
configs:
- config_name: audioset500k
data_files:
- split: train
path: audioset500k.json
- config_name: balanced_train
data_files:
- split: train
path: balanced_train.json
- config_name: eval
data_files:
- split: test
path: eval.json
- config_name: unbalanced_train_part00
data_files: unbalanced_train_part00.json
# dataset_size: 46940
- config_name: unbalanced_train_part01
data_files: unbalanced_train_part01.json
# dataset_size: 47052
- config_name: unbalanced_train_part02
data_files: unbalanced_train_part02.json
# dataset_size: 46923
- config_name: unbalanced_train_part03
data_files: unbalanced_train_part03.json
# dataset_size: 46952
- config_name: unbalanced_train_part04
data_files: unbalanced_train_part04.json
# dataset_size: 46916
- config_name: unbalanced_train_part05
data_files: unbalanced_train_part05.json
# dataset_size: 47011
- config_name: unbalanced_train_part06
data_files: unbalanced_train_part06.json
# dataset_size: 46964
- config_name: unbalanced_train_part07
data_files: unbalanced_train_part07.json
# dataset_size: 46915
- config_name: unbalanced_train_part08
data_files: unbalanced_train_part08.json
# dataset_size: 46927
- config_name: unbalanced_train_part09
data_files: unbalanced_train_part09.json
# dataset_size: 46839
- config_name: unbalanced_train_part10
data_files: unbalanced_train_part10.json
# dataset_size: 46862
- config_name: unbalanced_train_part11
data_files: unbalanced_train_part11.json
# dataset_size: 46836
- config_name: unbalanced_train_part12
data_files: unbalanced_train_part12.json
# dataset_size: 46865
- config_name: unbalanced_train_part13
data_files: unbalanced_train_part13.json
# dataset_size: 46800
- config_name: unbalanced_train_part14
data_files: unbalanced_train_part14.json
# dataset_size: 46837
- config_name: unbalanced_train_part15
data_files: unbalanced_train_part15.json
# dataset_size: 46824
- config_name: unbalanced_train_part16
data_files: unbalanced_train_part16.json
# dataset_size: 46813
- config_name: unbalanced_train_part17
data_files: unbalanced_train_part17.json
# dataset_size: 46771
- config_name: unbalanced_train_part18
data_files: unbalanced_train_part18.json
# dataset_size: 46875
- config_name: unbalanced_train_part19
data_files: unbalanced_train_part19.json
# dataset_size: 46885
- config_name: unbalanced_train_part20
data_files: unbalanced_train_part20.json
# dataset_size: 46884
- config_name: unbalanced_train_part21
data_files: unbalanced_train_part21.json
# dataset_size: 46736
- config_name: unbalanced_train_part22
data_files: unbalanced_train_part22.json
# dataset_size: 46832
- config_name: unbalanced_train_part23
data_files: unbalanced_train_part23.json
# dataset_size: 46823
- config_name: unbalanced_train_part24
data_files: unbalanced_train_part24.json
# dataset_size: 46795
- config_name: unbalanced_train_part25
data_files: unbalanced_train_part25.json
# dataset_size: 46740
- config_name: unbalanced_train_part26
data_files: unbalanced_train_part26.json
# dataset_size: 46765
- config_name: unbalanced_train_part27
data_files: unbalanced_train_part27.json
# dataset_size: 46708
- config_name: unbalanced_train_part28
data_files: unbalanced_train_part28.json
# dataset_size: 46736
- config_name: unbalanced_train_part29
data_files: unbalanced_train_part29.json
# dataset_size: 46819
- config_name: unbalanced_train_part30
data_files: unbalanced_train_part30.json
# dataset_size: 46694
- config_name: unbalanced_train_part31
data_files: unbalanced_train_part31.json
# dataset_size: 46735
- config_name: unbalanced_train_part32
data_files: unbalanced_train_part32.json
# dataset_size: 46731
- config_name: unbalanced_train_part33
data_files: unbalanced_train_part33.json
# dataset_size: 46627
- config_name: unbalanced_train_part34
data_files: unbalanced_train_part34.json
# dataset_size: 46740
- config_name: unbalanced_train_part35
data_files: unbalanced_train_part35.json
# dataset_size: 46866
- config_name: unbalanced_train_part36
data_files: unbalanced_train_part36.json
# dataset_size: 46758
- config_name: unbalanced_train_part37
data_files: unbalanced_train_part37.json
# dataset_size: 46751
- config_name: unbalanced_train_part38
data_files: unbalanced_train_part38.json
# dataset_size: 46750
- config_name: unbalanced_train_part39
data_files: unbalanced_train_part39.json
# dataset_size: 46700
- config_name: unbalanced_train_part40
data_files: unbalanced_train_part40.json
# dataset_size: 39137
task_categories:
- audio-classification
tags:
- audioset
size_categories:
- 100M<n<1B
---
# AudioSet
AudioSet<sup>[1]</sup> consists of an expanding ontology of 527 audio event classes and a collection of 2M human-labelled 10-second sound clips drawn from YouTube.
Some clips are no longer available on YouTube, so the number of files that can be downloaded varies over time.
This repository contains 20550 / 22160 of the balanced train set, 1913637 / 2041789 of the unbalanced train set (separated into 41 parts), and 18887 / 20371 of the evaluation set.
The preprocessing script can be found in qiuqiangkong's [GitHub repository](https://github.com/qiuqiangkong/audioset_tagging_cnn)<sup>[2]</sup>.
To improve training efficiency, we add a slightly more balanced subset AudioSet500K<sup>[3]</sup>.
## References
1. Gemmeke, Jort F., et al., Audio set: An ontology and human-labeled dataset for audio events, 2017
2. Kong, Qiuqiang, et al., Panns: Large-scale pretrained audio neural networks for audio pattern recognition, 2020
3. Nagrani, Arsha, et al., Attention bottlenecks for multimodal fusion, 2021 | 5,697 | [
[
-0.045318603515625,
0.00859832763671875,
-0.004840850830078125,
0.023895263671875,
-0.0120849609375,
-0.0104827880859375,
-0.04083251953125,
-0.02734375,
0.023956298828125,
0.0269012451171875,
-0.07208251953125,
-0.0252227783203125,
-0.039703369140625,
-0.00... |
Trelis/touch-rugby-rules | 2023-09-30T13:16:06.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"fine-tuning",
"touch rugby",
"region:us"
] | Trelis | null | null | 0 | 15 | 2023-09-12T10:55:36 | ---
task_categories:
- text-generation
language:
- en
tags:
- fine-tuning
- touch rugby
size_categories:
- n<1K
---
# Touch Rugby Rules Dataset
train.csv comprises a set of questions based on the rules from the [International Touch website](https://cdn.internationaltouch.org/public/FIT%205th%20Edition%20Rulebook.pdf).
For educational and non-commercial use only. | 367 | [
[
-0.02447509765625,
-0.02044677734375,
-0.0151214599609375,
0.042633056640625,
-0.012481689453125,
-0.00554656982421875,
0.0037364959716796875,
-0.0236968994140625,
0.02093505859375,
0.054443359375,
-0.07330322265625,
-0.027252197265625,
-0.0165252685546875,
... |
InstaDeepAI/instanovo_ninespecies_exclude_yeast | 2023-09-15T13:16:02.000Z | [
"license:cc0-1.0",
"region:us"
] | InstaDeepAI | null | null | 1 | 15 | 2023-09-15T09:29:15 | ---
license: cc0-1.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sequence
dtype: string
- name: modified_sequence
dtype: string
- name: precursor_mz
dtype: float64
- name: precursor_charge
dtype: int64
- name: mz_array
sequence: float64
- name: intensity_array
sequence: float32
splits:
- name: train
num_bytes: 839098224
num_examples: 499402
- name: validation
num_bytes: 49792990
num_examples: 28572
- name: test
num_bytes: 45505134
num_examples: 27142
download_size: 1119691599
dataset_size: 934396348
---
# Dataset Card for Nine-Species excluding Yeast
Dataset used for the baseline comparison of InstaNovo to other models.
## Dataset Description
- **Repository:** [InstaNovo](https://github.com/instadeepai/InstaNovo)
- **Paper:** [De novo peptide sequencing with InstaNovo: Accurate, database-free peptide identification for large scale proteomics experiments](https://www.biorxiv.org/content/10.1101/2023.08.30.555055v1)
### Dataset Summary
Dataset used in the original [DeepNovo](https://www.pnas.org/doi/full/10.1073/pnas.1705691114) paper.
- The training set contains 8 species excluding yeast
- The validation/test set contains the yeast species
## Dataset Structure
The dataset is tabular, where each row corresponds to a labelled MS2 spectrum.
- `sequence (string)` \
The target peptide sequence excluding post-translational modifications
- `modified_sequence (string)` \
The target peptide sequence including post-translational modifications
- `precursor_mz (float64)` \
The mass-to-charge of the precursor (from MS1)
- `precursor_charge (int64)` \
The charge of the precursor (from MS1)
- `mz_array (list[float64])` \
The mass-to-charge values of the MS2 spectrum
- `intensity_array (list[float32])` \
The intensity values of the MS2 spectrum
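For intuition about the precursor fields: `precursor_mz`, `precursor_charge`, and the neutral peptide mass are linked by the standard mass-spectrometry relation m/z = (M + z * m_proton) / z. This is general background rather than anything stated in the card; a minimal sketch:

```python
PROTON_MASS = 1.007276  # proton mass in Daltons (standard constant)

def precursor_mz(neutral_mass: float, charge: int) -> float:
    """Standard m/z of a precursor ion: (M + z * m_proton) / z."""
    return (neutral_mass + charge * PROTON_MASS) / charge

# hypothetical doubly charged peptide with neutral mass 1000 Da
mz = precursor_mz(1000.0, 2)  # → 501.007276
```

In the dataset, `precursor_mz` and `precursor_charge` are observed MS1 values; the sketch only illustrates how the two fields relate to peptide mass.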
## Citation Information
If you use this dataset, please cite the original authors.
The original data is available on [MASSIVE](https://massive.ucsd.edu/ProteoSAFe/static/massive.jsp) with the identifier `MSV000081382`.
Please also cite InstaNovo:
```bibtex
@article{eloff_kalogeropoulos_2023_instanovo,
title = {De novo peptide sequencing with InstaNovo: Accurate, database-free peptide identification for large scale proteomics experiments},
author = {Kevin Eloff and Konstantinos Kalogeropoulos and Oliver Morell and Amandla Mabona and Jakob Berg Jespersen and Wesley Williams and Sam van Beljouw and Marcin Skwark and Andreas Hougaard Laustsen and Stan J. J. Brouns and Anne Ljungars and Erwin Marten Schoof and Jeroen Van Goey and Ulrich auf dem Keller and Karim Beguir and Nicolas Lopez Carranza and Timothy Patrick Jenkins},
year = {2023},
doi = {10.1101/2023.08.30.555055},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/10.1101/2023.08.30.555055v1},
journal = {bioRxiv}
}
``` | 3,030 | [
[
-0.01074981689453125,
-0.00323486328125,
0.018768310546875,
0.00043463706970214844,
-0.0287933349609375,
0.0174713134765625,
-0.0037097930908203125,
-0.00988006591796875,
0.00981903076171875,
0.005565643310546875,
-0.0221099853515625,
-0.046630859375,
-0.0274658... |
bibidentuhanoi/BMO_BASE_TEXT | 2023-10-11T16:09:08.000Z | [
"region:us"
] | bibidentuhanoi | null | null | 0 | 15 | 2023-09-19T15:26:35 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 154049
num_examples: 278
download_size: 84465
dataset_size: 154049
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "BMO_BASE_TEXT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 436 | [
[
-0.02032470703125,
-0.032989501953125,
0.0205535888671875,
0.0233306884765625,
-0.0275115966796875,
-0.00774383544921875,
0.00035858154296875,
-0.01241302490234375,
0.0323486328125,
0.047210693359375,
-0.051788330078125,
-0.054901123046875,
-0.0545654296875,
... |
Falah/new_photorealistic_prompts | 2023-09-20T07:37:34.000Z | [
"region:us"
] | Falah | null | null | 0 | 15 | 2023-09-20T07:37:33 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1492287
num_examples: 10000
download_size: 345550
dataset_size: 1492287
---
# Dataset Card for "new_photorealistic_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 371 | [
[
-0.052642822265625,
-0.031219482421875,
0.0249481201171875,
0.0172882080078125,
-0.0171661376953125,
0.0062103271484375,
0.0156402587890625,
-0.0052490234375,
0.05828857421875,
0.027862548828125,
-0.0762939453125,
-0.0537109375,
-0.0250701904296875,
-0.00456... |
dim/databricks_dolly_15k_en | 2023-09-20T15:47:41.000Z | [
"region:us"
] | dim | null | null | 0 | 15 | 2023-09-20T15:47:37 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 12195589
num_examples: 15011
download_size: 7749182
dataset_size: 12195589
---
# Dataset Card for "databricks-dolly-15k_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
[
-0.0284271240234375,
-0.019287109375,
-0.0038909912109375,
0.045074462890625,
-0.024078369140625,
0.008148193359375,
0.036651611328125,
-0.004253387451171875,
0.058990478515625,
0.03302001953125,
-0.0655517578125,
-0.04730224609375,
-0.040740966796875,
0.003... |
spacemanidol/dset | 2023-09-26T19:09:18.000Z | [
"region:us"
] | spacemanidol | null | 0 | 15 | 2023-09-21T18:50:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... | |
goendalf666/sales-conversations | 2023-10-04T20:39:04.000Z | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"sales",
"arxiv:2306.11644",
"region:us"
] | goendalf666 | null | null | 4 | 15 | 2023-09-21T21:37:30 | ---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- conversational
dataset_info:
features:
- name: '0'
dtype: string
- name: '1'
dtype: string
- name: '2'
dtype: string
- name: '3'
dtype: string
- name: '4'
dtype: string
- name: '5'
dtype: string
- name: '6'
dtype: string
- name: '7'
dtype: string
- name: '8'
dtype: string
- name: '9'
dtype: string
- name: '10'
dtype: string
- name: '11'
dtype: string
- name: '12'
dtype: string
- name: '13'
dtype: string
- name: '14'
dtype: string
- name: '15'
dtype: string
- name: '16'
dtype: string
- name: '17'
dtype: string
- name: '18'
dtype: string
- name: '19'
dtype: string
splits:
- name: train
num_bytes: 6821725
num_examples: 3412
download_size: 2644154
dataset_size: 6821725
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- sales
---
# Dataset Card for "sales-conversations"
This dataset was created for the purpose of training a sales agent chatbot that can convince people.
The initial idea came from the paper "Textbooks Are All You Need": https://arxiv.org/abs/2306.11644
gpt-3.5-turbo was used to generate the conversations.
# Structure
Each conversation alternates between a customer and a salesman: customer, salesman, customer, salesman, and so on.
The customer always starts the conversation.
Which role ends the conversation is not defined.
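Given that structure, a row's numbered string columns (`'0'` through `'19'`) can be rebuilt into role-tagged turns; a minimal sketch with a made-up row (not taken from the dataset):

```python
def to_turns(row: dict) -> list[tuple[str, str]]:
    """Rebuild (role, utterance) pairs from a numbered-column row.

    Columns are read in numeric order; even indices are the customer,
    odd indices the salesman. Empty or missing columns are skipped.
    """
    turns = []
    for i in range(20):  # the dataset has columns '0' through '19'
        text = row.get(str(i))
        if not text:
            continue
        role = "Customer" if i % 2 == 0 else "Salesman"
        turns.append((role, text))
    return turns

# made-up row for illustration
row = {"0": "Hi, I'm looking at your CRM plans.", "1": "Happy to help! What team size?", "2": None}
turns = to_turns(row)  # [('Customer', ...), ('Salesman', ...)]
```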
# Generation
Note that a textbook dataset is mandatory for this conversation generation. These examples rely on the following textbook dataset:
https://huggingface.co/datasets/goendalf666/sales-textbook_for_convincing_and_selling
The data generation code can be found here: https://github.com/tom813/salesGPT_foundation/blob/main/data_generation/textbook_and_conversation_gen.py
The following prompt was used to create a conversation
```
import random

def create_random_prompt(chapter, roles=["Customer", "Salesman"], range_vals=(3, 7), industries=None):
    if industries is None:
        industries = ["tech", "health", "finance"]  # default industries; replace with your default list if different

    x = random.randint(*range_vals)
    y = 0
    for i in reversed(range(3, 9)):  # Generalized loop for range of values
        if i * x < 27:
            y = i
            break

    conversation_structure = ""
    for i in range(1, x + 1):
        conversation_structure += f"""
{roles[0]}: #{i}. sentence of {roles[0].lower()}
{roles[1]}: #{i}. sentence of {roles[1].lower()}"""

    prompt = f"""Here is a chapter from a textbook about convincing people.
The purpose of this data is to use it to fine tune a llm.
Generate conversation examples that are based on the chapter that is provided and would help an ai to learn the topic by examples.
Focus only on the topic that is given in the chapter when generating the examples.
Let the example be in the {random.choice(industries)} industry.
Follow this structure and put each conversation in a list of objects in json format. Only return the json nothing more:
{conversation_structure}
Generate {y} lists of those conversations
Chapter:{chapter}"""
    return prompt
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,403 | [
[
-0.01055145263671875,
-0.0653076171875,
0.0180511474609375,
-0.005970001220703125,
-0.006671905517578125,
-0.021697998046875,
-0.01378631591796875,
-0.004108428955078125,
0.0035228729248046875,
0.048248291015625,
-0.055938720703125,
-0.056488037109375,
-0.004665... |
Vaibhav9401/toxic75k | 2023-09-22T16:39:35.000Z | [
"region:us"
] | Vaibhav9401 | null | null | 0 | 15 | 2023-09-22T16:29:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: llama_finetune_text
dtype: string
splits:
- name: train
num_bytes: 61395720
num_examples: 72313
download_size: 11452836
dataset_size: 61395720
---
# Dataset Card for "toxic75k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.0270233154296875,
-0.005802154541015625,
0.01081085205078125,
0.0191192626953125,
-0.0200653076171875,
0.00519561767578125,
0.01898193359375,
-0.01318359375,
0.052337646484375,
0.038421630859375,
-0.057647705078125,
-0.06402587890625,
-0.03753662109375,
-... |
ssahir/english_finance_news | 2023-09-25T10:18:49.000Z | [
"region:us"
] | ssahir | null | null | 1 | 15 | 2023-09-25T06:40:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: newssource
dtype: string
- name: newscontents
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 4297005.661361627
num_examples: 24429
- name: test
num_bytes: 477562.3386383731
num_examples: 2715
download_size: 0
dataset_size: 4774568.0
---
# Dataset Card for "english_finance_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 644 | [
[
-0.0202789306640625,
-0.0199737548828125,
0.01227569580078125,
0.0297698974609375,
-0.0264739990234375,
0.01019287109375,
-0.00307464599609375,
-0.01546478271484375,
0.063232421875,
0.0204315185546875,
-0.0491943359375,
-0.061065673828125,
-0.0443115234375,
... |
tyzhu/squad_for_gpt_train_1000_100 | 2023-09-25T09:48:13.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 15 | 2023-09-25T07:26:43 | ---
dataset_info:
features:
- name: text
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 3564228.0
num_examples: 1000
- name: validation
num_bytes: 371624
num_examples: 100
download_size: 2479909
dataset_size: 3935852.0
---
# Dataset Card for "squad_for_gpt_train_1000_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 700 | [
[
-0.03875732421875,
-0.01018524169921875,
0.0147857666015625,
0.0281219482421875,
-0.006744384765625,
0.00560760498046875,
0.0276336669921875,
0.00937652587890625,
0.041046142578125,
0.011260986328125,
-0.0811767578125,
-0.036041259765625,
-0.036712646484375,
... |
erhwenkuo/alpaca-data-gpt4-chinese-zhtw | 2023-09-26T14:03:00.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"gpt4",
"alpaca",
"instruction-finetuning",
"arxiv:2304.03277",
"region:us"
] | erhwenkuo | null | null | 1 | 15 | 2023-09-26T13:42:02 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 33817106
num_examples: 52049
download_size: 22275874
dataset_size: 33817106
task_categories:
- text-generation
- conversational
- question-answering
language:
- zh
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- gpt4
- alpaca
- instruction-finetuning
pretty_name: ' alpaca-data-gpt4-chinese-zhtw'
size_categories:
- 10K<n<100K
---
# Dataset Card for "alpaca-data-gpt4-chinese-zhtw"
This dataset contains Chinese (zh-tw) instruction-following data generated by GPT-4 using Alpaca prompts, intended for fine-tuning LLMs.
The dataset was originally shared in this repository: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM. This dataset is a translation of the English original into Traditional Chinese.
## Dataset Description
- **Homepage:** https://instruction-tuning-with-gpt-4.github.io
- **Repository:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- **Paper:** https://arxiv.org/abs/2304.03277
## Dataset structure
It contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-4`.
- `text`: `str`, all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginning.
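As a sketch of how `text` is assembled from the other fields (a reconstruction of the standard Alpaca template, not code from this repository; the no-input variant of the template is omitted):

```python
# Standard Alpaca prompt template for examples that have an input field
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def build_text(example: dict) -> str:
    """Concatenate instruction, input, and output into the Alpaca `text` field."""
    return ALPACA_TEMPLATE.format(**example)

example = {
    "instruction": "Identify the odd one out.",
    "input": "Twitter, Instagram, Telegram",
    "output": "Telegram",
}
text = build_text(example)
```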
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts but generates the completions with GPT-4. Thus, in general, the responses are of higher quality and length. Here is an example:
#### Example from Alpaca-GPT4:
```bash
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'The odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nThe odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.'}
```
#### Same example from original Alpaca:
```bash
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'Telegram',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nTelegram'}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). | 3,454 | [
[
-0.0311126708984375,
-0.08258056640625,
0.0302886962890625,
0.0221099853515625,
-0.0390625,
-0.00699615478515625,
0.004180908203125,
-0.032440185546875,
0.033111572265625,
0.043792724609375,
-0.08056640625,
-0.052001953125,
-0.04388427734375,
0.0191955566406... |
DanArnin/Hinglish2 | 2023-09-27T05:24:38.000Z | [
"region:us"
] | DanArnin | null | null | 0 | 15 | 2023-09-27T05:24:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
AnthonyRayo/AutomAssist3 | 2023-09-28T09:21:43.000Z | [
"region:us"
] | AnthonyRayo | null | null | 0 | 15 | 2023-09-28T09:21:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
rmanluo/RoG-cwq | 2023-10-01T23:47:36.000Z | [
"region:us"
] | rmanluo | null | null | 1 | 15 | 2023-10-01T23:29:54 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
sequence: string
- name: q_entity
sequence: string
- name: a_entity
sequence: string
- name: graph
sequence:
sequence: string
- name: choices
sequence: 'null'
splits:
- name: train
num_bytes: 8890766478
num_examples: 27639
- name: validation
num_bytes: 1170336525
num_examples: 3519
- name: test
num_bytes: 1208452620
num_examples: 3531
download_size: 1993772283
dataset_size: 11269555623
---
# Dataset Card for "RoG-cwq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 913 | [
[
-0.029632568359375,
-0.021759033203125,
-0.00011783838272094727,
0.0014286041259765625,
-0.018310546875,
0.00789642333984375,
0.026336669921875,
-0.0138397216796875,
0.0406494140625,
0.04058837890625,
-0.0697021484375,
-0.058807373046875,
-0.0321044921875,
-... |
Dong237/empathetic_dialogues_instruction | 2023-10-03T18:30:50.000Z | [
"region:us"
] | Dong237 | null | null | 0 | 15 | 2023-10-03T18:30:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: dialogue
dtype: string
splits:
- name: train
num_bytes: 6392746
num_examples: 17780
- name: validation
num_bytes: 1076044
num_examples: 2758
- name: test
num_bytes: 1037401
num_examples: 2540
download_size: 4612892
dataset_size: 8506191
---
# Dataset Card for "empathetic_dialogues_instruction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 721 | [
[
-0.0252685546875,
-0.0445556640625,
0.02545166015625,
0.01611328125,
0.0022983551025390625,
-0.00795745849609375,
-0.006725311279296875,
0.012542724609375,
0.058563232421875,
0.0236358642578125,
-0.0767822265625,
-0.058685302734375,
-0.039398193359375,
-0.02... |
tessiw/german_OpenOrca_Format1 | 2023-10-11T15:53:01.000Z | [
"region:us"
] | tessiw | null | null | 0 | 15 | 2023-10-04T11:02:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 462202853
num_examples: 250000
download_size: 254684069
dataset_size: 462202853
---
# Dataset Card for "german_OpenOrca_Format1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 490 | [
[
-0.04888916015625,
-0.030364990234375,
0.0019626617431640625,
0.0297698974609375,
-0.0188140869140625,
-0.028594970703125,
-0.001354217529296875,
-0.006031036376953125,
0.062469482421875,
0.0304107666015625,
-0.049713134765625,
-0.078857421875,
-0.03594970703125... |
the-rizz/the-rizz-corpus | 2023-10-04T14:43:56.000Z | [
"region:us"
] | the-rizz | null | null | 0 | 15 | 2023-10-04T14:43:36 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Tural/wiki-unzh | 2023-10-05T10:09:40.000Z | [
"region:us"
] | Tural | null | null | 0 | 15 | 2023-10-05T09:57:04 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 20277571711
num_examples: 6458670
download_size: 11689463675
dataset_size: 20277571711
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wiki-unzh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 549 | [
[
-0.038848876953125,
-0.00534820556640625,
0.01007843017578125,
0.007160186767578125,
-0.0325927734375,
-0.002590179443359375,
0.01480865478515625,
-0.0028057098388671875,
0.04266357421875,
0.035369873046875,
-0.062408447265625,
-0.05352783203125,
-0.023910522460... |
tog/dolphin_5k_test | 2023-10-06T15:06:19.000Z | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] | tog | null | null | 0 | 15 | 2023-10-06T14:46:00 | ---
language:
- en
license: apache-2.0
task_categories:
- text-generation
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8726321.400179625
num_examples: 5000
download_size: 4973800
dataset_size: 8726321.400179625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Tiny Dolphin 🐬
see https://erichartford.com/dolphin
## Dataset details
This dataset is an extract from the ~1 million FLANv2 entries augmented with GPT-4 completions (flan1m-alpaca-uncensored.jsonl). It is derived from this [dataset](https://huggingface.co/datasets/ehartford/dolphin).
### Loading
```python
dataset = load_dataset("tog/dolphin_5k_test")
```
This dataset is licensed under Apache-2.0 for commercial or non-commercial use. | 866 | [
[
-0.0655517578125,
-0.0251312255859375,
0.00933074951171875,
0.007320404052734375,
-0.01995849609375,
-0.033233642578125,
0.004337310791015625,
-0.0523681640625,
0.033172607421875,
0.05084228515625,
-0.05035400390625,
0.0003600120544433594,
-0.044830322265625,
... |
Safeer143/eli5_dataset_title_text | 2023-10-18T10:56:46.000Z | [
"region:us"
] | Safeer143 | null | null | 0 | 15 | 2023-10-07T22:15:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1224245207
num_examples: 1442904
download_size: 0
dataset_size: 1224245207
---
# Dataset Card for "eli5_dataset_title_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 454 | [
[
-0.036834716796875,
-0.022216796875,
0.0187530517578125,
0.0056915283203125,
-0.01549530029296875,
-0.00298309326171875,
0.01520538330078125,
-0.0129852294921875,
0.045196533203125,
0.034820556640625,
-0.052703857421875,
-0.052764892578125,
-0.04522705078125,
... |
RorooroR/JazzHiphop | 2023-10-09T09:03:32.000Z | [
"region:us"
] | RorooroR | null | null | 0 | 15 | 2023-10-09T08:06:37 | ---
dataset_info:
features:
- name: image
dtype: image
- name: audio_file
dtype: string
- name: slice
dtype: int16
splits:
- name: train
num_bytes: 191805587.75
num_examples: 4378
download_size: 191445041
dataset_size: 191805587.75
---
# Dataset Card for "JazzHiphop"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 436 | [
[
-0.031890869140625,
-0.002010345458984375,
0.006938934326171875,
0.034027099609375,
-0.0009508132934570312,
0.0095062255859375,
0.0125579833984375,
-0.0198822021484375,
0.061798095703125,
0.042633056640625,
-0.07342529296875,
-0.050628662109375,
-0.0321350097656... |
Skiittoo/cartoon-faces | 2023-10-09T13:14:29.000Z | [
"region:us"
] | Skiittoo | null | null | 0 | 15 | 2023-10-09T13:13:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 646360781.0
num_examples: 10000
download_size: 647319030
dataset_size: 646360781.0
---
# Dataset Card for "cartoon-faces"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
[
-0.05328369140625,
-0.017791748046875,
0.0043487548828125,
0.0274200439453125,
-0.01505279541015625,
0.01306915283203125,
0.01021575927734375,
-0.016021728515625,
0.078369140625,
0.037567138671875,
-0.06488037109375,
-0.03912353515625,
-0.0457763671875,
-0.0... |
qazisaad/news_recommendations_base_vectorized | 2023-10-09T14:16:57.000Z | [
"region:us"
] | qazisaad | null | null | 0 | 15 | 2023-10-09T14:16:55 | ---
dataset_info:
features:
- name: category
dtype: string
- name: sub-category
dtype: string
- name: title
dtype: string
- name: times
dtype: timestamp[ns]
- name: url
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 7692557
num_examples: 3981
download_size: 9317253
dataset_size: 7692557
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "news_recommendations_base_vectorized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 659 | [
[
-0.037139892578125,
-0.0185089111328125,
0.01047515869140625,
0.02093505859375,
-0.0258941650390625,
-0.00838470458984375,
0.01033782958984375,
0.004077911376953125,
0.06182861328125,
0.029266357421875,
-0.056610107421875,
-0.07867431640625,
-0.045562744140625,
... |
iara-project/train_split_with_embeddings_bert_base_portuguese | 2023-10-09T23:47:22.000Z | [
"region:us"
] | iara-project | null | null | 0 | 15 | 2023-10-09T23:46:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: news_id
dtype: int64
- name: embeddings
sequence: float64
- name: sentence
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 1670924670
num_examples: 176114
download_size: 1232112225
dataset_size: 1670924670
---
# Dataset Card for "train_split_with_embeddings_bert_base_portuguese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 606 | [
[
-0.04583740234375,
-0.0174407958984375,
0.004486083984375,
0.037261962890625,
-0.03521728515625,
0.00569915771484375,
-0.0025196075439453125,
-0.00927734375,
0.06610107421875,
0.0213775634765625,
-0.050567626953125,
-0.045928955078125,
-0.051849365234375,
-0... |
madaanpulkit/tab-wnut | 2023-11-02T06:07:27.000Z | [
"region:us"
] | madaanpulkit | null | null | 0 | 15 | 2023-10-11T07:38:29 | ---
dataset_info:
features:
- name: text
dtype: string
- name: tokens
sequence: string
- name: tagged_text
sequence: string
- name: tags
sequence:
class_label:
names:
'0': '0'
'1': B-DIRECT-CODE
'2': I-DIRECT-CODE
'3': B-DIRECT-PERSON
'4': I-DIRECT-PERSON
'5': B-QUASI-DATETIME
'6': I-QUASI-DATETIME
'7': B-QUASI-PERSON
'8': I-QUASI-PERSON
'9': B-QUASI-LOC
'10': I-QUASI-LOC
'11': B-QUASI-QUANTITY
'12': I-QUASI-QUANTITY
'13': B-QUASI-CODE
'14': I-QUASI-CODE
'15': B-QUASI-ORG
'16': I-QUASI-ORG
'17': B-QUASI-DEM
'18': I-QUASI-DEM
'19': B-QUASI-MISC
'20': I-QUASI-MISC
'21': B-DIRECT-ORG
'22': I-DIRECT-ORG
'23': B-DIRECT-DATETIME
'24': I-DIRECT-DATETIME
'25': B-DIRECT-LOC
'26': I-DIRECT-LOC
'27': B-DIRECT-MISC
'28': I-DIRECT-MISC
'29': B-DIRECT-DEM
'30': I-DIRECT-DEM
splits:
- name: train
num_bytes: 45872319
num_examples: 1014
- name: dev
num_bytes: 3749307
num_examples: 127
- name: test
num_bytes: 3619745
num_examples: 127
download_size: 11056816
dataset_size: 53241371
---
# Dataset Card for "tab-wnut"
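The `tags` feature uses BIO-encoded labels such as `B-DIRECT-PERSON` / `I-DIRECT-PERSON`, with `'0'` as the outside tag. A minimal sketch of decoding a tag sequence (shown here as strings rather than class-label ids) into entity spans:

```python
def bio_to_spans(tags: list[str]) -> list[tuple[int, int, str]]:
    """Decode BIO tags into (start, end, label) spans; end is exclusive."""
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags + ["0"]):  # sentinel flushes a trailing entity
        # close the open span on outside, on a new B-, or on an I- with a different label
        if tag == "0" or tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != label):
            if start is not None:
                spans.append((start, i, label))
                start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            # tolerate an I- tag without a preceding B- by opening a span
            start, label = i, tag[2:]
    return spans

tags = ["B-DIRECT-PERSON", "I-DIRECT-PERSON", "0", "B-QUASI-LOC"]
spans = bio_to_spans(tags)  # [(0, 2, 'DIRECT-PERSON'), (3, 4, 'QUASI-LOC')]
```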
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,530 | [
[
-0.044189453125,
-0.031890869140625,
0.007213592529296875,
0.0106964111328125,
-0.01197052001953125,
0.0184173583984375,
-0.004848480224609375,
-0.00687408447265625,
0.067626953125,
0.03900146484375,
-0.05859375,
-0.05743408203125,
-0.0236663818359375,
-0.01... |
zenn19991231/ADL_HW1_Datas | 2023-10-12T13:25:40.000Z | [
"region:us"
] | zenn19991231 | null | null | 0 | 15 | 2023-10-12T13:02:03 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sandeep12345/roberta_finetune | 2023-10-16T15:38:14.000Z | [
"region:us"
] | sandeep12345 | null | null | 0 | 15 | 2023-10-12T18:52:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
OpenPipe/hacker-news | 2023-11-02T13:41:53.000Z | [
"region:us"
] | OpenPipe | null | null | 0 | 15 | 2023-10-13T19:44:25 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: type
dtype: string
- name: by
dtype: string
- name: time
dtype: timestamp[us]
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: score
dtype: float64
- name: parent
dtype: float64
- name: top_level_parent
dtype: int64
- name: descendants
dtype: float64
- name: kids
sequence: int64
- name: deleted
dtype: bool
- name: dead
dtype: bool
splits:
- name: train
num_bytes: 16886975696
num_examples: 38109500
download_size: 9948795138
dataset_size: 16886975696
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Hacker News posts and comments
This is a dataset of all HN posts and comments, current as of November 1, 2023. | 858 | [
[
-0.007282257080078125,
-0.07513427734375,
0.050262451171875,
0.0233001708984375,
-0.0238494873046875,
-0.0010271072387695312,
0.0127105712890625,
-0.026763916015625,
0.07672119140625,
0.074951171875,
-0.0411376953125,
-0.03851318359375,
-0.022674560546875,
0... |
phatjk/wikipedia_vi_qa | 2023-10-14T06:32:07.000Z | [
"region:us"
] | phatjk | null | null | 0 | 15 | 2023-10-14T06:32:05 | ---
dataset_info:
features:
- name: text
dtype: string
- name: question
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 8523200
num_examples: 20107
download_size: 4759406
dataset_size: 8523200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikipedia_vi_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 514 | [
[
-0.050537109375,
-0.02215576171875,
0.01678466796875,
0.0029163360595703125,
-0.0200042724609375,
-0.01495361328125,
0.0186309814453125,
-0.01018524169921875,
0.05908203125,
0.01334381103515625,
-0.051849365234375,
-0.05712890625,
-0.01500701904296875,
-0.01... |
Andrei481/alpaca-gpt4-ro-subset | 2023-10-14T11:53:04.000Z | [
"region:us"
] | Andrei481 | null | null | 0 | 15 | 2023-10-14T11:52:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
godoyj/pt-squad-generate-answer | 2023-10-15T14:54:30.000Z | [
"region:us"
] | godoyj | null | null | 0 | 15 | 2023-10-15T14:53:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input_ids
dtype: string
- name: answers
struct:
- name: answer_start
dtype: int64
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 78166150
num_examples: 87510
- name: validation
num_bytes: 9717596
num_examples: 10570
download_size: 19115754
dataset_size: 87883746
---
# Dataset Card for "pt-squad-generate-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 721 | [
[
-0.0455322265625,
-0.0284576416015625,
0.0168609619140625,
0.0244598388671875,
-0.01922607421875,
0.007106781005859375,
0.0289154052734375,
-0.0007634162902832031,
0.050048828125,
0.021697998046875,
-0.089111328125,
-0.03179931640625,
-0.033843994140625,
-0.... |
Nathan757/arxiv | 2023-10-15T22:02:04.000Z | [
"region:us"
] | Nathan757 | null | null | 0 | 15 | 2023-10-15T21:45:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
NoahBSchwartz/LLM-Link-Reward-Model-Training | 2023-10-16T21:32:22.000Z | [
"region:us"
] | NoahBSchwartz | null | null | 0 | 15 | 2023-10-16T21:07:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
HumanCompatibleAI/random-seals-Hopper-v1 | 2023-10-17T05:39:21.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | 0 | 15 | 2023-10-17T05:39:04 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 68885506
num_examples: 100
download_size: 31758126
dataset_size: 68885506
---
# Dataset Card for "random-seals-Hopper-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 547 | [
[
-0.038330078125,
-0.01136016845703125,
0.00492095947265625,
0.01468658447265625,
-0.0259552001953125,
-0.01364898681640625,
0.049713134765625,
-0.02081298828125,
0.0745849609375,
0.04412841796875,
-0.06793212890625,
-0.0460205078125,
-0.059600830078125,
-0.0... |
HumanCompatibleAI/random-seals-Swimmer-v1 | 2023-10-17T05:41:05.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | 0 | 15 | 2023-10-17T05:40:37 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 138046530
num_examples: 100
download_size: 36347782
dataset_size: 138046530
---
# Dataset Card for "random-seals-Swimmer-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 550 | [
[
-0.03948974609375,
-0.0018520355224609375,
0.0200042724609375,
0.017303466796875,
-0.040679931640625,
-0.0103912353515625,
0.045013427734375,
-0.021331787109375,
0.065185546875,
0.042816162109375,
-0.0635986328125,
-0.04315185546875,
-0.05096435546875,
-0.01... |
garrett361/lore_mc_task_test | 2023-10-17T14:01:51.000Z | [
"region:us"
] | garrett361 | null | null | 0 | 15 | 2023-10-17T14:01:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: number
dtype: string
- name: gold
dtype: string
- name: choices
sequence: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 10887.5
num_examples: 50
- name: validation
num_bytes: 5443.75
num_examples: 25
- name: test
num_bytes: 5443.75
num_examples: 25
download_size: 17841
dataset_size: 21775.0
---
# Dataset Card for "lore_mc_task_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 761 | [
[
-0.0273284912109375,
-0.02728271484375,
0.01751708984375,
0.01535797119140625,
-0.0015697479248046875,
0.0018777847290039062,
0.0191497802734375,
-0.01209259033203125,
0.053924560546875,
0.03900146484375,
-0.08026123046875,
-0.05328369140625,
-0.038330078125,
... |
sargishunanyan/thermo-classification | 2023-10-18T16:32:26.000Z | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"region:us"
] | sargishunanyan | null | @misc{ proj-2-qmdk0_dataset,
title = { proj 2 Dataset },
type = { Open Source Dataset },
author = { Yolo },
howpublished = { \url{ https://universe.roboflow.com/yolo-po0ro/proj-2-qmdk0 } },
url = { https://universe.roboflow.com/yolo-po0ro/proj-2-qmdk0 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { oct },
note = { visited on 2023-10-18 },
} | 0 | 15 | 2023-10-18T16:27:45 | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="sargishunanyan/thermo-classification" src="https://huggingface.co/datasets/sargishunanyan/thermo-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Thermostat', 'Housing', 'Insert']
```
### Number of Images
```json
{'valid': 102, 'test': 52, 'train': 372}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("sargishunanyan/thermo-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/yolo-po0ro/proj-2-qmdk0/dataset/3](https://universe.roboflow.com/yolo-po0ro/proj-2-qmdk0/dataset/3?ref=roboflow2huggingface)
### Citation
```
@misc{ proj-2-qmdk0_dataset,
title = { proj 2 Dataset },
type = { Open Source Dataset },
author = { Yolo },
howpublished = { \url{ https://universe.roboflow.com/yolo-po0ro/proj-2-qmdk0 } },
url = { https://universe.roboflow.com/yolo-po0ro/proj-2-qmdk0 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { oct },
note = { visited on 2023-10-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on October 8, 2023 at 7:58 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 526 images.
Car-parts are annotated in folder format.
No pre-processing or image augmentation techniques were applied to the images.
| 2,203 | [
[
-0.02349853515625,
-0.0102691650390625,
0.0208282470703125,
-0.0127105712890625,
-0.0207672119140625,
-0.0157623291015625,
-0.0116119384765625,
-0.027099609375,
0.0187835693359375,
0.0145263671875,
-0.038238525390625,
-0.04364013671875,
-0.0255584716796875,
... |
jstack32/LatinAccents | 2023-10-20T22:04:35.000Z | [
"task_categories:automatic-speech-recognition",
"source_datasets:extended|common_voice",
"language:en",
"license:apache-2.0",
"region:us"
] | jstack32 | null | null | 0 | 15 | 2023-10-18T20:59:39 | ---
language:
- en
license: apache-2.0
size_categories:
ab:
- 10K<n<100K
ar:
- 100K<n<1M
as:
- 1K<n<10K
ast:
- n<1K
az:
- n<1K
ba:
- 100K<n<1M
bas:
- 1K<n<10K
be:
- 100K<n<1M
bg:
- 1K<n<10K
bn:
- 100K<n<1M
br:
- 10K<n<100K
ca:
- 1M<n<10M
ckb:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 100K<n<1M
da:
- 1K<n<10K
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 1M<n<10M
es:
- 1M<n<10M
et:
- 10K<n<100K
eu:
- 100K<n<1M
fa:
- 100K<n<1M
fi:
- 10K<n<100K
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
gl:
- 10K<n<100K
gn:
- 1K<n<10K
ha:
- 1K<n<10K
hi:
- 10K<n<100K
hsb:
- 1K<n<10K
hu:
- 10K<n<100K
hy-AM:
- 1K<n<10K
ia:
- 10K<n<100K
id:
- 10K<n<100K
ig:
- 1K<n<10K
it:
- 100K<n<1M
ja:
- 10K<n<100K
ka:
- 10K<n<100K
kab:
- 100K<n<1M
kk:
- 1K<n<10K
kmr:
- 10K<n<100K
ky:
- 10K<n<100K
lg:
- 100K<n<1M
lt:
- 10K<n<100K
lv:
- 1K<n<10K
mdf:
- n<1K
mhr:
- 100K<n<1M
mk:
- n<1K
ml:
- 1K<n<10K
mn:
- 10K<n<100K
mr:
- 10K<n<100K
mrj:
- 10K<n<100K
mt:
- 10K<n<100K
myv:
- 1K<n<10K
nan-tw:
- 10K<n<100K
ne-NP:
- n<1K
nl:
- 10K<n<100K
nn-NO:
- n<1K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 100K<n<1M
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 10K<n<100K
ru:
- 100K<n<1M
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sat:
- n<1K
sc:
- 1K<n<10K
sk:
- 10K<n<100K
skr:
- 1K<n<10K
sl:
- 10K<n<100K
sr:
- 1K<n<10K
sv-SE:
- 10K<n<100K
sw:
- 100K<n<1M
ta:
- 100K<n<1M
th:
- 100K<n<1M
ti:
- n<1K
tig:
- n<1K
tok:
- 1K<n<10K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
tw:
- n<1K
ug:
- 10K<n<100K
uk:
- 10K<n<100K
ur:
- 100K<n<1M
uz:
- 100K<n<1M
vi:
- 10K<n<100K
vot:
- n<1K
yue:
- 10K<n<100K
zh-CN:
- 100K<n<1M
zh-HK:
- 100K<n<1M
zh-TW:
- 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
dataset_info:
features:
- name: path
dtype: string
- name: audio
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 102
num_examples: 2
download_size: 0
dataset_size: 102
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 6,848 | [
[
-0.04034423828125,
-0.0419921875,
0.009765625,
0.0178070068359375,
-0.0300445556640625,
-0.00893402099609375,
-0.0026874542236328125,
-0.048431396484375,
0.043212890625,
0.059478759765625,
-0.05938720703125,
-0.069580078125,
-0.042205810546875,
0.00993347167... |
zhen-dong-nexusflow/cvecpe_nested_multiapis_nlq_function_pairs | 2023-10-27T23:35:09.000Z | [
"region:us"
] | zhen-dong-nexusflow | null | null | 0 | 15 | 2023-10-18T22:04:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pablouribe/ocr_correction_fr | 2023-10-19T14:11:23.000Z | [
"region:us"
] | pablouribe | null | null | 0 | 15 | 2023-10-19T14:11:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: ocr_text
dtype: string
splits:
- name: train
num_bytes: 49989671.1
num_examples: 4500
- name: test
num_bytes: 5554407.9
num_examples: 500
download_size: 33241561
dataset_size: 55544079.0
---
# Dataset Card for "ocr_correction_fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 589 | [
[
-0.0255126953125,
-0.0196533203125,
0.011688232421875,
-0.00850677490234375,
-0.0026264190673828125,
-0.0097808837890625,
0.00536346435546875,
-0.0307159423828125,
0.04046630859375,
0.044677734375,
-0.0455322265625,
-0.047454833984375,
-0.03485107421875,
-0.... |
sam2ai/hindi_story_cloze_mini | 2023-10-20T20:06:35.000Z | [
"region:us"
] | sam2ai | null | null | 0 | 15 | 2023-10-19T21:05:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
dataset_info:
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: train
num_bytes: 39375
num_examples: 50
- name: eval
num_bytes: 39375
num_examples: 50
download_size: 55954
dataset_size: 78750
---
# Dataset Card for "hindi_story_cloze"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 849 | [
[
-0.03253173828125,
-0.0257568359375,
-0.0009202957153320312,
0.0247955322265625,
-0.036712646484375,
0.0012416839599609375,
-0.0034198760986328125,
-0.0194244384765625,
0.06427001953125,
0.021087646484375,
-0.058807373046875,
-0.0570068359375,
-0.0538330078125,
... |
jay401521/twolabels_test | 2023-10-21T09:26:20.000Z | [
"region:us"
] | jay401521 | null | null | 0 | 15 | 2023-10-21T09:18:33 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: domain
dtype: string
- name: label
dtype: int64
- name: rank
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1845580.6666666667
num_examples: 20014
download_size: 911747
dataset_size: 1845580.6666666667
---
# Dataset Card for "twolabels_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 512 | [
[
-0.041168212890625,
-0.0308685302734375,
-0.0012912750244140625,
0.0202178955078125,
-0.0079803466796875,
-0.00032973289489746094,
0.013671875,
-0.0201416015625,
0.042816162109375,
0.02740478515625,
-0.047943115234375,
-0.044342041015625,
-0.046905517578125,
... |
maxolotl/must-c-en-es-wait3-01 | 2023-10-22T06:40:33.000Z | [
"region:us"
] | maxolotl | null | null | 0 | 15 | 2023-10-22T06:40:15 | ---
dataset_info:
features:
- name: current_source
dtype: string
- name: current_target
dtype: string
- name: target_token
dtype: string
splits:
- name: train
num_bytes: 995393073
num_examples: 5241096
- name: test
num_bytes: 9963278
num_examples: 57200
- name: validation
num_bytes: 5434544
num_examples: 27561
download_size: 184391223
dataset_size: 1010790895
---
# Dataset Card for "must-c-en-es-wait3-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 597 | [
[
-0.05047607421875,
-0.01085662841796875,
0.031402587890625,
0.051971435546875,
-0.0091094970703125,
-0.00852203369140625,
0.023590087890625,
-0.0325927734375,
0.061065673828125,
0.0423583984375,
-0.08233642578125,
-0.044921875,
-0.043701171875,
0.01409149169... |
AdapterOcean/physics_dataset_standardized_cluster_0 | 2023-10-23T01:51:43.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 15 | 2023-10-22T18:30:31 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 16829331
num_examples: 1511
download_size: 0
dataset_size: 16829331
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "physics_dataset_standardized_cluster_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 582 | [
[
-0.0288848876953125,
-0.0148773193359375,
0.033477783203125,
0.020782470703125,
-0.019012451171875,
-0.00010031461715698242,
0.024139404296875,
-0.00487518310546875,
0.06488037109375,
0.01029205322265625,
-0.04779052734375,
-0.060699462890625,
-0.032562255859375... |
godoyj/cstnews-pt | 2023-10-22T23:57:29.000Z | [
"region:us"
] | godoyj | null | null | 0 | 15 | 2023-10-22T21:05:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
atmallen/sharegpt-binary | 2023-10-23T21:50:35.000Z | [
"region:us"
] | atmallen | null | null | 0 | 15 | 2023-10-23T05:40:21 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: model
dtype: string
splits:
- name: test
num_bytes: 1090167
num_examples: 243
download_size: 188810
dataset_size: 1090167
---
# Dataset Card for "sharegpt-binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 583 | [
[
-0.049346923828125,
-0.016693115234375,
0.01371002197265625,
0.0258941650390625,
-0.02484130859375,
0.00640869140625,
0.021392822265625,
-0.015411376953125,
0.05828857421875,
0.016204833984375,
-0.060638427734375,
-0.050628662109375,
-0.06268310546875,
-0.03... |
kardosdrur/folketinget-discussions | 2023-10-24T11:53:06.000Z | [
"license:mit",
"region:us"
] | kardosdrur | null | null | 0 | 15 | 2023-10-24T08:48:35 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: comment
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 7032676.035654362
num_examples: 3814
- name: test
num_bytes: 1759090.9643456375
num_examples: 954
download_size: 4898174
dataset_size: 8791767.0
---
# Discussions in Folketinget
The dataset is based on data from Folketinget (the Danish Parliament) in the Danish Gigaword corpus.
Comment-response pairs were extracted purely heuristically and have not been manually evaluated.
The dataset was created to aid the training of sentence-transformer models in the Danish Foundation Models project.
It is currently not recommended for production use.
| 848 | [
[
-0.0438232421875,
-0.045623779296875,
0.032958984375,
0.00882720947265625,
-0.015838623046875,
0.01020050048828125,
-0.01287078857421875,
-0.03515625,
0.04327392578125,
0.06109619140625,
-0.05572509765625,
-0.011810302734375,
-0.033782958984375,
0.0235443115... |
zelros/pj-sg | 2023-11-02T12:27:32.000Z | [
"region:us"
] | zelros | null | null | 0 | 15 | 2023-10-24T19:49:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
BEE-spoke-data/Long-Data-Col-rp_pile_pretrain | 2023-10-26T02:01:57.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:feature-extraction",
"size_categories:1M<n<10M",
"source_datasets:togethercomputer/Long-Data-Collections",
"license:other",
"long boi",
"region:us"
] | BEE-spoke-data | null | null | 0 | 15 | 2023-10-25T01:52:15 | ---
license: other
size_categories:
- 1M<n<10M
source_datasets: togethercomputer/Long-Data-Collections
task_categories:
- text-generation
- fill-mask
- feature-extraction
configs:
- config_name: cleaned
data_files:
- split: train
path: cleaned/train-*
- config_name: cleaned-dedup
data_files:
- split: train
path: cleaned-dedup/train-*
- config_name: cleaned-dedup-en
data_files:
- split: train
path: cleaned-dedup-en/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
- config_name: cleaned
features:
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 16969436991
num_examples: 2759555
download_size: 9521997027
dataset_size: 16969436991
- config_name: cleaned-dedup
features:
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 13009681081
num_examples: 2712907
download_size: 7319241627
dataset_size: 13009681081
- config_name: cleaned-dedup-en
features:
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 12723856310.202166
num_examples: 2653304
download_size: 7180653999
dataset_size: 12723856310.202166
- config_name: default
features:
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 16821991568.354612
num_examples: 2759555
download_size: 9685120636
dataset_size: 16821991568.354612
tags:
- long boi
---
# Dataset Card for "Long-Data-Col-rp_pile_pretrain"
This dataset is a subset of [togethercomputer/Long-Data-Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections), namely the `rp_sub.jsonl.zst` and `pile_sub.jsonl.zst` files from the `pretrain` split.
Like the source dataset, we do not attempt to modify/change licenses of underlying data. Refer to the source dataset (and its source datasets) for details.
## changes
1. Since this is intended to be a "long text dataset", we drop all rows whose `text` field contains 250 characters or fewer. This removes approximately 100k rows from the raw data. The resulting stats are below.
| | text_len |
|:------|----------------:|
| count | 2.75956e+06 |
| mean | 6195.11 |
| std | 56364.9 |
| min | 251 |
| 25% | 1102 |
| 50% | 2147 |
| 75% | 4762 |
| max | 4.66452e+07 |
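The length filter described in the changes note can be sketched in plain Python (a hedged sketch over toy rows, not the exact pipeline; the actual filtering presumably used the `datasets` library's `filter` method):

```python
# Sketch of the length filter described above: drop every row whose
# "text" field is 250 characters or shorter. Toy rows, not real data.
rows = [
    {"text": "too short", "meta": "a"},
    {"text": "x" * 251,   "meta": "b"},
    {"text": "y" * 250,   "meta": "c"},  # exactly 250 chars: dropped
]

kept = [r for r in rows if len(r["text"]) > 250]
print([r["meta"] for r in kept])  # ['b']
```

With a loaded `datasets.Dataset`, the equivalent call would be `ds.filter(lambda ex: len(ex["text"]) > 250)`.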
--- | 2,474 | [
[
-0.04803466796875,
-0.0294036865234375,
0.0160980224609375,
0.0128631591796875,
-0.056640625,
0.003936767578125,
-0.016143798828125,
-0.0244140625,
0.049285888671875,
0.041412353515625,
-0.062469482421875,
-0.055267333984375,
-0.0433349609375,
0.016281127929... |