id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
advancedcv/Food500Cap_test | advancedcv | 2023-10-24T20:01:10Z | 113 | 0 | null | [
"region:us"
] | 2023-10-24T20:01:10Z | 2023-10-24T19:59:07.000Z | 2023-10-24T19:59:07 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
magnus42/GPTWebScrapingPythonCode | magnus42 | 2023-11-23T16:47:35Z | 113 | 0 | null | [
"region:us"
] | 2023-11-23T16:47:35Z | 2023-10-26T08:12:12.000Z | 2023-10-26T08:12:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: "train.json"
- split: test
path: "test.json"
- split: validation
path: "validation.json"
- split: checked
path: "dataset_checked.json"
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arpanetus/hackernews_title_upvote_0 | arpanetus | 2023-11-18T13:45:05Z | 113 | 0 | null | [
"region:us"
] | 2023-11-18T13:45:05Z | 2023-11-18T09:52:12.000Z | 2023-11-18T09:52:12 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 13840984
num_examples: 15064
download_size: 8346861
dataset_size: 13840984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hackernews_title_upvote_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.33948078751564026,
-0.2287674844264984,
0.31981316208839417,
0.48500144481658936,
-0.33578187227249146,
0.2679192125797272,
0.3978283703327179,
0.15984030067920685,
1.063353419303894,
0.5301699042320251,
-0.5724952816963196,
-0.793289065361023,
-0.5223082304000854,
-0.26926714181900024,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qanastek/ECDC | qanastek | 2022-10-23T04:59:32Z | 112 | 1 | null | [
"task_categories:translation",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:en-sv",
"multilinguality:en-pl",
"multilinguality:en-hu",
"multilinguality:en-lt",
"multilinguality:en-sk",
"multilinguality:en-ga",
"mult... | 2022-10-23T04:59:32Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- en-sv
- en-pl
- en-hu
- en-lt
- en-sk
- en-ga
- en-fr
- en-cs
- en-el
- en-it
- en-lv
- en-da
- en-nl
- en-bg
- en-is
- en-ro
- en-no
- en-pt
- en-es
- en-et
- en-mt
- en-sl
- en-fi
- en-de
pretty_name: ECDC
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
- machine-translation
task_ids:
- translation
- machine-translation
---
# ECDC : An overview of the European Union's highly multilingual parallel corpora
## Table of Contents
- [ECDC : An overview of the European Union's highly multilingual parallel corpora](#ecdc--an-overview-of-the-european-unions-highly-multilingual-parallel-corpora)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction
- **Repository:** https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction
- **Paper:** https://dl.acm.org/doi/10.1007/s10579-014-9277-0
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
In October 2012, the European Union (EU) agency 'European Centre for Disease Prevention and Control' (ECDC) released a translation memory (TM), i.e. a collection of sentences and their professionally produced translations, in twenty-five languages. The data is distributed via the [web pages of the EC's Joint Research Centre (JRC)](https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
The corpus consists of pairs of source and target sentences covering the 25 languages listed below (24 English-to-target language pairs).
**List of languages:** `English (en)`, `Swedish (sv)`, `Polish (pl)`, `Hungarian (hu)`, `Lithuanian (lt)`, `Latvian (lv)`, `German (de)`, `Finnish (fi)`, `Slovak (sk)`, `Slovenian (sl)`, `French (fr)`, `Czech (cs)`, `Danish (da)`, `Italian (it)`, `Maltese (mt)`, `Dutch (nl)`, `Portuguese (pt)`, `Romanian (ro)`, `Spanish (es)`, `Estonian (et)`, `Bulgarian (bg)`, `Greek (el)`, `Irish (ga)`, `Icelandic (is)` and `Norwegian (no)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset

# Each language pair is a separate configuration; "en-it" selects English-Italian.
dataset = load_dataset("qanastek/ECDC", "en-it", split='train', download_mode='force_redownload')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```plain
key,lang,source_text,target_text
doc_0,en-bg,Vaccination against hepatitis C is not yet available.,Засега няма ваксина срещу хепатит С.
doc_1355,en-bg,Varicella infection,Инфекция с варицела
doc_2349,en-bg,"If you have any questions about the processing of your e-mail and related personal data, do not hesitate to include them in your message.","Ако имате въпроси относно обработката на вашия адрес на електронна поща и свързаните лични данни, не се колебайте да ги включите в съобщението си."
doc_192,en-bg,Transmission can be reduced especially by improving hygiene in food production handling.,Предаването на инфекцията може да бъде ограничено особено чрез подобряване на хигиената при манипулациите в хранителната индустрия.
```
### Data Fields
**key**: The document identifier (`String`).
**lang**: The source-target language pair (`String`).
**source_text**: The source text (`String`).
**target_text**: The target text (`String`).
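A minimal access sketch for these fields, assuming the `en-it` configuration as in the loading example above:
```python
from datasets import load_dataset

dataset = load_dataset("qanastek/ECDC", "en-it", split="train")

# Print the first three sentence pairs using the fields described above.
for row in dataset.select(range(3)):
    print(row["key"], row["lang"])
    print("  source:", row["source_text"])
    print("  target:", row["target_text"])
```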
### Data Splits
|lang | key |
|-----|-----|
|en-bg|2567 |
|en-cs|2562 |
|en-da|2577 |
|en-de|2560 |
|en-el|2530 |
|en-es|2564 |
|en-et|2581 |
|en-fi|2617 |
|en-fr|2561 |
|en-ga|1356 |
|en-hu|2571 |
|en-is|2511 |
|en-it|2534 |
|en-lt|2545 |
|en-lv|2542 |
|en-mt|2539 |
|en-nl|2510 |
|en-no|2537 |
|en-pl|2546 |
|en-pt|2531 |
|en-ro|2555 |
|en-sk|2525 |
|en-sl|2545 |
|en-sv|2527 |
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction).
### Source Data
#### Who are the source language producers?
All of the data in this corpus has been published by the [JRC](https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction).
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__Hugging Face ECDC__: Labrak Yanis, Dufour Richard (Not affiliated with the original corpus)
__An overview of the European Union's highly multilingual parallel corpora__: Steinberger Ralf, Mohamed Ebrahim, Alexandros Poulis, Manuel Carrasco-Benitez, Patrick Schlüter, Marek Przybyszewski & Signe Gilbro.
### Licensing Information
By downloading or using the ECDC-Translation Memory, you are bound by the [ECDC-TM usage conditions (PDF)](https://wt-public.emm4u.eu/Resources/ECDC-TM/2012_10_Terms-of-Use_ECDC-TM.pdf).
### No Warranty
Each Work is provided ‘as is’ without, to the full extent permitted by law, representations,
warranties, obligations and liabilities of any kind, either express or implied, including, but
not limited to, any implied warranty of merchantability, integration, satisfactory quality and
fitness for a particular purpose.
Except in the cases of wilful misconduct or damages directly caused to natural persons, the
Owner will not be liable for any incidental, consequential, direct or indirect damages,
including, but not limited to, the loss of data, lost profits or any other financial loss arising
from the use of, or inability to use, the Work even if the Owner has been notified of the
possibility of such loss, damages, claims or costs, or for any claim by any third party. The
Owner may be liable under national statutory product liability laws as far as such laws apply
to the Work.
### Citation Information
Please cite the following paper when using this dataset.
```latex
@article{10.1007/s10579-014-9277-0,
author = {Steinberger, Ralf and Ebrahim, Mohamed and Poulis, Alexandros and Carrasco-Benitez, Manuel and Schl\"{u}ter, Patrick and Przybyszewski, Marek and Gilbro, Signe},
title = {An Overview of the European Union's Highly Multilingual Parallel Corpora},
year = {2014},
issue_date = {December 2014},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
volume = {48},
number = {4},
issn = {1574-020X},
url = {https://doi.org/10.1007/s10579-014-9277-0},
doi = {10.1007/s10579-014-9277-0},
abstract = {Starting in 2006, the European Commission's Joint Research Centre and other European Union organisations have made available a number of large-scale highly-multilingual parallel language resources. In this article, we give a comparative overview of these resources and we explain the specific nature of each of them. This article provides answers to a number of question, including: What are these linguistic resources? What is the difference between them? Why were they originally created and why was the data released publicly? What can they be used for and what are the limitations of their usability? What are the text types, subject domains and languages covered? How to avoid overlapping document sets? How do they compare regarding the formatting and the translation alignment? What are their usage conditions? What other types of multilingual linguistic resources does the EU have? This article thus aims to clarify what the similarities and differences between the various resources are and what they can be used for. It will also serve as a reference publication for those resources, for which a more detailed description has been lacking so far (EAC-TM, ECDC-TM and DGT-Acquis).},
journal = {Lang. Resour. Eval.},
month = {dec},
pages = {679–707},
numpages = {29},
keywords = {DCEP, EAC-TM, EuroVoc, JRC EuroVoc Indexer JEX, Parallel corpora, DGT-TM, Eur-Lex, Highly multilingual, Linguistic resources, DGT-Acquis, European Union, ECDC-TM, JRC-Acquis, Translation memory}
}
```
| [
-0.5333425402641296,
-0.5446415543556213,
0.329787939786911,
0.18800713121891022,
-0.22400254011154175,
0.08628110587596893,
-0.6380418539047241,
-0.3985064923763275,
0.31210750341415405,
0.28125736117362976,
-0.6054404377937317,
-1.0099096298217773,
-0.48872387409210205,
0.567888915538787... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
strombergnlp/broad_twitter_corpus | strombergnlp | 2022-07-01T15:46:36Z | 112 | 4 | broad-twitter-corpus | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-07-01T15:46:36Z | 2022-04-28T09:58:09.000Z | 2022-04-28T09:58:09 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: broad-twitter-corpus
pretty_name: Broad Twitter Corpus
---
# Dataset Card for broad_twitter_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Repository:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Paper:** [http://www.aclweb.org/anthology/C16-1111](http://www.aclweb.org/anthology/C16-1111)
- **Leaderboard:** [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities.
See the paper, [Broad Twitter Corpus: A Diverse Named Entity Recognition Resource](http://www.aclweb.org/anthology/C16-1111), for details.
### Supported Tasks and Leaderboards
* Named Entity Recognition
* On PWC: [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
### Languages
English from UK, US, Australia, Canada, Ireland, New Zealand; `bcp47:en`
## Dataset Structure
### Data Instances
Feature |Count
---|---:
Documents |9 551
Tokens |165 739
Person entities |5 271
Location entities |3 114
Organization entities |3 732
### Data Fields
Each tweet contains an ID, a list of tokens, and a list of NER tags
- `id`: a `string` feature.
- `tokens`: a `list` of `strings`
- `ner_tags`: a `list` of class IDs (`int`s) representing the NER class:
```
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
```
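As a small illustration, the integer tags can be decoded back to the labels above with a plain lookup table. This is a hedged sketch: the `train` split name is an assumption about the Hub packaging of this dataset.
```python
from datasets import load_dataset

# BIO label list, in the same ID order as documented above.
LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

dataset = load_dataset("strombergnlp/broad_twitter_corpus", split="train")

example = dataset[0]
# Pair each token with its decoded NER label.
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{LABELS[tag_id]}")
```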
### Data Splits
Section|Region|Collection period|Description|Annotators|Tweet count
---|---|---|---|---|---:
A | UK| 2012.01| General collection |Expert| 1000
B |UK |2012.01-02 |Non-directed tweets |Expert |2000
E |Global| 2014.07| Related to MH17 disaster| Crowd & expert |200
F |Stratified |2009-2014| Twitterati |Crowd & expert |2000
G |Stratified| 2011-2014| Mainstream news| Crowd & expert| 2351
H |Non-UK| 2014 |General collection |Crowd & expert |2000
The most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived.
**Test**: Section F
**Development**: Section H (the paper says "second half of Section H" but ordinality could be ambiguous, so it all goes in. Bonne chance)
**Training**: everything else
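A minimal loading sketch for the splits described above, assuming the Hub version of the corpus already materialises them as `train`/`validation`/`test`:
```python
from datasets import load_dataset

# train: sections A, B, E, G plus half of H; validation: the rest of H; test: section F
# (the split names are an assumption about the Hub packaging, per the notes above).
btc = load_dataset("strombergnlp/broad_twitter_corpus")
print({name: len(split) for name, split in btc.items()})
```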
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
### Citation Information
```
@inproceedings{derczynski2016broad,
title={Broad twitter corpus: A diverse named entity recognition resource},
author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian},
booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers},
pages={1169--1179},
year={2016}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| [
-0.571506679058075,
-0.6783424019813538,
0.22221536934375763,
0.2693931758403778,
-0.2752426564693451,
0.2584401071071625,
-0.5791414976119995,
-0.5375292897224426,
0.6024042367935181,
0.26595789194107056,
-0.5979828834533691,
-0.9988105893135071,
-0.6679815649986267,
0.14541885256767273,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
s3prl/mini_voxceleb1 | s3prl | 2022-06-19T18:49:50Z | 112 | 0 | null | [
"region:us"
] | 2022-06-19T18:49:50Z | 2022-06-19T12:06:16.000Z | 2022-06-19T12:06:16 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
merkalo-ziri/qa_main | merkalo-ziri | 2022-08-24T08:54:01Z | 112 | 0 | null | [
"task_categories:question-answering",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:rus",
"license:other",
"region:us"
] | 2022-08-24T08:54:01Z | 2022-08-22T07:03:04.000Z | 2022-08-22T07:03:04 | ---
annotations_creators:
- found
language:
- rus
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: qa_main
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.47841677069664,
-0.5084842443466187,
0.14602938294410706,
0.278889000415802,
-0.2170247584581375,
0.24832050502300262,
-0.3366999626159668,
-0.3758932054042816,
0.6720379590988159,
0.6457639932632446,
-0.9167346358299255,
-1.2200126647949219,
-0.7551794052124023,
0.07273735105991364,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1250000-1300000 | tomekkorbak | 2022-10-05T00:28:11Z | 112 | 0 | null | [
"region:us"
] | 2022-10-05T00:28:11Z | 2022-10-05T00:28:02.000Z | 2022-10-05T00:28:02 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alkzar90/cell_benchmark | alkzar90 | 2023-01-23T21:36:52Z | 112 | 0 | null | [
"region:us"
] | 2023-01-23T21:36:52Z | 2022-11-06T20:09:50.000Z | 2022-11-06T20:09:50 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qanastek/frenchmedmcqa | qanastek | 2023-06-08T12:39:22Z | 112 | 5 | frenchmedmcqa | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1k<n<10k",
"source_datasets:original",
"lan... | 2023-06-08T12:39:22Z | 2023-01-08T20:22:47.000Z | 2023-01-08T20:22:47 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- fr
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1k<n<10k
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: frenchmedmcqa
pretty_name: FrenchMedMCQA
---
# Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain
## Table of Contents
- [Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain](#dataset-card-for-frenchmedmcqa--a-french-multiple-choice-question-answering-corpus-for-medical-domain)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Homepage:** https://deft2023.univ-avignon.fr/
- **Repository:** https://deft2023.univ-avignon.fr/
- **Paper:** [FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain](https://hal.science/hal-03824241/document)
- **Leaderboard:** Coming soon
- **Point of Contact:** [Yanis LABRAK](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers.
Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s).
We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.
### Supported Tasks and Leaderboards
Multiple-Choice Question Answering (MCQA)
### Languages
The questions and answers are available in French.
## Dataset Structure
### Data Instances
```json
{
"id": "1863462668476003678",
"question": "Parmi les propositions suivantes, laquelle (lesquelles) est (sont) exacte(s) ? Les chylomicrons plasmatiques :",
"answers": {
"a": "Sont plus riches en cholestérol estérifié qu'en triglycérides",
"b": "Sont synthétisés par le foie",
"c": "Contiennent de l'apolipoprotéine B48",
"d": "Contiennent de l'apolipoprotéine E",
"e": "Sont transformés par action de la lipoprotéine lipase"
},
"correct_answers": [
"c",
"d",
"e"
],
"subject_name": "pharmacie",
"type": "multiple"
}
```
### Data Fields
- `id`: a string question identifier for each example
- `question`: question text (a string)
- `answer_a`: Option A
- `answer_b`: Option B
- `answer_c`: Option C
- `answer_d`: Option D
- `answer_e`: Option E
- `correct_answers`: the list of correct options (e.g., C, D and E in the instance above)
- `choice_type` ({"single", "multiple"}): Question choice type.
- "single": Single-choice question, where each choice contains a single option.
- "multiple": Multi-choice question, where each choice contains a combination of multiple options.
### Data Splits
| # Answers | Training | Validation | Test | Total |
|:---------:|:--------:|:----------:|:----:|:-----:|
| 1 | 595 | 164 | 321 | 1,080 |
| 2 | 528 | 45 | 97 | 670 |
| 3 | 718 | 71 | 141 | 930 |
| 4 | 296 | 30 | 56 | 382 |
| 5 | 34 | 2 | 7 | 43 |
| Total | 2171 | 312 | 622 | 3,105 |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The questions and their associated candidate answer(s) were collected from real French pharmacy exams on the remede website. Questions and answers were manually created by medical experts and used during examinations. The dataset is composed of 2,025 questions with multiple answers and 1,080 with a single one, for a total of 3,105 questions. Each instance of the dataset contains an identifier, a question, five options (labeled from A to E) and correct answer(s). The average question length is 14.17 tokens and the average answer length is 6.44 tokens. The vocabulary size is 13k words, of which 3.8k are estimated medical domain-specific words (i.e. words related to the medical field). We find an average of 2.49 medical domain-specific words in each question (17% of the words) and 2 in each answer (36% of the words). On average, a medical domain-specific word is present in 2 questions and in 8 answers.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
The dataset was created by Labrak Yanis and Bazoge Adrien and Dufour Richard and Daille Béatrice and Gourraud Pierre-Antoine and Morin Emmanuel and Rouvier Mickael.
### Licensing Information
Apache 2.0
### Citation Information
If you find this useful in your research, please consider citing the dataset paper:
```latex
@inproceedings{labrak-etal-2022-frenchmedmcqa,
title = "{F}rench{M}ed{MCQA}: A {F}rench Multiple-Choice Question Answering Dataset for Medical domain",
author = "Labrak, Yanis and
Bazoge, Adrien and
Dufour, Richard and
Daille, Beatrice and
Gourraud, Pierre-Antoine and
Morin, Emmanuel and
Rouvier, Mickael",
booktitle = "Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.louhi-1.5",
pages = "41--46",
abstract = "This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.",
}
```
### Contact
Please contact [Yanis LABRAK](https://github.com/qanastek) for more information about this dataset.
| [
-0.500690221786499,
-0.816687822341919,
0.5514360070228577,
-0.1522037833929062,
0.0008780680946074426,
-0.006420601159334183,
0.10722178965806961,
-0.07081471383571625,
0.4965728223323822,
0.6670675277709961,
-0.741714358329773,
-0.7398572564125061,
-0.5869491100311279,
0.3826900422573089... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kuanhuggingface/promptTTS_encodec_v2_small | kuanhuggingface | 2023-06-12T05:45:16Z | 112 | 0 | null | [
"region:us"
] | 2023-06-12T05:45:16Z | 2023-06-12T05:36:48.000Z | 2023-06-12T05:36:48 | ---
dataset_info:
features:
- name: file_id
dtype: string
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 2975164369
num_examples: 47270
- name: validation
num_bytes: 97855975
num_examples: 1349
- name: test
num_bytes: 80754157
num_examples: 1350
download_size: 437609990
dataset_size: 3153774501
---
# Dataset Card for "promptTTS_encodec_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.43004119396209717,
-0.1454964578151703,
0.23096200823783875,
0.2569301128387451,
-0.2397879958152771,
-0.06281925737857819,
0.26723432540893555,
0.040754444897174835,
0.7244606018066406,
0.46310436725616455,
-0.8126404881477356,
-0.7005292773246765,
-0.6786714792251587,
0.03183600306510... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
textminr/topic-labeling | textminr | 2023-11-15T19:51:26Z | 112 | 0 | null | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2023-11-15T19:51:26Z | 2023-11-11T23:15:02.000Z | 2023-11-11T23:15:02 | ---
language:
- en
size_categories:
- 10K<n<100K
configs:
- config_name: base
default: true
data_files:
- split: train
path: base/train.jsonl
- split: validation
path: base/validation.jsonl
- split: test
path: base/test.jsonl
- config_name: mn-ds
data_files:
- split: train
path: mn-ds/train.jsonl
- config_name: original
data_files: original/topics.jsonl
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arpanetus/hackernews_title_upvote | arpanetus | 2023-11-18T13:45:17Z | 112 | 0 | null | [
"region:us"
] | 2023-11-18T13:45:17Z | 2023-11-18T09:04:14.000Z | 2023-11-18T09:04:14 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 13840984
num_examples: 15064
download_size: 8346861
dataset_size: 13840984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hackernews_title_upvote"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3727652430534363,
-0.13896973431110382,
0.2969702482223511,
0.48688697814941406,
-0.37410902976989746,
0.32790327072143555,
0.3135901391506195,
0.19748109579086304,
0.9647828340530396,
0.4912456274032593,
-0.5654124617576599,
-0.7602953910827637,
-0.5664682388305664,
-0.2812160849571228... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-e1907042-7494830 | autoevaluate | 2022-06-26T11:26:14Z | 111 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-06-26T11:26:14Z | 2022-06-26T11:25:38.000Z | 2022-06-26T11:25:38 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- clinc_oos
eval_info:
task: multi_class_classification
model: MhF/distilbert-base-uncased-distilled-clinc
metrics: []
dataset_name: clinc_oos
dataset_config: small
dataset_split: test
col_mapping:
text: text
target: intent
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: MhF/distilbert-base-uncased-distilled-clinc
* Dataset: clinc_oos
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | [
-0.3127722144126892,
-0.33482617139816284,
0.4022495746612549,
0.11272391676902771,
-0.012696703895926476,
-0.07383930683135986,
-0.08405724167823792,
-0.41037705540657043,
0.014753283932805061,
0.37788429856300354,
-0.8677408695220947,
-0.2742251753807068,
-0.8333582878112793,
0.111756667... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hkadxqq/spooky-author-identification | hkadxqq | 2022-09-11T01:43:50Z | 111 | 0 | null | [
"region:us"
] | 2022-09-11T01:43:50Z | 2022-09-11T01:04:53.000Z | 2022-09-11T01:04:53 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/VizWiz_train | Multimodal-Fatima | 2023-03-17T21:22:05Z | 111 | 0 | null | [
"region:us"
] | 2023-03-17T21:22:05Z | 2023-03-17T21:14:38.000Z | 2023-03-17T21:14:38 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: filename
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_type
dtype: string
- name: answerable
dtype: int32
- name: id_image
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
splits:
- name: train
num_bytes: 9906518637.0
num_examples: 20523
download_size: 9880125036
dataset_size: 9906518637.0
---
# Dataset Card for "VizWiz_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6938605308532715,
0.08331330865621567,
0.012519619427621365,
0.2964582145214081,
-0.23676787316799164,
-0.045225467532873154,
0.2098783701658249,
-0.06640570610761642,
0.8294991254806519,
0.4749615490436554,
-0.9880982637405396,
-0.5066048502922058,
-0.45243021845817566,
-0.395125329494... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BAAI/COIG-PC | BAAI | 2023-10-14T10:38:40Z | 111 | 212 | null | [
"language:zh",
"license:unknown",
"region:us"
] | 2023-10-14T10:38:40Z | 2023-06-08T05:41:11.000Z | 2023-06-08T05:41:11 | ---
language:
- zh
license: unknown
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_prompt: "北京智源人工智能研究院(以下简称“我们”或“研究院”)通过BAAI DataHub(data.baai.ac.cn)和COIG-PC\
\ HuggingFace仓库(https://huggingface.co/datasets/BAAI/COIG-PC)向您提供开源数据集(以下或称“数据集”),您可通过下载的方式获取您所需的开源数据集,并在遵守各原始数据集使用规则前提下,基于学习、研究、商业等目的使用相关数据集。\n\
在您获取(包括但不限于访问、下载、复制、传播、使用等处理数据集的行为)开源数据集前,您应认真阅读并理解本《COIG-PC开源数据集使用须知与免责声明》(以下简称“本声明”)。一旦您获取开源数据集,无论您的获取方式为何,您的获取行为均将被视为对本声明全部内容的认可。\n\
1.\t平台的所有权与运营权\n您应充分了解并知悉,BAAI DataHub和COIG-PC HuggingFace仓库(包括当前版本及全部历史版本)的所有权与运营权归智源人工智能研究院所有,智源人工智能研究院对本平台/本工具及开源数据集开放计划拥有最终解释权和决定权。\n\
您知悉并理解,基于相关法律法规更新和完善以及我们需履行法律合规义务的客观变化,我们保留对本平台/本工具进行不定时更新、维护,或者中止乃至永久终止提供本平台/本工具服务的权利。我们将在合理时间内将可能发生前述情形通过公告或邮件等合理方式告知您,您应当及时做好相应的调整和安排,但我们不因发生前述任何情形对您造成的任何损失承担任何责任。\n\
2.\t开源数据集的权利主张\n为了便于您基于学习、研究、商业的目的开展数据集获取、使用等活动,我们对第三方原始数据集进行了必要的格式整合、数据清洗、标注、分类、注释等相关处理环节,形成可供本平台/本工具用户使用的开源数据集。\n\
您知悉并理解,我们不对开源数据集主张知识产权中的相关财产性权利,因此我们亦无相应义务对开源数据集可能存在的知识产权进行主动识别和保护,但这不意味着我们放弃开源数据集主张署名权、发表权、修改权和保护作品完整权(如有)等人身性权利。而原始数据集可能存在的知识产权及相应合法权益由原权利人享有。\n\
此外,向您开放和使用经合理编排、加工和处理后的开源数据集,并不意味着我们对原始数据集知识产权、信息内容等真实、准确或无争议的认可,您应当自行筛选、仔细甄别,使用经您选择的开源数据集。您知悉并同意,研究院对您自行选择使用的原始数据集不负有任何无缺陷或无瑕疵的承诺义务或担保责任。\n\
3.\t开源数据集的使用限制\n您使用数据集不得侵害我们或任何第三方的合法权益(包括但不限于著作权、专利权、商标权等知识产权与其他权益)。\n获取开源数据集后,您应确保对开源数据集的使用不超过原始数据集的权利人以公示或协议等形式明确规定的使用规则,包括原始数据的使用范围、目的和合法用途等。我们在此善意地提请您留意,如您对开源数据集的使用超出原始数据集的原定使用范围及用途,您可能面临侵犯原始数据集权利人的合法权益例如知识产权的风险,并可能承担相应的法律责任。\n\
4.\t个人信息保护\n基于技术限制及开源数据集的公益性质等客观原因,我们无法保证开源数据集中不包含任何个人信息,我们不对开源数据集中可能涉及的个人信息承担任何法律责任。\n\
如开源数据集涉及个人信息,我们不对您使用开源数据集可能涉及的任何个人信息处理行为承担法律责任。我们在此善意地提请您留意,您应依据《个人信息保护法》等相关法律法规的规定处理个人信息。\n\
为了维护信息主体的合法权益、履行可能适用的法律、行政法规的规定,如您在使用开源数据集的过程中发现涉及或者可能涉及个人信息的内容,应立即停止对数据集中涉及个人信息部分的使用,并及时通过“6.\
\ 投诉与通知”中载明的联系我们。\n5.\t信息内容管理\n我们不对开源数据集可能涉及的违法与不良信息承担任何法律责任。\n如您在使用开源数据集的过程中发现开源数据集涉及或者可能涉及任何违法与不良信息,您应立即停止对数据集中涉及违法与不良信息部分的使用,并及时通过“6.\
\ 投诉与通知”中载明的联系我们。\n6.\t投诉与通知\n如您认为开源数据集侵犯了您的合法权益,您可通过010-50955974联系我们,我们会及时依法处理您的主张与投诉。\n\
为了处理您的主张和投诉,我们可能需要您提供联系方式、侵权证明材料以及身份证明等材料。请注意,如果您恶意投诉或陈述失实,您将承担由此造成的全部法律责任(包括但不限于合理的费用赔偿等)。\n\
7.\t责任声明\n您理解并同意,基于开源数据集的性质,数据集中可能包含来自不同来源和贡献者的数据,其真实性、准确性、客观性等可能会有所差异,我们无法对任何数据集的可用性、可靠性等做出任何承诺。\n\
在任何情况下,我们不对开源数据集可能存在的个人信息侵权、违法与不良信息传播、知识产权侵权等任何风险承担任何法律责任。\n在任何情况下,我们不对您因开源数据集遭受的或与之相关的任何损失(包括但不限于直接损失、间接损失以及可得利益损失等)承担任何法律责任。\n\
8.\t其他\n开源数据集处于不断发展、变化的阶段,我们可能因业务发展、第三方合作、法律法规变动等原因更新、调整所提供的开源数据集范围,或中止、暂停、终止开源数据集提供业务。\n"
extra_gated_fields:
Name: text
Affiliation: text
Country: text
I agree to use this model for non-commercial use ONLY: checkbox
extra_gated_button_content: Acknowledge license
configs:
- config_name: default
data_files:
- split: full
path: data/full-*
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
- split: Top50PerTask
path: data/Top50PerTask-*
- split: Top100PerTask
path: data/Top100PerTask-*
- split: Top200PerTask
path: data/Top200PerTask-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: split
dtype: string
- name: task_name_in_eng
dtype: string
- name: task_type
struct:
- name: major
sequence: string
- name: minor
sequence: string
- name: domain
sequence: string
- name: other
dtype: string
- name: filename
dtype: string
splits:
- name: full
num_bytes: 198933665241
num_examples: 321332879
- name: train
num_bytes: 135575192364
num_examples: 208529583
- name: valid
num_bytes: 1703151331
num_examples: 2087767
- name: test
num_bytes: 5763748490
num_examples: 8094740
- name: Top50PerTask
num_bytes: 113823936
num_examples: 63643
- name: Top100PerTask
num_bytes: 222242916
num_examples: 127158
- name: Top200PerTask
num_bytes: 435753269
num_examples: 253558
download_size: 275132519
dataset_size: 342747577547
---
# COIG Prompt Collection
## License
**Default Licensing for Sub-Datasets Without Specific License Declaration**: In instances where sub-datasets within the COIG-PC Dataset do not have a specific license declaration, the Apache License 2.0 (Apache-2.0) will be the applicable licensing terms by default.
**Precedence of Declared Licensing for Sub-Datasets**: For any sub-dataset within the COIG-PC Dataset that has an explicitly declared license, the terms and conditions of the declared license shall take precedence and govern the usage of that particular sub-dataset.
Users and developers utilizing the COIG-PC Dataset must ensure compliance with the licensing terms as outlined above. It is imperative to review and adhere to the specified licensing conditions of each sub-dataset, as they may vary.
## What is COIG-PC?
The COIG-PC Dataset is a meticulously curated and comprehensive collection of Chinese tasks and data, designed to facilitate the fine-tuning and optimization of language models for Chinese natural language processing (NLP). The dataset aims to provide researchers and developers with a rich set of resources to improve the capabilities of language models in handling Chinese text, which can be utilized in various fields such as text generation, information extraction, sentiment analysis, machine translation, among others.
If you think COIG-PC is too huge, please refer to [COIG-PC-Lite](https://huggingface.co/datasets/BAAI/COIG-PC-Lite) which is a subset of COIG-PC with only 200 samples from each task file.
## Why COIG-PC?
The COIG-PC Dataset is an invaluable resource for the domain of natural language processing (NLP) for various compelling reasons:
**Addressing Language Complexity**: Chinese is known for its intricacy, with a vast array of characters and diverse grammatical structures. A specialized dataset like COIG-PC, which is tailored for the Chinese language, is essential to adequately address these complexities during model training.
**Comprehensive Data Aggregation**: The COIG-PC Dataset is a result of an extensive effort in integrating almost all available Chinese datasets in the market. This comprehensive aggregation makes it one of the most exhaustive collections for Chinese NLP.
**Data Deduplication and Normalization**: The COIG-PC Dataset underwent rigorous manual processing to eliminate duplicate data and perform normalization. This ensures that the dataset is free from redundancy, and the data is consistent and well-structured, making it more user-friendly and efficient for model training.
**Fine-tuning and Optimization**: The dataset’s instruction-based phrasing facilitates better fine-tuning and optimization of language models. This structure allows models to better understand and execute tasks, which is particularly beneficial in improving performance on unseen or novel tasks.
The COIG-PC Dataset, with its comprehensive aggregation, meticulous selection, deduplication, and normalization of data, stands as an unmatched resource for training and optimizing language models tailored for the Chinese language and culture. It addresses the unique challenges of Chinese language processing and serves as a catalyst for advancements in Chinese NLP.
## Who builds COIG-PC?
COIG-PC is built on a foundation dataset furnished by stardust.ai, an aggregation of data collected from the Internet.
It is the result of a collaborative effort involving engineers and experts from over twenty distinguished universities both domestically and internationally. Due to space constraints, it is not feasible to list all of them; however, the following are a few notable institutions among the collaborators:
- Beijing Academy of Artificial Intelligence, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC/resolve/main/assets/baai.png" alt="BAAI" height="100" width="150">
- Peking University, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC/resolve/main/assets/pku.png" alt="PKU" height="100" width="200">
- The Hong Kong University of Science and Technology (HKUST), China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC/resolve/main/assets/hkust.png" alt="HKUST" height="100" width="200">
- The University of Waterloo, Canada
<img src="https://huggingface.co/datasets/BAAI/COIG-PC/resolve/main/assets/waterloo.png" alt="Waterloo" height="100" width="150">
- The University of Sheffield, United Kingdom
<img src="https://huggingface.co/datasets/BAAI/COIG-PC/resolve/main/assets/sheffield.png" alt="Sheffield" height="100" width="200">
- Beijing University of Posts and Telecommunications, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC/resolve/main/assets/bupt.png" alt="BUPT" height="100" width="200">
- [Multimodal Art Projection](https://huggingface.co/m-a-p)
<img src="https://huggingface.co/datasets/BAAI/COIG-PC/resolve/main/assets/map.png" alt="M.A.P" height="100" width="200">
- stardust.ai, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC/resolve/main/assets/stardust.png" alt="stardust.ai" height="100" width="200">
- LinkSoul.AI, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC/resolve/main/assets/linksoul.png" alt="linksoul.ai" height="100" width="200">
For the detailed list of engineers involved in the creation and refinement of COIG-PC, please refer to the paper that will be published subsequently. This paper will provide in-depth information regarding the contributions and the specifics of the dataset’s development process.
## How to use COIG-PC?
COIG-PC is structured in a **.jsonl** file format. Each line in the file represents a single data record and is structured in JSON (JavaScript Object Notation) format. Below is a breakdown of the elements within each line:
**instruction**: This is a text string that provides the instruction for the task. For example, it might tell the model what to do with the input data.
**input**: This is the input data that the model needs to process. In the context of translation, it would be the text that needs to be translated.
**output**: This contains the expected output data after processing the input. In the context of translation, it would be the translated text.
**split**: Indicates the official split of the original dataset, which is used to categorize data for different phases of model training and evaluation. It can be 'train', 'test', 'valid', etc.
**task_type**: Contains major and minor categories for the dataset. Major categories are broader, while minor categories can be more specific subcategories.
**domain**: Indicates the domain or field to which the data belongs.
**other**: This field can contain additional information or metadata regarding the data record. If there is no additional information, it may be set to null.
### Example
Here is an example of how a line in the COIG-PC dataset might be structured:
```json
{
"instruction": "请把下面的中文句子翻译成英文",
"input": "我爱你。",
"output": "I love you.",
"split": "train",
"task_type": {
"major": ["翻译"],
"minor": ["翻译", "中译英"]
},
"domain": ["通用"],
"other": null
}
```
In this example:
**instruction** tells the model to translate the following Chinese sentence into English.
**input** contains the Chinese text "我爱你" which means "I love you".
**output** contains the expected translation in English: "I love you".
**split** indicates that this data record is part of the training set.
**task_type** specifies that the major category is "Translation" and the minor categories are "Translation" and "Chinese to English".
**domain** specifies that this data record belongs to the general domain.
**other** is set to null as there is no additional information for this data record.
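As a minimal sketch of consuming this layout, the loop below streams a `.jsonl` file and keeps only translation tasks; the file path is a placeholder and the field names follow the record structure described above:
```python
import json

def iter_records(path: str):
    """Yield one parsed record per line of a COIG-PC .jsonl file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# "coig_pc_sample.jsonl" is a placeholder path, not a file name shipped with the dataset.
for record in iter_records("coig_pc_sample.jsonl"):
    if "翻译" in record["task_type"]["major"]:  # keep records whose major category is translation
        print(record["instruction"], "->", record["output"])
```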
## Update: Oct. 8, 2023
- v1.3: Upload all splits to the main branch as arrow datasets. All jsonl files are stored in the raw_json branch now. Remove 152 task files. Add 10 task files. In total, 275 task files updated.
- v1.2: Delete 31 bad task files. Update 99 task files. Rename 2 task files. Add 3 new task files. COIG-PC now has 3339 tasks in total.
- v1.1: Fix 00040-001-000 and 00050-003-000, ignore 00930 and 01373.
- v1.0: First version for arXiv paper.
- v0.6: Upload 28 new tasks. COIG-PC now has 3367 tasks in total.
- v0.5: Upload 202 new tasks. COIG-PC now has 3339 tasks in total.
- v0.4: Upload 1049 new tasks. COIG-PC now has 3137 tasks in total.
- v0.3: Upload 1139 new tasks. COIG-PC now has 2088 tasks in total.
- v0.2: Upload 422 new tasks. COIG-PC now has 949 tasks in total. Add "TopSamplenumPerTask" split where only "Samplenum" samples are used from each task.
- v0.1: Upload 527 tasks.
## COIG-PC Citation
If you want to cite the COIG-PC dataset, you can use this:
```
```
## Contact Us
To contact us feel free to create an Issue in this repository.
| [
-0.45609989762306213,
-0.6719550490379333,
-0.07678374648094177,
0.3211532533168793,
-0.26300108432769775,
-0.09736445546150208,
-0.26247692108154297,
-0.5789996981620789,
0.1812213957309723,
0.22427068650722504,
-0.7859131693840027,
-0.5396737456321716,
-0.3209388554096222,
0.071512892842... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tasksource/scone | tasksource | 2023-06-08T08:58:32Z | 111 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"license:cc0-1.0",
"arxiv:2305.19426",
"region:us"
] | 2023-06-08T08:58:32Z | 2023-06-08T07:22:53.000Z | 2023-06-08T07:22:53 | ---
license: cc0-1.0
task_ids:
- natural-language-inference
task_categories:
- text-classification
dataset_info:
features:
- name: sentence1_edited
dtype: string
- name: sentence2_edited
dtype: string
- name: gold_label_edited
dtype: string
splits:
- name: train
num_bytes: 694572
num_examples: 5010
- name: test
num_bytes: 149006
num_examples: 1000
download_size: 114079
dataset_size: 843578
---
https://github.com/selenashe/ScoNe
NLI subset, original part (excluding the one-scope condition).
```
@misc{she2023scone,
title={ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning},
author={Jingyuan Selena She and Christopher Potts and Samuel R. Bowman and Atticus Geiger},
year={2023},
eprint={2305.19426},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.13345187902450562,
-0.5168943405151367,
0.6389610171318054,
0.05166270211338997,
-0.1263425350189209,
-0.28235048055648804,
-0.24334491789340973,
-0.4198204278945923,
0.3606245219707489,
0.6125720739364624,
-0.7740573883056641,
-0.6502430438995361,
-0.4956287443637848,
0.174859538674354... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PhilSad/celeba-hq-15k | PhilSad | 2023-07-26T15:24:31Z | 111 | 0 | null | [
"region:us"
] | 2023-07-26T15:24:31Z | 2023-07-26T15:22:05.000Z | 2023-07-26T15:22:05 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': female
'1': male
splits:
- name: train
num_bytes: 1463302608.0
num_examples: 15000
download_size: 1463113717
dataset_size: 1463302608.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "celeba-hq-15k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5910642743110657,
-0.3871762752532959,
-0.03709246218204498,
0.2483847737312317,
-0.17166119813919067,
0.11261391639709473,
0.1375328004360199,
-0.2736320197582245,
0.8570911288261414,
0.44096964597702026,
-0.7966576218605042,
-0.7793955206871033,
-0.5159772634506226,
-0.115069739520549... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
veezbo/akkadian_english_corpus | veezbo | 2023-09-30T21:32:28Z | 111 | 1 | null | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | 2023-09-30T21:32:28Z | 2023-09-29T07:22:07.000Z | 2023-09-29T07:22:07 | ---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: English-translated Akkadian Corpus
size_categories:
- 1K<n<10K
---
# Akkadian English Corpus
This dataset is a cleaned corpus of English-translated Akkadian. It can be, and has been, used for text-generation tasks, for example to fine-tune LLMs.
## How it was generated
Please visit my [repo](https://github.com/veezbo/akkadian_english_corpus) on GitHub, which explains the steps that were taken to prepare this dataset for a text generation task.
At a high level, these are the steps that were taken (a minimal cleaning sketch follows the list):
- Sourced a high-quality dataset of Akkadian translated into English by experts
- Enforced a minimum line length
- Removed duplicate lines
- Removed textual notes and other generic notes within parentheses
- Inserted translation notes and literal notes in place (preserving grammar and adding clarity to the corpus)
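A minimal sketch of such a cleaning pass is shown below. The length threshold and the parenthetical-note pattern are illustrative assumptions (the exact rules live in the repo linked above), and the in-place insertion of translation/literal notes is omitted:
```python
import re

MIN_LINE_LENGTH = 20  # assumed threshold, not the repo's exact value

def clean_corpus(lines):
    """Deduplicate, strip parenthetical notes, and enforce a minimum length."""
    seen = set()
    cleaned = []
    for line in lines:
        # Remove textual/generic notes enclosed in parentheses.
        line = re.sub(r"\([^)]*\)", " ", line)
        line = " ".join(line.split())  # normalize whitespace left behind
        # Enforce a minimum line length and drop exact duplicates.
        if len(line) < MIN_LINE_LENGTH or line in seen:
            continue
        seen.add(line)
        cleaned.append(line)
    return cleaned
```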
## Credit
Credit for the aggregation of the raw data belongs to the [Akkademia](https://github.com/gaigutherz/Akkademia/tree/master) project. Specifically, the exact data file used as the starting dataset is linked [here](https://github.com/gaigutherz/Akkademia/blob/master/NMT_input/train.en); it was also used to train their SOTA neural machine translation Akkadian->English model, as described in their recent [paper](https://academic.oup.com/pnasnexus/article/2/5/pgad096/7147349), Gutherz et al. 2023 [1].
Credit for the original source of the raw data belongs to the incredible Open Richly Annotated Cuneiform Corpus ([ORACC](http://oracc.org)) project [2]. Specifically, as noted by the Akkademia project above, the RINAP 1, 3, 4, and 5 datasets are the source of the original raw data.
## Citations
[1] Gai Gutherz, Shai Gordin, Luis Sáenz, Omer Levy, Jonathan Berant, Translating Akkadian to English with neural machine translation, PNAS Nexus, Volume 2, Issue 5, May 2023, pgad096, https://doi.org/10.1093/pnasnexus/pgad096
[2] Jamie Novotny, Eleanor Robson, Steve Tinney, Niek Veldhuis, et al. Open Richly Annotated Cuneiform Corpus, http://oracc.org | [
-0.17851969599723816,
-0.5953118205070496,
0.30977943539619446,
-0.07263432443141937,
-0.3504795730113983,
-0.11123276501893997,
-0.3868556320667267,
-0.3139674961566925,
0.20235569775104523,
0.8498019576072693,
-0.4967932105064392,
-0.7015108466148376,
-0.45030346512794495,
0.321930080652... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
olgaptacek/NLP | olgaptacek | 2023-10-29T15:02:08Z | 111 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-10-29T15:02:08Z | 2023-10-23T07:18:53.000Z | 2023-10-23T07:18:53 | ---
license: unknown
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ostapeno/qa-openai_batched_icl0_clen512_maxD-1_maxC2500_0_cleaned | ostapeno | 2023-10-26T18:47:50Z | 111 | 0 | null | [
"region:us"
] | 2023-10-26T18:47:50Z | 2023-10-26T18:47:28.000Z | 2023-10-26T18:47:28 | ---
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: docno
dtype: string
- name: subject
dtype: string
- name: icl_examples
dtype: 'null'
- name: author_instr
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: formal_logic
num_bytes: 39657535.86953515
num_examples: 13565
- name: machine_learning
num_bytes: 41683534.56895187
num_examples: 14258
- name: global_facts
num_bytes: 37093609.665511385
num_examples: 12688
- name: abstract_algebra
num_bytes: 39666306.426675476
num_examples: 13568
- name: high_school_physics
num_bytes: 37233938.5797567
num_examples: 12736
- name: college_biology
num_bytes: 37663695.87963297
num_examples: 12883
- name: high_school_government_and_politics
num_bytes: 37605225.49869743
num_examples: 12863
- name: prehistory
num_bytes: 37163774.122634046
num_examples: 12712
- name: security_studies
num_bytes: 36520599.93234302
num_examples: 12492
- name: sociology
num_bytes: 36140542.45626196
num_examples: 12362
download_size: 55458289
dataset_size: 380428763.0
---
# Dataset Card for "qa-openai_batched_icl0_clen512_maxD-1_maxC2500_0_cleaned_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
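Since the splits above are MMLU-style subjects rather than the usual train/test partition, a single subject can be loaded directly; a minimal sketch with the standard `datasets` API:
```python
from datasets import load_dataset

# Each split is a subject (e.g. "formal_logic", "machine_learning"); see the YAML above.
ds = load_dataset(
    "ostapeno/qa-openai_batched_icl0_clen512_maxD-1_maxC2500_0_cleaned",
    split="formal_logic",
)
print(ds[0]["instruction"])
```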
-0.535457193851471,
-0.028222711756825447,
0.0563238151371479,
0.14830447733402252,
-0.4081626832485199,
-0.29023653268814087,
0.09447672963142395,
-0.02314237505197525,
0.6292781233787537,
0.6952637434005737,
-0.7856265306472778,
-0.7146309018135071,
-0.2924638092517853,
-0.05667561292648... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mteb/raw_arxiv | mteb | 2022-09-27T19:12:40Z | 110 | 1 | null | [
"language:en",
"region:us"
] | 2022-09-27T19:12:40Z | 2022-05-10T09:43:45.000Z | 2022-05-10T09:43:45 | ---
language:
- en
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qg_esquad | lmqg | 2022-12-02T18:52:05Z | 110 | 0 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:squad_es",
"language:es",
"license:cc-by-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-02T18:52:05Z | 2022-06-02T23:41:06.000Z | 2022-06-02T23:41:06 | ---
license: cc-by-4.0
pretty_name: SQuAD-es for question generation
language: es
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: squad_es
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_esquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [SQuAD-es](https://huggingface.co/datasets/squad_es) for question generation (QG) task.
Since the original dataset only contains training and validation sets, we manually sampled a test set from the training set;
the test set shares no paragraphs with the remaining training data.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset is assumed to be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Spanish (es)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'comedia musical',
'question': '¿Qué género de película protagonizó Beyonce con Cuba Gooding, Jr?',
'sentence': 'en la comedia musical ',
'paragraph': 'En julio de 2002, Beyoncé continuó su carrera como actriz interpretando a Foxxy Cleopatra junto a Mike Myers en la película de comedia, Austin Powers in Goldmember, que pasó su primer fin de semana en la cima de la taquilla de Estados Unidos. Beyoncé lanzó "Work It Out" como el primer sencillo de su álbum de banda sonora que entró en el top ten en el Reino Unido, Noruega y Bélgica. En 2003, Knowles protagonizó junto a Cuba Gooding, Jr., en la comedia musical The Fighting Temptations como Lilly, una madre soltera de quien el personaje de Gooding se enamora. Beyoncé lanzó "Fighting Temptation" como el primer sencillo de la banda sonora de la película, con Missy Elliott, MC Lyte y Free que también se utilizó para promocionar la película. Otra de las contribuciones de Beyoncé a la banda sonora, "Summertime", fue mejor en las listas de Estados Unidos.',
'sentence_answer': 'en la <hl> comedia musical <hl> ',
'paragraph_answer': 'En julio de 2002, Beyoncé continuó su carrera como actriz interpretando a Foxxy Cleopatra junto a Mike Myers en la película de comedia, Austin Powers in Goldmember, que pasó su primer fin de semana en la cima de la taquilla de Estados Unidos. Beyoncé lanzó "Work It Out" como el primer sencillo de su álbum de banda sonora que entró en el top ten en el Reino Unido, Noruega y Bélgica. En 2003, Knowles protagonizó junto a Cuba Gooding, Jr., en la <hl> comedia musical <hl> The Fighting Temptations como Lilly, una madre soltera de quien el personaje de Gooding se enamora. Beyoncé lanzó "Fighting Temptation" como el primer sencillo de la banda sonora de la película, con Missy Elliott, MC Lyte y Free que también se utilizó para promocionar la película. Otra de las contribuciones de Beyoncé a la banda sonora, "Summertime", fue mejor en las listas de Estados Unidos.',
'paragraph_sentence': 'En julio de 2002, Beyoncé continuó su carrera como actriz interpretando a Foxxy Cleopatra junto a Mike Myers en la película de comedia, Austin Powers in Goldmember, que pasó su primer fin de semana en la cima de la taquilla de Estados Unidos. Beyoncé lanzó "Work It Out" como el primer sencillo de su álbum de banda sonora que entró en el top ten en el Reino Unido, Noruega y Bélgica. En 2003, Knowles protagonizó junto a Cuba Gooding, Jr. , <hl> en la comedia musical <hl> The Fighting Temptations como Lilly, una madre soltera de quien el personaje de Gooding se enamora. Beyoncé lanzó "Fighting Temptation" como el primer sencillo de la banda sonora de la película, con Missy Elliott, MC Lyte y Free que también se utilizó para promocionar la película. Otra de las contribuciones de Beyoncé a la banda sonora, "Summertime", fue mejor en las listas de Estados Unidos.',
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features is assumed to be used to train a question generation model,
but each carries different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, and the
`paragraph_sentence` feature is for sentence-aware question generation.
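As an illustration, `paragraph_answer` can be rebuilt from the raw `paragraph`/`answer` pair with simple string operations. The first-occurrence version below is an assumption for illustration; the official preprocessing lives in the repository linked above.
```
HIGHLIGHT = "<hl>"

def make_paragraph_answer(paragraph: str, answer: str) -> str:
    # Wrap the first occurrence of the answer in <hl> tokens, mirroring
    # the `paragraph_answer` feature described above. Raises ValueError
    # if the answer string does not occur in the paragraph.
    start = paragraph.index(answer)
    end = start + len(answer)
    return f"{paragraph[:start]}{HIGHLIGHT} {answer} {HIGHLIGHT}{paragraph[end:]}"
```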
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|77025| 10570 |10570|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.412653386592865,
-0.6778170466423035,
0.29754793643951416,
0.22492849826812744,
-0.1449234038591385,
0.08900550752878189,
-0.041673727333545685,
-0.12107676267623901,
0.14059384167194366,
0.47352951765060425,
-0.9816229939460754,
-0.593352198600769,
-0.3543032109737396,
0.16993141174316... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
facebook/pmd | facebook | 2022-08-09T23:51:39Z | 110 | 28 | pmd | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2112.04482",
"arxiv:2111.11431",
"region:us... | 2022-08-09T23:51:39Z | 2022-06-20T00:52:47.000Z | 2022-06-20T00:52:47 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: pmd
pretty_name: PMD
extra_gated_prompt: |
By clicking on “Access repository” below, you also agree to individual licensing terms for each of the subset datasets of the PMD as noted at https://huggingface.co/datasets/facebook/pmd#additional-information.
---
# Dataset Card for PMD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Compared to original FLAVA paper](#compared-to-original-flava-paper)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PMD homepage](https://flava-model.github.io/)
- **Repository:** [PMD repository](https://huggingface.co/datasets/facebook/pmd)
- **Paper:** [FLAVA: A Foundational Language And Vision Alignment Model
](https://arxiv.org/abs/2112.04482)
- **Leaderboard:**
- **Point of Contact:** [Amanpreet Singh](mailto:amanpreet@nyu.edu)
### Dataset Summary
Introduced in the FLAVA paper, Public Multimodal Dataset (PMD) is a collection of publicly-available image-text pair datasets. PMD contains 70M image-text pairs in total with 68M unique images. The dataset contains pairs from Conceptual Captions, Conceptual Captions 12M, WIT, Localized Narratives, RedCaps, COCO, SBU Captions, Visual Genome and a subset of YFCC100M dataset.
If you use PMD, please cite the original FLAVA paper as follows, along with the individual datasets (see below for references):
```bibtex
@inproceedings{singh2022flava,
title={Flava: A foundational language and vision alignment model},
author={Singh, Amanpreet and Hu, Ronghang and Goswami, Vedanuj and Couairon, Guillaume and Galuba, Wojciech and Rohrbach, Marcus and Kiela, Douwe},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={15638--15650},
year={2022}
}
```
You can load this dataset by first logging into Hugging Face using `huggingface-cli login` and then running the following commands:
```py
from datasets import load_dataset
pmd = load_dataset("facebook/pmd", use_auth_token=True)
```
You can also load the dataset in streaming mode if you don't want to download the big dataset files (> 50GB locally without the images):
```py
pmd = load_dataset("facebook/pmd", use_auth_token=True, streaming=True)
```
### Dataset Preprocessing
This dataset doesn't download all of the images locally by default. Instead, it exposes URLs for some of the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_data, timeout=None, retries=0):
image_url, image = image_data
if image is not None:
return image
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(fetch_single_image_with_args, zip(batch["image_url"], batch["image"])))
return batch
num_threads = 20
dset = load_dataset("pmd", use_auth_token=True)
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
#### Save to disk
You can also save the dataset to disk for faster and direct loading next time but beware of the space required:
```py
dset.save_to_disk(</path/to/save>)
```
#### Load Subsets
You can also download a specific set from the PMD dataset by using
```py
dset = load_dataset("pmd", <choice>, use_auth_token=True)
```
The choices are:
```
"all","coco","sbu", "wit", "localized_narratives","conceptual_captions","visual_genome","conceptual_captions_12M","redcaps","yfcc100M_subset", "localized_narratives_openimages","localized_narratives_ade20k", "localized_narratives_coco"
```
#### Flickr30K Localized Narratives Subset
The Flickr30K subset of Localized Narratives is not included by default as it requires a manual download. You can include it by downloading the tar file from [here](http://shannon.cs.illinois.edu/DenotationGraph/data/index.html) (after signing an agreement) to `</path/to/Downloads>`, and then loading either the whole PMD or just the Localized Narratives subset by:
```py
dset = load_dataset("pmd", data_dir=</path/to/Downloads/flickr30k-images.tar.gz>, use_auth_token=True, use_flickr30k_ln=True)
# Load LN subset only
dset = load_dataset("pmd", "localized_narratives", data_dir=</path/to/Downloads/flickr30k-images.tar.gz>, use_auth_token=True, use_flickr30k_ln=True)
```
#### Facing issues?
If you are facing issues, you can try loading a specific revision of the repo by using:
```py
dset = load_dataset("pmd", use_auth_token=True, revision="311cd48")
```
### Supported Tasks and Leaderboards
In the FLAVA paper, the dataset has been used to pretrain the FLAVA model as a source of well-aligned image-text pairs. This allows having a generic vision-and-language model which can be fine-tuned for a variety of tasks.
We anticipate that the dataset can be used to train deep neural networks that perform image captioning and that learn transferable visual representations for a variety of downstream visual recognition tasks (image classification, object detection, instance segmentation). We also anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks, such as image or text retrieval or text-to-image synthesis.
### Languages
All of the subsets in PMD use English as their primary language.
## Dataset Structure
### Data Instances
Each instance in PMD represents a single image-text pair:
```
{
'image_url': None,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCFF86A1E80>,
'text': 'A woman wearing a net on her head cutting a cake. ',
'source': 'coco',
'meta': '{\n "annotation": [\n "A woman wearing a net on her head cutting a cake. "\n ],\n "image_path": "zip:/val2014/COCO_val2014_000000522418.jpg::http:/images.cocodataset.org/zips/val2014.zip"\n}'
}
```
### Data Fields
- `image_url`: Static URL for downloading the image associated with the text. Can be `None` if image is locally available.
- `image`: A PIL Image object for the image associated with the text. Can be `None` if image is not locally available.
- `text`: `str`, A textual description corresponding to the image.
- `source`: `str`, The PMD subset which this pair is from.
- `meta`: `str`, A json representation of the original annotation from the dataset.
### Data Splits
All the data is contained in the training set. The training set has nearly 70M instances.
We intend for this dataset to be used primarily for pre-training, with one or more specific downstream task(s) in mind; thus, all of the instances should be used for pretraining. We specifically make sure that there is no overlap with Karpathy's COCO validation set, so users can use that subset for any validation purposes. Users can also load Karpathy's val subset by specifying the "validation" split while loading PMD; this will also load the "validation" splits of other subsets, where available.
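For example, following the note above, the validation data can be loaded on its own with the same API as the earlier snippets:
```py
from datasets import load_dataset

# Loads Karpathy's COCO val subset, plus other subsets' validation splits where available.
pmd_val = load_dataset("facebook/pmd", split="validation", use_auth_token=True)
```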
## Dataset Creation
### Curation Rationale
From the paper:
> Purely contrastive methods, however, also have important shortcomings. Their cross-modal nature does not make them easily usable on multimodal problems that require dealing with both modalities at the same time. They require large corpora, which for both CLIP and ALIGN have not been made accessible to the research community and the details of which remain shrouded in mystery, notwithstanding well-known issues with the construction of such datasets
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> **Data Collection Pipeline**
- For the YFCC100M dataset, we filter the image-text data by discarding non-English captions and only keeping captions (taken from the description field of each image) that contain more than two words; if the description does not pass our filters, we consider the title field instead. Other than that, we did not do any additional filtering.
- For the VisualGenome, COCO and Localized Narratives subsets, we remove any overlaps with Karpathy's COCO val and test sets.
- For Localized Narratives, we split the original caption, which is a paragraph, into multiple captions using the spaCy library and take the Cartesian product, so that each sample is a separate image-text pair.
#### Compared to original FLAVA paper
The PMD dataset in this repo doesn't correspond exactly (1:1) to the original PMD dataset used in the [FLAVA](https://arxiv.org/abs/2112.04482) paper, even though this repo is built by the same authors. This is due to the difficulty of reproducing the WiT and YFCC100M subsets exactly. This repo in general contains more data than the PMD in the FLAVA paper and hence should probably result in better performance.
#### Who are the source language producers?
Please refer to the original dataset papers to understand where the content is coming from.
### Annotations
#### Annotation process
The dataset is a combination of existing public datasets with some filtering applied on top, so there is no annotation process involved.
#### Who are the annotators?
Please refer to the original dataset papers to understand where the content is coming from.
### Personal and Sensitive Information
Please refer to the original dataset papers to understand where the content is coming from. For example, a detailed description on this for RedCaps can be found [here](https://huggingface.co/datasets/red_caps).
## Considerations for Using the Data
### Social Impact of Dataset
From the paper:
> **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g.,
a data protection impact analysis) been conducted?**
No.
### Discussion of Biases
Please refer to the original dataset papers to understand where the content is coming from. For example, a detailed description on this for RedCaps can be found [here](https://huggingface.co/datasets/red_caps).
### Other Known Limitations
From the paper:
> **Are there any errors, sources of noise, or redundancies in the dataset?**
PMD is noisy by design, since image-text pairs on the internet are noisy and unstructured. However, since it contains sources such as COCO, Visual Genome, and Localized Narratives, which are hand-curated by annotators, it also has a lot of well-aligned data, making it definitely more aligned than e.g. LAION.
Some instances may also have duplicate images and captions, but this should have almost no effect on training large-scale models.
> **Does the dataset contain data that might be considered confidential (e.g., data that is
protected by legal privilege or by doctor-patient confidentiality, data that includes the
content of individuals non-public communications)?**
Not that the authors know of. Please refer to the original dataset papers to understand where the content is coming from. For example, a detailed description on this for RedCaps can be found [here](https://huggingface.co/datasets/red_caps).
## Additional Information
### Dataset Curators
The authors of the original dataset papers, as well as the authors of the FLAVA paper (Amanpreet, Ronghang, Vedanuj, Guillaume, Wojciech, Marcus and Douwe).
### Licensing Information
Here are the individual licenses from each of the datasets that apply if you use this dataset:
#### COCO
The annotations in the COCO dataset belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License.
The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.
#### Conceptual Captions
The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
#### WIT
This data is available under the [Creative Commons Attribution-ShareAlike 3.0 Unported](LICENSE) license.
#### Visual Genome
Visual Genome by Ranjay Krishna et al is licensed under a Creative Commons Attribution 4.0 International License.
#### Localized Narratives
All the annotations available through this website are released under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license. You are free to redistribute and modify the annotations, but we ask you to please keep the original attribution to our paper.
#### YFCC100M
Use of the original media files is subject to the Creative Commons licenses chosen by their creators/uploaders. License information for each media file can be found within [the YFCC100M metadata](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/#yfcc100m). Use of the dataset is subject to the relevant Webscope License Agreement, which you need to agree to if you use this dataset.
#### RedCaps
The image metadata is licensed under the CC-BY 4.0 license. Additionally, uses of this dataset are subject to the Reddit API terms (https://www.reddit.com/wiki/api-terms) and users must comply with the Reddit User Agreement, Content Policy,
and Privacy Policy – all accessible at https://www.redditinc.com/policies.
Similar to RedCaps:
> PMD should only be used for non-commercial research. PMD should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial and for-profit uses of PMD are restricted – it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies.
### Citation Information
Please cite the main FLAVA paper in which PMD was introduced along with each of the subsets used in PMD as follows:
```bibtex
@inproceedings{singh2022flava,
title={Flava: A foundational language and vision alignment model},
author={Singh, Amanpreet and Hu, Ronghang and Goswami, Vedanuj and Couairon, Guillaume and Galuba, Wojciech and Rohrbach, Marcus and Kiela, Douwe},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={15638--15650},
year={2022}
}
@article{chen2015microsoft,
title={Microsoft coco captions: Data collection and evaluation server},
author={Chen, Xinlei and Fang, Hao and Lin, Tsung-Yi and Vedantam, Ramakrishna and Gupta, Saurabh and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
journal={arXiv preprint arXiv:1504.00325},
year={2015}
}
@inproceedings{ordonez2011sbucaptions,
Author = {Vicente Ordonez and Girish Kulkarni and Tamara L. Berg},
Title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
Booktitle = {Neural Information Processing Systems ({NIPS})},
Year = {2011},
}
@article{krishna2017visual,
title={Visual genome: Connecting language and vision using crowdsourced dense image annotations},
author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and others},
journal={International journal of computer vision},
volume={123},
number={1},
pages={32--73},
year={2017},
publisher={Springer}
}
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
@inproceedings{sharma2018conceptual,
title={Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning},
author={Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={2556--2565},
year={2018}
}
@inproceedings{changpinyo2021conceptual,
title={Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts},
author={Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={3558--3568},
year={2021}
}
@inproceedings{ponttuset2020localized,
author = {Jordi Pont-Tuset and Jasper Uijlings and Soravit Changpinyo and Radu Soricut and Vittorio Ferrari},
title = {Connecting Vision and Language with Localized Narratives},
booktitle = {ECCV},
year = {2020}
}
@article{thomee2016yfcc100m,
title={YFCC100M: The new data in multimedia research},
author={Thomee, Bart and Shamma, David A and Friedland, Gerald and Elizalde, Benjamin and Ni, Karl and Poland, Douglas and Borth, Damian and Li, Li-Jia},
journal={Communications of the ACM},
volume={59},
number={2},
pages={64--73},
year={2016},
publisher={ACM New York, NY, USA}
}
@misc{desai2021redcaps,
title={RedCaps: web-curated image-text data created by the people, for the people},
author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson},
year={2021},
eprint={2111.11431},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@aps](https://github.com/apsdehal), [Thomas Wang](https://huggingface.co/TimeRobber), and [@VictorSanh](https://huggingface.co/VictorSanh) for adding this dataset. | [
-0.5551300644874573,
-0.652538537979126,
0.03564465790987015,
0.2168930172920227,
-0.3868687152862549,
-0.06918193399906158,
-0.11522149294614792,
-0.35995277762413025,
0.28181585669517517,
0.5603950619697571,
-0.6918100714683533,
-0.5360130667686462,
-0.5171694755554199,
0.208714678883552... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-samsum-samsum-bf100b-1520255007 | autoevaluate | 2022-09-21T02:23:16Z | 110 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T02:23:16Z | 2022-09-21T02:15:16.000Z | 2022-09-21T02:15:16 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
metrics: ['rouge']
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | [
-0.35565653443336487,
-0.12213422358036041,
0.17954502999782562,
0.26600345969200134,
-0.07290307432413101,
-0.1384354829788208,
-0.005853169597685337,
-0.3789929449558258,
0.3149338662624359,
0.44476786255836487,
-1.027416467666626,
-0.18990029394626617,
-0.6850699782371521,
0.01353981532... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1400000-1450000 | tomekkorbak | 2022-10-04T23:57:23Z | 110 | 0 | null | [
"region:us"
] | 2022-10-04T23:57:23Z | 2022-10-04T23:57:15.000Z | 2022-10-04T23:57:15 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1350000-1400000 | tomekkorbak | 2022-10-05T00:06:19Z | 110 | 0 | null | [
"region:us"
] | 2022-10-05T00:06:19Z | 2022-10-05T00:06:11.000Z | 2022-10-05T00:06:11 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Salesforce/rose | Salesforce | 2023-06-07T21:00:52Z | 110 | 7 | null | [
"language:en",
"region:us"
] | 2023-06-07T21:00:52Z | 2022-12-14T20:13:26.000Z | 2022-12-14T20:13:26 | ---
language:
- en
---
# ROSE 🌹
This repo contains the RoSE benchmark of our paper "Revisiting the Gold Standard:
Grounding Summarization Evaluation with Robust Human Evaluation".
Please visit [here](https://yale-lily.github.io/ROSE/) for a demo page of this project.
### ACU Annotations
The RoSE benchmark contains system outputs annotated with our ACU protocol.
It contains four parts:
- CNNDM, test set annotations
- CNNDM, validation set annotations
- XSum, test set annotations
- SamSum, test set annotations
We summarize the statistics below.
| Dataset | Split | #Doc. | #Sys. | #Total Summ. | HF Name |
| --- | --- | --- | --- | --- | --- |
| CNNDM | Test | 500 | 12 | 6000 | `cnndm_test` |
| CNNDM | Validation | 1000 | 8 | 8000 | `cnndm_validation` |
| XSum | Test | 500 | 8 | 4000 | `xsum` |
| SamSum | Test | 500 | 8 | 4000 | `samsum` |
### Human Annotations with Different Evaluation Protocols
We have system outputs annotated with four different human evaluation protocols in total.
We summarize them below.
| Protocol | w/ Input Document | w/ Reference Summary | Fine-grained |
| --- | --- | --- | --- |
| Prior | ✗ | ✗ | ✗ |
| Ref-free | ✓ | ✗ | ✗ |
| Ref-based | ✗ | ✓ | ✗ |
| ACU | ✗ | ✓ | ✓ |
We annotated two sets of system summaries; a loading sketch follows this list.
1. Summaries of 12 fine-tuned systems. The huggingface data split name is `cnndm_protocol`.
2. Zero-shot summaries from large language models (GPT3, T0), together with summaries from BRIO and BART. The huggingface data split name is `cnndm_protocol_gpt3`.
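A minimal loading sketch, assuming the HF names above are exposed as splits of the default config (adjust to config names if loading fails):
```python
from datasets import load_dataset

# "cnndm_test" is one of the HF names from the ACU table above.
acu_cnndm = load_dataset("Salesforce/rose", split="cnndm_test")
print(acu_cnndm[0])
```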
| [
-0.5044113993644714,
-0.41126129031181335,
0.01708885282278061,
0.1940893679857254,
-0.0780734047293663,
-0.11767864227294922,
-0.146683931350708,
-0.4668864607810974,
0.3223966062068939,
0.3456762433052063,
-0.3119603097438812,
-0.512177050113678,
-0.5518437027931213,
0.27240341901779175,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kuroneko5943/stock11 | kuroneko5943 | 2023-01-16T04:11:18Z | 110 | 6 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:zh",
"license:apache-2.0",
"stock",
"region:u... | 2023-01-16T04:11:18Z | 2023-01-10T12:13:05.000Z | 2023-01-10T12:13:05 | ---
annotations_creators:
- machine-generated
language:
- zh
language_creators:
- crowdsourced
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: stock11
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- stock
task_categories:
- text-classification
task_ids:
- sentiment-classification
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Gholamreza/pquad | Gholamreza | 2023-02-18T15:00:06Z | 110 | 3 | squad | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:fa",
"license:cc-by-sa-4.0",
"regio... | 2023-02-18T15:00:06Z | 2023-02-18T14:02:25.000Z | 2023-02-18T14:02:25 | ---
pretty_name: PQuAD
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- fa
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_id: squad
train-eval-index:
- config: pquad
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: validation
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: pquad
name: PQuAD
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: pquad
splits:
- name: train
num_bytes: ...
num_examples: 63994
- name: validation
num_bytes: ...
num_examples: 7976
- name: test
num_bytes: ...
num_examples: 8002
download_size: ...
dataset_size: ...
---
# Dataset Card for "pquad"
## PQuAD Description
**THIS IS A NON-OFFICIAL VERSION OF THE DATASET UPLOADED TO HUGGINGFACE BY [Gholamreza Dar](https://huggingface.co/Gholamreza)**
*The original repository for the dataset is https://github.com/AUT-NLP/PQuAD*
PQuAD is a crowd-sourced reading comprehension dataset for the Persian language. It includes 80,000
questions along with their answers, with 25% of the questions being unanswerable. As a reading
comprehension dataset, it requires a system to read a passage and then answer the given questions
from the passage. PQuAD's questions are based on Persian Wikipedia articles and cover a wide
variety of subjects. Articles used for question generation are quality-checked and include a small
number of non-Persian words.
## Dataset Splits
The dataset is divided into three categories including train, validation, and test sets and the
statistics of these sets are as follows:
```
+----------------------------+-------+------------+------+-------+
| | Train | Validation | Test | Total |
+----------------------------+-------+------------+------+-------+
| Total Questions | 63994 | 7976 | 8002 | 79972 |
| Unanswerable Questions | 15721 | 1981 | 1914 | 19616 |
| Mean # of paragraph tokens | 125 | 121 | 124 | 125 |
| Mean # of question tokens | 10 | 11 | 11 | 10 |
| Mean # of answer tokens | 5 | 6 | 5 | 5 |
+----------------------------+-------+------------+------+-------+
```
Workers were encouraged to use paraphrased sentences in their questions and to avoid choosing
answers that contain non-Persian words. Another group of crowdworkers validated the questions and
answers in the test and validation sets to ensure their quality. They also provided additional
answers to the questions in the test and validation sets where possible. This helps cover all
possible types of answers and allows a better evaluation of models.
PQuAD is stored in the JSON format and consists of passages, where each passage is linked to a
set of questions. The answer(s) to a question are specified by their spans (the start and end
points of the answer in the paragraph), and unanswerable questions are marked as such.
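A minimal sketch of consuming these spans with the `datasets` library, assuming the split names from the YAML header above and that unanswerable questions carry empty `answers` lists, SQuAD-v2 style:
```
from datasets import load_dataset

pquad = load_dataset("Gholamreza/pquad", split="train")

ex = pquad[0]
if ex["answers"]["text"]:  # empty for unanswerable questions (assumed)
    start = ex["answers"]["answer_start"][0]
    answer = ex["answers"]["text"][0]
    # The span indexes directly into the context paragraph.
    assert ex["context"][start:start + len(answer)] == answer
```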
## Results
The estimated human performance on the test set is 88.3% for F1 and 80.3% for EM. We have
evaluated PQuAD using two pre-trained transformer-based language models, namely ParsBERT
(Farahani et al., 2021) and XLM-RoBERTa (Conneau et al., 2020), as well as BiDAF (Levy et
al., 2017) which is an attention-based model proposed for MRC.
```
+-------------+------+------+-----------+-----------+-------------+
| Model | EM | F1 | HasAns_EM | HasAns_F1 | NoAns_EM/F1 |
+-------------+------+------+-----------+-----------+-------------+
| BNA | 54.4 | 71.4 | 43.9 | 66.4 | 87.6 |
| ParsBERT | 68.1 | 82.0 | 61.5 | 79.8 | 89.0 |
| XLM-RoBERTa | 74.8 | 87.6 | 69.1 | 86.0 | 92.7 |
| Human | 80.3 | 88.3 | 74.9 | 85.6 | 96.8 |
+-------------+------+------+-----------+-----------+-------------+
```
## LICENSE
PQuAD is developed by Mabna Intelligent Computing at Amirkabir Science and Technology Park, in
collaboration with the NLP lab of the Amirkabir University of Technology, and is supported by the
Vice-Presidency for Science and Technology. By releasing this dataset, we aim to ease research
on Persian reading comprehension and the development of Persian question answering systems.
This work is licensed under a
[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].
[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]
[cc-by-sa]: http://creativecommons.org/licenses/by-sa/4.0/
[cc-by-sa-image]: https://licensebuttons.net/l/by-sa/4.0/88x31.png
[cc-by-sa-shield]: https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg
# Dataset Card for "pquad" | [
-0.6269329190254211,
-0.8181945085525513,
0.3412850499153137,
0.2178245484828949,
-0.17983794212341309,
0.04921197518706322,
-0.07449977844953537,
0.06250335276126862,
0.07322154194116592,
0.4611361026763916,
-0.5459779500961304,
-0.4615735709667206,
-0.35371023416519165,
0.340835422277450... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
teelinsan/camoscio_cleaned | teelinsan | 2023-11-06T18:03:28Z | 110 | 1 | null | [
"language:it",
"region:us"
] | 2023-11-06T18:03:28Z | 2023-04-05T15:42:59.000Z | 2023-04-05T15:42:59 | ---
language: it
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 20903457.244625207
num_examples: 50245
download_size: 13083590
dataset_size: 20903457.244625207
---
# Dataset Card for "camoscio_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
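Given the `instruction`/`input`/`output` schema above, a hedged Alpaca-style prompt formatter; the Italian template wording below is an illustrative assumption, not the template actually used to train Camoscio:
```python
def format_example(ex: dict) -> str:
    # Hypothetical Italian Alpaca-style template (see note above).
    if ex["input"]:
        return (
            f"Istruzione: {ex['instruction']}\n"
            f"Input: {ex['input']}\n"
            f"Risposta: {ex['output']}"
        )
    return f"Istruzione: {ex['instruction']}\nRisposta: {ex['output']}"
```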
-0.446868896484375,
-0.0404939204454422,
0.08360325545072556,
0.033395081758499146,
-0.5169499516487122,
0.0380924716591835,
0.25658223032951355,
-0.3160189986228943,
0.8880683183670044,
0.689066469669342,
-0.9038402438163757,
-0.8325232863426208,
-0.4439278244972229,
-0.30290350317955017,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sam-mosaic/full-hh-rlhf-chatml | sam-mosaic | 2023-07-18T00:28:22Z | 110 | 2 | null | [
"language:en",
"region:us"
] | 2023-07-18T00:28:22Z | 2023-04-26T00:27:24.000Z | 2023-04-26T00:27:24 | ---
language: en
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 155301546
num_examples: 147351
- name: test
num_bytes: 16963667
num_examples: 16255
download_size: 68690705
dataset_size: 172265213
---
# Dataset Card for "full-hh-rlhf-chatml-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5247677564620972,
-0.6827741265296936,
0.20294448733329773,
0.3604907989501953,
-0.21663880348205566,
0.08827590197324753,
-0.10334879159927368,
-0.26557520031929016,
0.952696681022644,
0.7375037670135498,
-0.8874675035476685,
-0.9012860059738159,
-0.5920429825782776,
-0.144778072834014... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
totally-not-an-llm/EverythingLM-data-V3 | totally-not-an-llm | 2023-09-11T02:54:38Z | 110 | 16 | null | [
"license:mit",
"region:us"
] | 2023-09-11T02:54:38Z | 2023-09-08T01:52:43.000Z | 2023-09-08T01:52:43 | ---
license: mit
---
# EverythingLM V3 Dataset
**EverythingLM V3** is a diverse instruct dataset consisting of roughly 1.1k sysprompt-user-assistant triads. These were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.
### Differences from V2
* Used the March GPT-4 snapshot instead of the latest one
* Dynamically adjusted temperature based on the task
* Much more diverse (8 new categories)
* Flesch hints
* 10% more data
* Better filtering
* Overall refined dataset generation pipeline
### Category distribution

\*These values represent the data as generated, but slight filtering has been applied, so values might be a bit different. | [
-0.3599257171154022,
-0.3789980709552765,
0.4204085171222687,
-0.08460879325866699,
-0.38584762811660767,
-0.12288113683462143,
0.4172137677669525,
-0.46181392669677734,
0.19191060960292816,
0.7027303576469421,
-0.8061946034431458,
-0.7231189012527466,
-0.3201015293598175,
0.06703829765319... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Shreyasrp/Text-to-SQL | Shreyasrp | 2023-09-28T17:04:10Z | 110 | 0 | null | [
"region:us"
] | 2023-09-28T17:04:10Z | 2023-09-28T17:02:58.000Z | 2023-09-28T17:02:58 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/ultrachat | vietgpt | 2023-11-03T10:35:43Z | 110 | 0 | null | [
"region:us"
] | 2023-11-03T10:35:43Z | 2023-11-03T10:28:43.000Z | 2023-11-03T10:28:43 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_1024
num_bytes: 1395429967.5845509
num_examples: 270812
- name: train_2048
num_bytes: 2779271371.8960648
num_examples: 539375
- name: train_4096
num_bytes: 3360683349.1806855
num_examples: 652210
download_size: 3454050489
dataset_size: 7535384688.661301
configs:
- config_name: default
data_files:
- split: train_1024
path: data/train_1024-*
- split: train_2048
path: data/train_2048-*
- split: train_4096
path: data/train_4096-*
---
# Dataset Card for "ultrachat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5197355151176453,
-0.4682696759700775,
0.13905860483646393,
0.12446847558021545,
-0.33432674407958984,
0.14185196161270142,
0.30839619040489197,
-0.2997829020023346,
0.9359764456748962,
0.486726313829422,
-0.8611190915107727,
-0.6926054358482361,
-0.19206537306308746,
-0.404416531324386... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BeIR/hotpotqa-generated-queries | BeIR | 2022-10-23T06:15:30Z | 109 | 0 | beir | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-23T06:15:30Z | 2022-06-17T13:20:35.000Z | 2022-06-17T13:20:35 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
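For example, once a dataset zip from the table below has been downloaded and unzipped, it can be loaded with the `beir` package's `GenericDataLoader` (a minimal sketch; the local folder path is a placeholder):
```python
from beir.datasets.data_loader import GenericDataLoader

# data_path points at an unzipped BEIR dataset folder containing
# corpus.jsonl, queries.jsonl and a qrels/ directory (format described below).
data_path = "datasets/hotpotqa"
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
print(len(corpus), len(queries), len(qrels))
```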
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`. A small parsing sketch follows this list.
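A qrels file in this layout can be read with a few lines of standard-library Python (a sketch; the path is a placeholder):
```python
import csv

qrels = {}
with open("qrels/test.tsv", newline="") as f:
    reader = csv.reader(f, delimiter="\t")
    next(reader)  # skip the header row
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
```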
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | [
-0.5227212905883789,
-0.5249219536781311,
0.14435674250125885,
0.04820423573255539,
0.055916160345077515,
0.0011022627586498857,
-0.1081070527434349,
-0.24874727427959442,
0.28598034381866455,
0.07840226590633392,
-0.45233607292175293,
-0.7186435461044312,
-0.347678542137146,
0.20300328731... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-samsum-samsum-f593d1-14645991 | autoevaluate | 2022-08-31T01:18:28Z | 109 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T01:18:28Z | 2022-08-31T00:52:23.000Z | 2022-08-31T00:52:23 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | [
-0.31333306431770325,
-0.10268408805131912,
0.3777298629283905,
0.1894034445285797,
-0.17671018838882446,
-0.06241088733077049,
-0.0021348081063479185,
-0.42800676822662354,
0.25124451518058777,
0.4217090308666229,
-0.9184761047363281,
-0.31039896607398987,
-0.7171832323074341,
0.012452458... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1450000-1500000 | tomekkorbak | 2022-10-04T23:56:05Z | 109 | 0 | null | [
"region:us"
] | 2022-10-04T23:56:05Z | 2022-10-04T23:55:57.000Z | 2022-10-04T23:55:57 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1300000-1350000 | tomekkorbak | 2022-10-05T00:06:32Z | 109 | 0 | null | [
"region:us"
] | 2022-10-05T00:06:32Z | 2022-10-05T00:06:23.000Z | 2022-10-05T00:06:23 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mxeval/multi-humaneval | mxeval | 2023-03-20T19:20:48Z | 109 | 3 | null | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"mxeval",
"code-generation",
"multi-humaneval",
"humaneval",
"arxiv:2210.14868",
"region:us"
] | 2023-03-20T19:20:48Z | 2023-03-14T21:37:18.000Z | 2023-03-14T21:37:18 | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: language
dtype: string
- name: prompt
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
splits:
- name: multi-humaneval_python
num_bytes: 165716
num_examples: 164
download_size: 67983
dataset_size: 165716
license: apache-2.0
task_categories:
- text-generation
tags:
- mxeval
- code-generation
- multi-humaneval
- humaneval
pretty_name: multi-humaneval
language:
- en
---
# Multi-HumanEval
## Table of Contents
- [multi-humaneval](#multi-humaneval)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#related-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Executional Correctness](#execution)
- [Execution Example](#execution-example)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
# multi-humaneval
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)
### Dataset Summary
This repository contains data and code to perform execution-based multi-lingual evaluation of code generation capabilities, namely a multi-lingual benchmark MBXP, multi-lingual MathQA and multi-lingual HumanEval.
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).
### Related Tasks and Leaderboards
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x)
### Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.
## Dataset Structure
To look up the currently supported datasets:
```python
from datasets import get_dataset_config_names

get_dataset_config_names("mxeval/multi-humaneval")
['python', 'csharp', 'go', 'java', 'javascript', 'kotlin', 'perl', 'php', 'ruby', 'scala', 'swift', 'typescript']
```
To load a specific dataset and language
```python
from datasets import load_dataset
load_dataset("mxeval/multi-humaneval", "python")
DatasetDict({
test: Dataset({
features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution', 'description'],
num_rows: 164
})
})
```
### Data Instances
An example of a dataset instance:
```python
{
"task_id": "HumanEval/0",
"language": "python",
"prompt": "from typing import List\n\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n \"\"\" Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True\n \"\"\"\n",
"test": "\n\nMETADATA = {\n \"author\": \"jt\",\n \"dataset\": \"test\"\n}\n\n\ndef check(candidate):\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False\n assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True\n assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False\n assert candidate([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True\n assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True\n assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False\n\n",
"entry_point": "has_close_elements",
"canonical_solution": " for idx, elem in enumerate(numbers):\n for idx2, elem2 in enumerate(numbers):\n if idx != idx2:\n distance = abs(elem - elem2)\n if distance < threshold:\n return True\n\n return False\n",
"description": "Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True"
}
```
### Data Fields
- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `description`: task description
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
- `language`: programming language identifier used to select the appropriate subprocess call for program execution
### Data Splits
- Multi-HumanEval
- Python
- Csharp
- Go
- Java
- Javascript
- Kotlin
- Perl
- Php
- Ruby
- Scala
- Swift
- Typescript
## Dataset Creation
### Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in the dump was necessary to properly evaluate the model. However, since this dataset was published on GitHub, it is likely to be included in future dumps.
### Personal and Sensitive Information
None.
### Social Impact of Dataset
With this dataset, code-generating models can be evaluated more reliably, which leads to fewer issues being introduced when such models are used.
## Execution
### Execution Example
Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.
```python
>>> from datasets import load_dataset
>>> from mxeval.execution import check_correctness
>>> humaneval_python = load_dataset("mxeval/multi-humaneval", "python", split="test")
>>> example_problem = humaneval_python[0]
>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
{'task_id': 'HumanEval/0', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 9.636878967285156}
```
### Considerations for Using the Data
Make sure to sandbox the execution environment.
### Dataset Curators
AWS AI Labs
### Licensing Information
[LICENSE](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/multi-humaneval-LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/THIRD_PARTY_LICENSES)
### Citation Information
```
@article{mbxp_athiwaratkun2022,
title = {Multi-lingual Evaluation of Code Generation Models},
author = {Athiwaratkun, Ben and
Gouda, Sanjay Krishna and
Wang, Zijian and
Li, Xiaopeng and
Tian, Yuchen and
Tan, Ming
and Ahmad, Wasi Uddin and
Wang, Shiqi and
Sun, Qing and
Shang, Mingyue and
Gonugondla, Sujan Kumar and
Ding, Hantian and
Kumar, Varun and
Fulton, Nathan and
Farahani, Arash and
Jain, Siddhartha and
Giaquinto, Robert and
Qian, Haifeng and
Ramanathan, Murali Krishna and
Nallapati, Ramesh and
Ray, Baishakhi and
Bhatia, Parminder and
Sengupta, Sudipta and
Roth, Dan and
Xiang, Bing},
doi = {10.48550/ARXIV.2210.14868},
url = {https://arxiv.org/abs/2210.14868},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
### Contributions
[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi) | [
-0.44534820318222046,
-0.6333296895027161,
0.21997924149036407,
0.3178040385246277,
0.10116156190633774,
0.03617110848426819,
-0.3158940076828003,
-0.22532688081264496,
0.0018888923805207014,
0.37522828578948975,
-0.5600725412368774,
-0.7436904311180115,
-0.40894559025764465,
0.21722115576... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FreedomIntelligence/HuatuoGPT-sft-data-v1 | FreedomIntelligence | 2023-06-01T11:05:15Z | 109 | 50 | null | [
"license:apache-2.0",
"region:us"
] | 2023-06-01T11:05:15Z | 2023-05-25T08:09:22.000Z | 2023-05-25T08:09:22 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
edarchimbaud/timeseries-1m-stocks | edarchimbaud | 2023-11-21T10:02:43Z | 109 | 1 | null | [
"task_categories:tabular-regression",
"language:en",
"license:mit",
"region:us"
] | 2023-11-21T10:02:43Z | 2023-05-29T13:50:59.000Z | 2023-05-29T13:50:59 | ---
language:
- en
license: mit
task_categories:
- tabular-regression
dataset_info:
features:
- name: symbol
dtype: string
- name: datetime
dtype: timestamp[ns]
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: volume
dtype: float64
splits:
- name: train
num_bytes: 183543516
num_examples: 3283794
download_size: 83707584
dataset_size: 183543516
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "timeseries-1mn-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The "timeseries-1mn-sp500" dataset provides one-minute time-series data for the S&P 500 index constituents.
### Supported Tasks and Leaderboards
This dataset is suitable for tasks such as time-series forecasting, volatility prediction, and high-frequency trading strategy development.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): The ticker symbol or abbreviation used to identify the company.
- datetime (timestamp): The date and time of the stock quote, at nanosecond precision.
- open (float64): The opening price of the stock at the given datetime.
- high (float64): The highest price of the stock during the given minute.
- low (float64): The lowest price of the stock during the given minute.
- close (float64): The closing price of the stock at the given datetime.
- volume (float64): The volume of the stock traded during the given minute.
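A minimal sketch of working with these fields in pandas (the return computation is illustrative, not part of the dataset):
```python
from datasets import load_dataset

ds = load_dataset("edarchimbaud/timeseries-1m-stocks", split="train")
df = ds.to_pandas()

# One-minute close-to-close returns, computed per symbol.
df = df.sort_values(["symbol", "datetime"])
df["return"] = df.groupby("symbol")["close"].pct_change()
print(df.groupby("symbol")["return"].std().head())
```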
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The "timeseries-1mn-sp500" dataset was created to support high-frequency trading algorithms and time-series forecasting models.
### Source Data
#### Initial Data Collection and Normalization
The data was sourced from the web and normalized.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The timeseries-1mn-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The timeseries-1mn-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, timeseries-daily-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. | [
-0.6160652041435242,
-0.3723445534706116,
-0.14262953400611877,
0.5172303915023804,
-0.3475489020347595,
0.02425421215593815,
0.05749769136309624,
-0.138374462723732,
0.7380400896072388,
0.33140334486961365,
-1.2259576320648193,
-0.7680742740631104,
-0.5234858393669128,
-0.0119346790015697... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TigerResearch/tigerbot-gsm-8k-en | TigerResearch | 2023-05-31T01:38:37Z | 109 | 0 | null | [
"language:en",
"license:mit",
"region:us"
] | 2023-05-31T01:38:37Z | 2023-05-30T15:44:37.000Z | 2023-05-30T15:44:37 | ---
license: mit
language:
- en
---
[Tigerbot](https://github.com/TigerResearch/TigerBot) data derived from the gsm8k dataset.
GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade-school math word problems. It was created to support question answering on basic math problems that require multi-step reasoning.
Original source: [https://huggingface.co/datasets/gsm8k](https://huggingface.co/datasets/gsm8k)
<p align="center" width="40%">
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-gsm-8k-en')
``` | [
-0.40969711542129517,
-0.3823505640029907,
0.027621062472462654,
0.2676045000553131,
-0.31371286511421204,
0.05172079801559448,
-0.03609197959303856,
0.006701203528791666,
0.6245484948158264,
0.1736486554145813,
-0.32093191146850586,
-0.5676502585411072,
-0.5686726570129395,
0.067487373948... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RealTimeData/wikitext_latest | RealTimeData | 2023-11-27T00:51:58Z | 109 | 0 | null | [
"region:us"
] | 2023-11-27T00:51:58Z | 2023-08-19T20:04:41.000Z | 2023-08-19T20:04:41 | ---
{}
---
# Latest Wikitext
You could always access the latest Wikipedia texts via this dataset.
We update the dataset weekly, every Sunday, so the dataset always provides the latest Wikipedia texts from the past week.
The current dataset on main branch contains the latest wikipedia texts created from 2023-11-13 to 2023-11-20.
The data collection is conducted on 2023-11-27.
Use the dataset via:
```
ds = datasets.load_dataset('RealTimeData/wikitext_latest')
```
# Previous versions
You could access previous versions by requesting different branches.
For example, you could find the 2023-08-12 version via:
```
ds = datasets.load_dataset('RealTimeData/wikitext_latest', revision = '2023-08-12')
```
Check all available versions by clicking the "Files and versions" button on the top bar.
| [
-0.6052226424217224,
-0.38294684886932373,
0.3012738525867462,
0.1531004160642624,
-0.36794859170913696,
-0.013941384851932526,
-0.28184136748313904,
-0.7518946528434753,
0.5807453393936157,
0.5786906480789185,
-1.1379876136779785,
-0.32952189445495605,
-0.31626471877098083,
0.427734941244... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nampdn-ai/tiny-lessons | nampdn-ai | 2023-08-29T05:58:57Z | 109 | 11 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"source_datasets:nampdn-ai/tiny-en",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-08-29T05:58:57Z | 2023-08-25T08:11:13.000Z | 2023-08-25T08:11:13 | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- en
pretty_name: Tiny Lessons
size_categories:
- 10K<n<100K
source_datasets:
- nampdn-ai/tiny-en
---
# Tiny Lessons
The dataset is designed to help causal language models learn more effectively from raw web text. It is augmented from public web text and contains two key components: theoretical concepts and practical examples.
The theoretical concepts provide a foundation for understanding the underlying principles and ideas behind the information contained in the raw web text. The practical examples demonstrate how these theoretical concepts can be applied in real-world situations.
This dataset is an ideal resource for ML researchers working with causal language models. I hope you find it useful and welcome any feedback or suggestions you may have.
[View Nomic Atlas](https://atlas.nomic.ai/map/af5b399c-caa4-4ea9-8efc-7165972de209/c096774c-f979-4337-a5ea-08ea18be9fa0) | [
-0.1643323004245758,
-0.908229649066925,
0.5619146227836609,
-0.03695516288280487,
-0.15737777948379517,
-0.29654112458229065,
-0.24323828518390656,
-0.32871246337890625,
-0.27369746565818787,
0.6264523267745972,
-0.6757074594497681,
-0.5164805054664612,
-0.15873929858207703,
-0.0276682879... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AudioDecBenchmark/librispeech_asr | AudioDecBenchmark | 2023-11-16T18:23:16Z | 109 | 0 | null | [
"region:us"
] | 2023-11-16T18:23:16Z | 2023-11-16T17:53:45.000Z | 2023-11-16T17:53:45 | ---
configs:
- config_name: default
data_files:
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k
path: data/encodec_24k-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
- name: unit
sequence:
sequence: int64
splits:
- name: academicodec_hifi_16k_320d
num_bytes: 585566013
num_examples: 28539
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 585566013
num_examples: 28539
- name: academicodec_hifi_24k_320d
num_bytes: 875207613
num_examples: 28539
- name: audiodec_24k_320d
num_bytes: 1861784589
num_examples: 28539
- name: dac_16k
num_bytes: 3591614845
num_examples: 28539
- name: dac_24k
num_bytes: 10062423533
num_examples: 28539
- name: dac_44k
num_bytes: 2982824761
num_examples: 28539
- name: encodec_24k
num_bytes: 441025925
num_examples: 28539
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 4649508077
num_examples: 28539
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 4649508077
num_examples: 28539
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 4647663597
num_examples: 28539
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 2330511341
num_examples: 28539
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 4647663597
num_examples: 28539
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 4647663597
num_examples: 28539
- name: speech_tokenizer_16k
num_bytes: 1166450829
num_examples: 28539
download_size: 7544903765
dataset_size: 47724982407
---
# Dataset Card for "librispeech_asr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5808478593826294,
-0.1274283081293106,
0.02622130513191223,
0.20501159131526947,
-0.21268627047538757,
-0.022895684465765953,
0.2088983803987503,
-0.21366102993488312,
0.8432310819625854,
0.4606691300868988,
-0.7298866510391235,
-0.6203452944755554,
-0.6001136898994446,
-0.4009852111339... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BennoKrojer/ImageCoDe | BennoKrojer | 2022-05-13T21:26:08Z | 108 | 1 | null | [
"license:afl-3.0",
"arxiv:2203.15867",
"region:us"
] | 2022-05-13T21:26:08Z | 2022-05-05T21:50:13.000Z | 2022-05-05T21:50:13 | ---
license: afl-3.0
---
# Dataset Card for ImageCoDe
To get started quickly, load descriptions via:
```
from datasets import load_dataset
examples = load_dataset('BennoKrojer/ImageCoDe')
```
And download `image_sets.zip` for all images sets (each directory consisting of 10 images).
## Dataset Description
- **Homepage & Leaderboard:** https://mcgill-nlp.github.io/imagecode/
- **Repository:** https://github.com/McGill-NLP/imagecode
- **Paper:** https://arxiv.org/abs/2203.15867
- **Point of Contact:** benno DOT krojer ÄT gmail DOT com
### Dataset Summary
We introduce ImageCoDe, a vision-and-language benchmark that requires contextual language understanding in the form of pragmatics, temporality, long descriptions and visual nuances. The task: given a detailed description, retrieve the target image among 10 minimally contrastive images. ImageCoDe contains 21K descriptions and 94K images. The images are primarily frames taken from video datasets.
## Dataset Structure
### Data Instances
An instance contains a description, the corresponding image set name, and the target index:
```
{"image_set": "video-storytelling-videowedding_de8dLXvgV-I-shot6_0",
"image_index": "8",
"description": "The flowers the woman in the teal strapless dress is carrying are completely obscured by the man in the black shirt's head. "}
```
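A minimal sketch of resolving an instance against the unzipped `image_sets` directory (the split name and the assumption that sorted file names line up with `image_index` are illustrative; check the repository for the exact layout):
```python
import os
from datasets import load_dataset

examples = load_dataset("BennoKrojer/ImageCoDe", split="validation")
example = examples[0]

# Each image set is a directory of 10 frames; we assume sorted file
# names correspond to the annotated image_index.
image_dir = os.path.join("image_sets", example["image_set"])
files = sorted(os.listdir(image_dir))
target_image = files[int(example["image_index"])]
print(example["description"], "->", target_image)
```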
### Data Splits
| Dataset Split | Number of Descriptions in Split |
| ------------- |----------------------------- |
| Train | 16,594 |
| Validation | 2,302 |
| Test | 2,306 |
## Dataset Creation
### Curation Rationale
The main goal of ImageCoDe is to highlight weaknesses of recent Vision-and-Language models regarding complex language and fine-grained visual representations. In addition, we found that the dataset offers plenty of pragmatic examples and is therefore suitable for studying pragmatics. | [
-0.49022912979125977,
-0.44283777475357056,
0.08061131834983826,
0.3040899336338043,
-0.5173161029815674,
-0.1840566247701645,
-0.38973328471183777,
-0.5273292660713196,
-0.00526672787964344,
0.5506930947303772,
-0.376693457365036,
-0.7983500957489014,
-0.5722777247428894,
0.13546213507652... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sagot/lefff_morpho | sagot | 2022-07-23T15:52:46Z | 108 | 0 | null | [
"license:lgpl-lr",
"region:us"
] | 2022-07-23T15:52:46Z | 2022-06-12T19:19:49.000Z | 2022-06-12T19:19:49 | ---
license: lgpl-lr
---
# Dataset Card for lefff morpho
## Dataset Description
- **Homepage:** [http://almanach.inria.fr/software_and_resources/custom/Alexina-en.html](http://almanach.inria.fr/software_and_resources/custom/Alexina-en.html)
- **Repository:** [https://gitlab.inria.fr/almanach/alexina/lefff](https://gitlab.inria.fr/almanach/alexina/lefff)
- **Paper:** [http://www.lrec-conf.org/proceedings/lrec2010/pdf/701_Paper.pdf](http://www.lrec-conf.org/proceedings/lrec2010/pdf/701_Paper.pdf)
- **Point of Contact:** [Benoît Sagot](benoit.sagot@inria.fr)
### Dataset Summary
The Lefff, currently in its 3.5 version, is one of the main morphological and syntactic lexicons for French. This Hugging Face dataset provides an easy access to the extensional morphological information in the Lefff, i.e. to the 4-uples (form, lemma, category, morphosyntactic features) and to the amalgams (e.g. _aux_ = _à_ + _les_) it contains. Category and morphosyntactic features are provided both in the original Lefff format and following the UniMorph guidelines.
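A minimal sketch of loading the lexicon through the `datasets` library (split and column names are not assumed beyond what inspection reveals; check the repository for the exact schema):
```python
from datasets import load_dataset

lefff = load_dataset("sagot/lefff_morpho")
print(lefff)  # inspect the available splits and columns

# Each row is one (form, lemma, category, morphosyntactic features) 4-uple.
first_split = next(iter(lefff.values()))
print(first_split[0])
```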
### Languages
French
## Dataset Creation
The main author of the resource is Benoît Sagot (Inria, France).
Please refer to the main paper and other Lefff-related papers for details.
## Additional Information
### Licensing Information
The dataset, as the whole Lefff, is distributed under the LGPL-LR licence.
### Citation Information
The main paper regarding the Lefff can be found [here](https://aclanthology.org/L10-1487/). Here is the BibTeX entry for the paper:
```
@inproceedings{sagot:inria-00521242,
TITLE = {{The Lefff, a freely available and large-coverage morphological and syntactic lexicon for French}},
AUTHOR = {Sagot, Beno{\^i}t},
URL = {https://hal.inria.fr/inria-00521242},
BOOKTITLE = {{7th international conference on Language Resources and Evaluation (LREC 2010)}},
ADDRESS = {Valletta, Malta},
YEAR = {2010},
MONTH = May,
PDF = {https://hal.inria.fr/inria-00521242/file/lrec10lefff.pdf},
HAL_ID = {inria-00521242},
HAL_VERSION = {v1},
}
```
For specific parts of speech or other parts of the lexicon, please cite the corresponding papers whenever relevant.
| [
-0.5107287764549255,
-0.4811716675758362,
0.16972561180591583,
0.23136380314826965,
-0.40689435601234436,
-0.4383668601512909,
0.01563902571797371,
-0.413093239068985,
0.5921201705932617,
0.5604336857795715,
-0.6457216739654541,
-0.5888851881027222,
-0.5071545243263245,
0.3501257300376892,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jordane95/msmarco-passage-with-query | jordane95 | 2023-03-01T09:13:35Z | 108 | 0 | null | [
"license:afl-3.0",
"region:us"
] | 2023-03-01T09:13:35Z | 2022-07-26T08:25:24.000Z | 2022-07-26T08:25:24 | ---
license: afl-3.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/csabstruct | allenai | 2022-11-02T17:54:38Z | 108 | 3 | null | [
"license:apache-2.0",
"arxiv:1909.04054",
"region:us"
] | 2022-11-02T17:54:38Z | 2022-11-02T17:15:53.000Z | 2022-11-02T17:15:53 | ---
license: apache-2.0
---
# CSAbstruct
CSAbstruct was created as part of *"Pretrained Language Models for Sequential Sentence Classification"* ([ACL Anthology][2], [arXiv][1], [GitHub][6]).
It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][3] categories.
## Dataset Construction Details
CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles.
The key difference between this dataset and [PUBMED-RCT][3] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
Therefore, there is more variety in writing styles in CSAbstruct.
CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][4].
Each sentence is annotated by 5 workers on the [Figure-eight platform][5], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`.
We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers.
Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions.
A confidence score is associated with each instance based on the annotator initial accuracy and agreement of all annotators on that instance.
We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.
Agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task.
Compared with [PUBMED-RCT][3], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
## Dataset Statistics
| Statistic | Avg ± std |
|--------------------------|-------------|
| Doc length in sentences | 6.7 ± 1.99 |
| Sentence length in words | 21.8 ± 10.0 |
| Label | % in Dataset |
|---------------|--------------|
| `BACKGROUND` | 33% |
| `METHOD` | 32% |
| `RESULT` | 21% |
| `OBJECTIVE` | 12% |
| `OTHER` | 03% |
## Citation
If you use this dataset, please cite the following paper:
```
@inproceedings{Cohan2019EMNLP,
title={Pretrained Language Models for Sequential Sentence Classification},
  author={Arman Cohan and Iz Beltagy and Daniel King and Bhavana Dalvi and Dan Weld},
year={2019},
booktitle={EMNLP},
}
```
[1]: https://arxiv.org/abs/1909.04054
[2]: https://aclanthology.org/D19-1383
[3]: https://github.com/Franck-Dernoncourt/pubmed-rct
[4]: https://aclanthology.org/N18-3011/
[5]: https://www.figure-eight.com/
[6]: https://github.com/allenai/sequential_sentence_classification
| [
-0.13634133338928223,
-0.49286168813705444,
0.48875364661216736,
0.32775238156318665,
-0.1182292252779007,
0.08746952563524246,
-0.33201053738594055,
-0.38772091269493103,
0.1699417680501938,
0.37507107853889465,
-0.36007800698280334,
-0.7854762673377991,
-0.6725773215293884,
0.48659208416... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Linkseed/hacker_news_with_comments | Linkseed | 2023-01-06T05:44:10Z | 108 | 3 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:afl-3.0",
"CommentGenerate",
"region:us"
] | 2023-01-06T05:44:10Z | 2023-01-04T06:19:34.000Z | 2023-01-04T06:19:34 | ---
annotations_creators: []
language:
- en
language_creators:
- found
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: 'hacker_news_with_comments '
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- CommentGenerate
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for hacker_news_with_comments
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Hacker News posts up to 2015, with comments. Collected from the Google BigQuery open dataset. We did no pre-processing except removing HTML tags.
### Supported Tasks and Leaderboards
Comment Generation; News analysis with comments; Other comment-based NLP tasks.
### Languages
English
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.4450254738330841,
-0.6070502400398254,
0.17034846544265747,
0.3482944071292877,
-0.24773073196411133,
0.2973489463329315,
-0.4535415768623352,
-0.38158118724823,
0.681170642375946,
0.6568627953529358,
-0.8583935499191284,
-1.1967769861221313,
-0.7108033299446106,
0.0969822108745575,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
orkg/SciQA | orkg | 2023-05-22T10:13:44Z | 108 | 3 | null | [
"task_categories:question-answering",
"annotations_creators:expert-generated",
"annotations_creators:auto-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"knowledge-base-qa... | 2023-05-22T10:13:44Z | 2023-03-17T09:55:39.000Z | 2023-03-17T09:55:39 | ---
annotations_creators:
- expert-generated
- auto-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'The SciQA Scientific Question Answering Benchmark for Scholarly Knowledge'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- knowledge-base-qa
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for SciQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SciQA Homepage]()
- **Repository:** [SciQA Repository](https://zenodo.org/record/7744048)
- **Paper:** The SciQA Scientific Question Answering Benchmark for Scholarly Knowledge
- **Point of Contact:** [Yaser Jaradeh](mailto:Yaser.Jaradeh@tib.eu)
### Dataset Summary
SciQA contains 2,565 SPARQL query - question pairs along with answers fetched from the open research knowledge graph (ORKG) via a Virtuoso SPARQL endpoint, it is a collection of both handcrafted and autogenerated questions and queries. The dataset is split into 70% training, 10% validation and 20% test examples.
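A minimal sketch of loading the benchmark and reading one question/SPARQL pair (field names follow the instance shown in the next subsection; `train` is one of the three splits described below):
```python
from datasets import load_dataset

sciqa = load_dataset("orkg/SciQA", split="train")

# Each example pairs a natural-language question with its SPARQL query.
example = sciqa[0]
print(example["question"]["string"])
print(example["query"]["sparql"])
```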
## Dataset Structure
### Data Instances
An example of a question is given below:
```json
{
"id": "AQ2251",
"query_type": "Factoid",
"question": {
"string": "Provide a list of papers that have utilized the Depth DDPPO model and include the links to their code?"
},
"paraphrased_question": [],
"query": {
"sparql": "SELECT DISTINCT ?code\nWHERE {\n ?model a orkgc:Model;\n rdfs:label ?model_lbl.\n FILTER (str(?model_lbl) = \"Depth DDPPO\")\n ?benchmark orkgp:HAS_DATASET ?dataset.\n ?cont orkgp:HAS_BENCHMARK ?benchmark.\n ?cont orkgp:HAS_MODEL ?model;\n orkgp:HAS_SOURCE_CODE ?code.\n}"
},
"template_id": "T07",
"auto_generated": true,
"query_shape": "Tree",
"query_class": "WHICH-WHAT",
"number_of_patterns": 4,
}
```
### Data Fields
- `id`: the id of the question
- `question`: a string containing the question
- `paraphrased_question`: a set of paraphrased versions of the question
- `query`: a SPARQL query that answers the question
- `query_type`: the type of the query
- `query_template`: an optional template of the query
- `query_shape`: a string indicating the shape of the query
- `query_class`: a string indicating the class of the query
- `auto_generated`: a boolean indicating whether the question is auto-generated or not
- `number_of_patterns`: an integer indicating the number of graph patterns in the query
### Data Splits
The dataset is split into 70% training, 10% validation and 20% test questions.
## Additional Information
### Licensing Information
SciQA is licensed under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bibtex
@Article{SciQA2023,
author={Auer, S{\"o}ren
and Barone, Dante A. C.
and Bartz, Cassiano
and Cortes, Eduardo G.
and Jaradeh, Mohamad Yaser
and Karras, Oliver
and Koubarakis, Manolis
and Mouromtsev, Dmitry
and Pliukhin, Dmitrii
and Radyush, Daniil
and Shilin, Ivan
and Stocker, Markus
and Tsalapati, Eleni},
title={The SciQA Scientific Question Answering Benchmark for Scholarly Knowledge},
journal={Scientific Reports},
year={2023},
month={May},
day={04},
volume={13},
number={1},
pages={7240},
abstract={Knowledge graphs have gained increasing popularity in the last decade in science and technology. However, knowledge graphs are currently relatively simple to moderate semantic structures that are mainly a collection of factual statements. Question answering (QA) benchmarks and systems were so far mainly geared towards encyclopedic knowledge graphs such as DBpedia and Wikidata. We present SciQA a scientific QA benchmark for scholarly knowledge. The benchmark leverages the Open Research Knowledge Graph (ORKG) which includes almost 170,000 resources describing research contributions of almost 15,000 scholarly articles from 709 research fields. Following a bottom-up methodology, we first manually developed a set of 100 complex questions that can be answered using this knowledge graph. Furthermore, we devised eight question templates with which we automatically generated further 2465 questions, that can also be answered with the ORKG. The questions cover a range of research fields and question types and are translated into corresponding SPARQL queries over the ORKG. Based on two preliminary evaluations, we show that the resulting SciQA benchmark represents a challenging task for next-generation QA systems. This task is part of the open competitions at the 22nd International Semantic Web Conference 2023 as the Scholarly Question Answering over Linked Data (QALD) Challenge.},
issn={2045-2322},
doi={10.1038/s41598-023-33607-z},
url={https://doi.org/10.1038/s41598-023-33607-z}
}
```
### Contributions
Thanks to [@YaserJaradeh](https://github.com/YaserJaradeh) for adding this dataset. | [
-0.3858511745929718,
-0.5372563004493713,
0.3144291341304779,
-0.0010038031032308936,
-0.045738380402326584,
0.11877600848674774,
0.3165912330150604,
-0.2557389438152313,
0.1890321671962738,
0.3735135793685913,
-0.6721643209457397,
-0.8211470246315002,
-0.34128338098526,
0.3346547484397888... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
azhx/counterfact | azhx | 2023-04-07T21:22:57Z | 108 | 0 | null | [
"region:us"
] | 2023-04-07T21:22:57Z | 2023-04-07T21:18:02.000Z | 2023-04-07T21:18:02 | ---
dataset_info:
features:
- name: case_id
dtype: int64
- name: pararel_idx
dtype: int64
- name: requested_rewrite
struct:
- name: prompt
dtype: string
- name: relation_id
dtype: string
- name: subject
dtype: string
- name: target_new
struct:
- name: id
dtype: string
- name: str
dtype: string
- name: target_true
struct:
- name: id
dtype: string
- name: str
dtype: string
- name: paraphrase_prompts
sequence: string
- name: neighborhood_prompts
sequence: string
- name: attribute_prompts
sequence: string
- name: generation_prompts
sequence: string
splits:
- name: train
num_bytes: 29388723
num_examples: 19728
- name: test
num_bytes: 3268668
num_examples: 2191
download_size: 12387190
dataset_size: 32657391
---
# Dataset Card for "counterfact"
Dataset from [ROME](https://rome.baulab.info/) by Meng et al.
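A minimal sketch of reading one rewrite request (field names follow the schema above; loading via `load_dataset` with the hosted splits is assumed):
```python
from datasets import load_dataset

counterfact = load_dataset("azhx/counterfact", split="train")

# Each case rewrites one factual association from a true to a new target.
rewrite = counterfact[0]["requested_rewrite"]
print(rewrite["subject"], "|", rewrite["prompt"])
print("true:", rewrite["target_true"]["str"], "-> new:", rewrite["target_new"]["str"])
```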
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7234761118888855,
-0.5250997543334961,
0.49867796897888184,
0.11798069626092911,
-0.2142942249774933,
-0.34468963742256165,
0.5403725504875183,
-0.3115007281303406,
0.6801092624664307,
0.40104278922080994,
-0.9241460561752319,
-0.45675820112228394,
-0.42390379309654236,
-0.2940788567066... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
center-for-humans-and-machines/style-diffusion | center-for-humans-and-machines | 2023-06-30T17:45:02Z | 108 | 0 | null | [
"region:us"
] | 2023-06-30T17:45:02Z | 2023-05-16T11:27:45.000Z | 2023-05-16T11:27:45 | ---
dataset_info:
features:
- name: vectorId
dtype: string
- name: medianYear
dtype: int32
- name: embedding
sequence: float32
splits:
- name: train
num_bytes: 3448928
num_examples: 1113
download_size: 0
dataset_size: 3448928
---
# Dataset Card for "style-diffusion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.668200671672821,
-0.6735074520111084,
0.2729896008968353,
0.5028841495513916,
-0.05928656831383705,
0.08142135292291641,
0.22840102016925812,
-0.0956682339310646,
1.0463929176330566,
0.3290972113609314,
-0.8248421549797058,
-0.7879114747047424,
-0.6529308557510376,
-0.4969281554222107,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
asapp/slue-phase-2 | asapp | 2023-08-01T16:05:43Z | 108 | 4 | null | [
"arxiv:2212.10525",
"region:us"
] | 2023-08-01T16:05:43Z | 2023-05-31T04:10:08.000Z | 2023-05-31T04:10:08 |
### Dataset description
**(Jul. 11 2023) Detailed information will be released soon.**
- **Toolkit Repository:** [https://github.com/asappresearch/slue-toolkit/](https://github.com/asappresearch/slue-toolkit/)
- **Paper:** [https://arxiv.org/abs/2212.10525](https://arxiv.org/abs/2212.10525)
### Licensing Information
#### SLUE-HVB
SLUE-HVB dataset contains a subset of the Gridspace-Stanford Harper Valley speech dataset, and this subset remains under the original license, CC-BY-4.0. See also the original license notice (https://github.com/cricketclub/gridspace-stanford-harper-valley/blob/master/LICENSE).
Additionally, we provide dialog act classification annotations, covered under the same CC-BY-4.0 license.
#### SLUE-SQA-5
SLUE-SQA-5 Dataset contains question texts and answer strings (question_text, normalized_question_text, and answer_spans column in .tsv files) from these datasets,
* SQuAD1.1 (for questions whose question_id starts with ‘squad-’)
* Natural Questions (for questions whose question_id starts with ‘nq-’)
* WebQuestions (for questions whose question_id starts with ‘wq-’)
* CuratedTREC (for questions whose question_id starts with ‘trec-’)
* TriviaQA (for questions whose question_id starts with ‘triviaqa-’)
Additionally, we provide audio recordings (.wav files in “question” directories) of these questions.
For questions from TriviaQA (questions whose question_id starts with ‘triviaqa-’), their question texts, answer strings, and audio recordings are licensed with the same Apache License 2.0 as TriviaQA (for more detail, please refer to https://github.com/mandarjoshi90/triviaqa/blob/master/LICENSE).
For questions from the other 4 datasets, their question texts, answer strings, and audio recordings are licensed with Creative Commons Attribution-ShareAlike 4.0 International license.
SLUE-SQA-5 also contains a subset of Spoken Wikipedia, including the audios placed in “document” directories and their transcripts (document_text and normalized_document_text column in .tsv files). Additionally, we provide the text-to-speech alignments (.txt files in “word2time” directories).These contents are licensed with the same Creative Commons (CC BY-SA 4.0) license as Spoken Wikipedia.
#### SLUE-TED
SLUE-TED Dataset contains TED Talk audios along with the associated abstracts and title, which were concatenated to create reference summaries. This corpus is licensed with the same Creative Commons (CC BY–NC–ND 4.0 International) license as TED talks. For further information, please refer to the details provided below.
=============================
TED.com
We encourage you to share TED Talks under our Creative Commons license (CC BY–NC–ND 4.0 International), which means they may be shared under the conditions below:
CC: means the type of license rights associated with TED Talks, or Creative Commons
BY: means the requirement to include an attribution to TED as the owner of the TED Talk and include a link to the talk, but do not include any other TED branding on your website or platform, or language that may imply an endorsement.
NC: means you cannot use TED Talks in any commercial context or to gain any type of revenue, payment or fee from the license sublicense, access or usage of TED Talks in an app of any kind for any advertising, or in exchange for payment of any kind, including in any ad supported content or format.
ND: means that no derivative works are permitted so you cannot edit, remix, create, modify or alter the form of the TED Talks in any way. This includes using the TED Talks as the basis for another work, including dubbing, voice-overs, or other translations not authorized by TED. You may not add any more restrictions that we have placed on the TED site content, such as additional legal or technological restrictions on accessing the content.
| [
-0.5830408930778503,
-0.7150639891624451,
0.30779096484184265,
0.0873766541481018,
-0.1359289139509201,
0.2454410195350647,
-0.21719498932361603,
-0.6490463018417358,
0.44893330335617065,
0.4025323688983917,
-0.8373618125915527,
-0.6671264171600342,
-0.24247878789901733,
-0.011035947129130... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathancui/oxford-pets | jonathancui | 2023-08-01T02:55:57Z | 108 | 0 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2023-08-01T02:55:57Z | 2023-08-01T02:49:22.000Z | 2023-08-01T02:49:22 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Abyssinian
'1': Bengal
'2': Birman
'3': Bombay
'4': British_Shorthair
'5': Egyptian_Mau
'6': Maine_Coon
'7': Persian
'8': Ragdoll
'9': Russian_Blue
'10': Siamese
'11': Sphynx
'12': american_bulldog
'13': american_pit_bull_terrier
'14': basset_hound
'15': beagle
'16': boxer
'17': chihuahua
'18': english_cocker_spaniel
'19': english_setter
'20': german_shorthaired
'21': great_pyrenees
'22': havanese
'23': japanese_chin
'24': keeshond
'25': leonberger
'26': miniature_pinscher
'27': newfoundland
'28': pomeranian
'29': pug
'30': saint_bernard
'31': samoyed
'32': scottish_terrier
'33': shiba_inu
'34': staffordshire_bull_terrier
'35': wheaten_terrier
'36': yorkshire_terrier
splits:
- name: train
num_bytes: 378015144.64
num_examples: 3680
- name: test
num_bytes: 412951221.999
num_examples: 3669
download_size: 790031129
dataset_size: 790966366.6389999
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gonced8/multi-session_chat | gonced8 | 2023-08-25T10:59:38Z | 108 | 2 | null | [
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"license:gpl-3.0",
"region:us"
] | 2023-08-25T10:59:38Z | 2023-08-25T10:56:33.000Z | 2023-08-25T10:56:33 | ---
license: gpl-3.0
task_categories:
- conversational
language:
- en
pretty_name: Multi-Session Chat
size_categories:
- 100K<n<1M
---
This is not my dataset; I only cleaned the dataset from [ParlAI - MSC](https://parl.ai/projects/msc/). | [
-0.3080507516860962,
-0.3757933974266052,
0.19991442561149597,
-0.13584376871585846,
-0.12158896028995514,
0.28922441601753235,
0.17723348736763,
0.28416892886161804,
0.5686337947845459,
0.9341280460357666,
-0.6473384499549866,
-0.7953563332557678,
-0.3340257406234741,
0.09556035697460175,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ds4sd/SynthTabNet_OTSL | ds4sd | 2023-08-31T17:14:02Z | 108 | 1 | null | [
"task_categories:object-detection",
"task_categories:table-to-text",
"size_categories:10K<n<100K",
"license:other",
"table-structure-recognition",
"table-understanding",
"PDF",
"arxiv:2305.03393",
"region:us"
] | 2023-08-31T17:14:02Z | 2023-08-31T16:07:02.000Z | 2023-08-31T16:07:02 | ---
license: other
pretty_name: SynthTabNet-OTSL
size_categories:
- 10K<n<100K
tags:
- table-structure-recognition
- table-understanding
- PDF
task_categories:
- object-detection
- table-to-text
---
# Dataset Card for SynthTabNet_OTSL
## Dataset Description
- **Homepage:** https://ds4sd.github.io
- **Paper:** https://arxiv.org/pdf/2305.03393
### Dataset Summary
This dataset is a conversion of the original [SynthTabNet](https://github.com/IBM/SynthTabNet) into the OTSL format presented in our paper "Optimized Table Tokenization for Table Structure Recognition". The dataset includes the original annotations alongside new additions.
SynthTabNet is organized into 4 parts of 150k tables (600k in total). Each part contains tables with different appearances in regard to their size, structure, style and content. All parts are divided into Train, Test and Val splits.
| Appearance style | Records |
|------------------|---------|
| Fintabnet | 150k |
| Marketing | 150k |
| PubTabNet | 150k |
| Sparse | 150k |
### Dataset Structure
* cells: original dataset cell groundtruth (content).
* otsl: new reduced table structure token format
* html: original dataset groundtruth HTML (structure).
* html_restored: generated HTML from OTSL.
* cols: grid column length.
* rows: grid row length.
* image: PIL image
### OTSL Vocabulary:
**OTSL**: new reduced table structure token format
More information on the OTSL table structure format and its concepts can be found in our paper.
The format of this dataset extends the work presented in the paper and introduces slight modifications:
* "fcel" - cell that has content in it
* "ecel" - cell that is empty
* "lcel" - left-looking cell (to handle horizontally merged cells)
* "ucel" - up-looking cell (to handle vertically merged cells)
* "xcel" - 2d span cells, in this dataset - covers entire area of a merged cell
* "nl" - new line token
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Additional Information
### Dataset Curators
The dataset is converted by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Maksym Lysak, [@maxmnemonic](https://github.com/maxmnemonic)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Nikos Livathinos, [@nikos-livathinos](https://github.com/nikos-livathinos)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Citation Information
```bib
@misc{lysak2023optimized,
title={Optimized Table Tokenization for Table Structure Recognition},
author={Maksym Lysak and Ahmed Nassar and Nikolaos Livathinos and Christoph Auer and Peter Staar},
year={2023},
eprint={2305.03393},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| [
-0.3719627857208252,
-0.3971422016620636,
0.39279547333717346,
-0.006500875111669302,
-0.538472056388855,
-0.05051061883568764,
0.024575090035796165,
-0.3642318844795227,
0.5841729640960693,
0.23452214896678925,
-0.3801017701625824,
-0.9082655310630798,
-0.12375261634588242,
0.309562236070... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
advancedcv/Food500Cap | advancedcv | 2023-10-19T02:01:16Z | 108 | 0 | null | [
"region:us"
] | 2023-10-19T02:01:16Z | 2023-10-19T01:25:20.000Z | 2023-10-19T01:25:20 | ---
dataset_info:
features:
- name: image
dtype: image
- name: cat
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 3004559279.747
num_examples: 19877
- name: test
num_bytes: 601407879.384
num_examples: 4938
download_size: 3000710601
dataset_size: 3605967159.131
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "caps_data_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.428194135427475,
-0.31581878662109375,
0.15265892446041107,
0.29607871174812317,
-0.10885544866323471,
0.12004929035902023,
0.27408790588378906,
-0.21326960623264313,
0.842714786529541,
0.5049811005592346,
-0.8169654011726379,
-0.6480903625488281,
-0.8615956902503967,
-0.354502737522125... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Empolyon2/PokemonDataset | Empolyon2 | 2023-11-21T05:25:11Z | 108 | 0 | null | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"text",
"image",
"region:us"
] | 2023-11-21T05:25:11Z | 2023-11-16T20:59:40.000Z | 2023-11-16T20:59:40 | ---
license: apache-2.0
task_categories:
- image-classification
language:
- en
tags:
- text
- image
size_categories:
- 1K<n<10K
pretty_name: PokemonDataset
---
# Dataset Card for Pokemon Gen 1
## Dataset Description
- **Short Description:** This dataset comprises images along with corresponding textual prompts. It contains 149 subfolders, each representing a unique category, with multiple images. Each category is associated with specific prompts, as detailed in an accompanying Excel sheet.
- **Purpose:** The dataset is designed for training models that can understand and generate Pokemon images based on textual prompts.
- **Data Collection and Processing:** Images were sourced from [source of images]. Textual prompts were created to accurately describe or relate to the images. Images were processed for resizing, removing bad data, normalization, augmentation, and enhancement.
## Dataset Structure
- **Data Instances:** A typical data instance consists of a textual prompt and a corresponding image path.
- **Data Fields:**
- `prompt`: A string containing the textual description or cue associated with the image.
- `image_file`: The path to the image file related to the prompt.
- **Data Splits:** The dataset is not explicitly split. All instances are part of a single batch. Users can create training, validation, and test splits as needed.
## Dataset Creation
- **Creators:** This dataset was created by Kerem Topalismailoglu.
- **Motivation:** APS360.
## Additional Information
- **Curation Rationale:** The dataset was curated to cover a diverse range of images and corresponding descriptive prompts.
- **Source Data:** The images were sourced from [source], ensuring a wide variety of visual content.
- **Annotations:** The dataset does not include additional annotations beyond the image-prompt pairs.
## Usage
- **Using the Dataset with Hugging Face:**
```python
from datasets import load_dataset
dataset = load_dataset("path_to_my_dataset")
```
## Dataset Card Creation
- **Who Created the Dataset Card:** [Your Name/Organization]
## Citation
- **Citations:** [Include any relevant citations for the dataset or sources of the images.] | [
-0.6224988698959351,
-0.4358091652393341,
0.11127515882253647,
0.34130382537841797,
-0.3617318868637085,
-0.32084617018699646,
-0.05523310974240303,
-0.36028867959976196,
0.6941421627998352,
0.37163394689559937,
-1.0301672220230103,
-0.7403709888458252,
-0.5437121987342834,
0.2983253300189... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/recast | metaeval | 2023-06-02T14:40:17Z | 107 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"nli",
"natural-lan... | 2023-06-02T14:40:17Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'recast_nli'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
tags:
- nli
- natural-language-inference
---
http://decomp.io/ | [
-0.49288272857666016,
-0.678532600402832,
0.6035715937614441,
0.08044896274805069,
-0.6187820434570312,
0.21800199151039124,
0.2787841558456421,
-0.2099856287240982,
0.721234917640686,
0.6405073404312134,
-0.93291836977005,
-0.7481644153594971,
-0.10006964951753616,
-0.07529575377702713,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
olm/wikipedia | olm | 2023-11-16T11:28:22Z | 107 | 24 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categ... | 2023-11-16T11:28:22Z | 2022-10-04T18:07:56.000Z | 2022-10-04T18:07:56 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
language:
- aa
- ab
- ace
- af
- ak
- als
- am
- an
- ang
- ar
- arc
- arz
- as
- ast
- atj
- av
- ay
- az
- azb
- ba
- bar
- bcl
- be
- bg
- bh
- bi
- bjn
- bm
- bn
- bo
- bpy
- br
- bs
- bug
- bxr
- ca
- cbk
- cdo
- ce
- ceb
- ch
- cho
- chr
- chy
- ckb
- co
- cr
- crh
- cs
- csb
- cu
- cv
- cy
- da
- de
- din
- diq
- dsb
- dty
- dv
- dz
- ee
- el
- eml
- en
- eo
- es
- et
- eu
- ext
- fa
- ff
- fi
- fj
- fo
- fr
- frp
- frr
- fur
- fy
- ga
- gag
- gan
- gd
- gl
- glk
- gn
- gom
- gor
- got
- gu
- gv
- ha
- hak
- haw
- he
- hi
- hif
- ho
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- ie
- ig
- ii
- ik
- ilo
- inh
- io
- is
- it
- iu
- ja
- jam
- jbo
- jv
- ka
- kaa
- kab
- kbd
- kbp
- kg
- ki
- kj
- kk
- kl
- km
- kn
- ko
- koi
- krc
- ks
- ksh
- ku
- kv
- kw
- ky
- la
- lad
- lb
- lbe
- lez
- lfn
- lg
- li
- lij
- lmo
- ln
- lo
- lrc
- lt
- ltg
- lv
- lzh
- mai
- mdf
- mg
- mh
- mhr
- mi
- min
- mk
- ml
- mn
- mr
- mrj
- ms
- mt
- mus
- mwl
- my
- myv
- mzn
- na
- nah
- nan
- nap
- nds
- ne
- new
- ng
- nl
- nn
- 'no'
- nov
- nrf
- nso
- nv
- ny
- oc
- olo
- om
- or
- os
- pa
- pag
- pam
- pap
- pcd
- pdc
- pfl
- pi
- pih
- pl
- pms
- pnb
- pnt
- ps
- pt
- qu
- rm
- rmy
- rn
- ro
- ru
- rue
- rup
- rw
- sa
- sah
- sat
- sc
- scn
- sco
- sd
- se
- sg
- sgs
- sh
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- srn
- ss
- st
- stq
- su
- sv
- sw
- szl
- ta
- tcy
- tdt
- te
- tg
- th
- ti
- tk
- tl
- tn
- to
- tpi
- tr
- ts
- tt
- tum
- tw
- ty
- tyv
- udm
- ug
- uk
- ur
- uz
- ve
- vec
- vep
- vi
- vls
- vo
- vro
- wa
- war
- wo
- wuu
- xal
- xh
- xmf
- yi
- yo
- yue
- za
- zea
- zh
- zu
language_bcp47:
- nds-nl
config_names:
- 20220301.aa
- 20220301.ab
- 20220301.ace
- 20220301.ady
- 20220301.af
- 20220301.ak
- 20220301.als
- 20220301.am
- 20220301.an
- 20220301.ang
- 20220301.ar
- 20220301.arc
- 20220301.arz
- 20220301.as
- 20220301.ast
- 20220301.atj
- 20220301.av
- 20220301.ay
- 20220301.az
- 20220301.azb
- 20220301.ba
- 20220301.bar
- 20220301.bat-smg
- 20220301.bcl
- 20220301.be
- 20220301.be-x-old
- 20220301.bg
- 20220301.bh
- 20220301.bi
- 20220301.bjn
- 20220301.bm
- 20220301.bn
- 20220301.bo
- 20220301.bpy
- 20220301.br
- 20220301.bs
- 20220301.bug
- 20220301.bxr
- 20220301.ca
- 20220301.cbk-zam
- 20220301.cdo
- 20220301.ce
- 20220301.ceb
- 20220301.ch
- 20220301.cho
- 20220301.chr
- 20220301.chy
- 20220301.ckb
- 20220301.co
- 20220301.cr
- 20220301.crh
- 20220301.cs
- 20220301.csb
- 20220301.cu
- 20220301.cv
- 20220301.cy
- 20220301.da
- 20220301.de
- 20220301.din
- 20220301.diq
- 20220301.dsb
- 20220301.dty
- 20220301.dv
- 20220301.dz
- 20220301.ee
- 20220301.el
- 20220301.eml
- 20220301.en
- 20220301.eo
- 20220301.es
- 20220301.et
- 20220301.eu
- 20220301.ext
- 20220301.fa
- 20220301.ff
- 20220301.fi
- 20220301.fiu-vro
- 20220301.fj
- 20220301.fo
- 20220301.fr
- 20220301.frp
- 20220301.frr
- 20220301.fur
- 20220301.fy
- 20220301.ga
- 20220301.gag
- 20220301.gan
- 20220301.gd
- 20220301.gl
- 20220301.glk
- 20220301.gn
- 20220301.gom
- 20220301.gor
- 20220301.got
- 20220301.gu
- 20220301.gv
- 20220301.ha
- 20220301.hak
- 20220301.haw
- 20220301.he
- 20220301.hi
- 20220301.hif
- 20220301.ho
- 20220301.hr
- 20220301.hsb
- 20220301.ht
- 20220301.hu
- 20220301.hy
- 20220301.ia
- 20220301.id
- 20220301.ie
- 20220301.ig
- 20220301.ii
- 20220301.ik
- 20220301.ilo
- 20220301.inh
- 20220301.io
- 20220301.is
- 20220301.it
- 20220301.iu
- 20220301.ja
- 20220301.jam
- 20220301.jbo
- 20220301.jv
- 20220301.ka
- 20220301.kaa
- 20220301.kab
- 20220301.kbd
- 20220301.kbp
- 20220301.kg
- 20220301.ki
- 20220301.kj
- 20220301.kk
- 20220301.kl
- 20220301.km
- 20220301.kn
- 20220301.ko
- 20220301.koi
- 20220301.krc
- 20220301.ks
- 20220301.ksh
- 20220301.ku
- 20220301.kv
- 20220301.kw
- 20220301.ky
- 20220301.la
- 20220301.lad
- 20220301.lb
- 20220301.lbe
- 20220301.lez
- 20220301.lfn
- 20220301.lg
- 20220301.li
- 20220301.lij
- 20220301.lmo
- 20220301.ln
- 20220301.lo
- 20220301.lrc
- 20220301.lt
- 20220301.ltg
- 20220301.lv
- 20220301.mai
- 20220301.map-bms
- 20220301.mdf
- 20220301.mg
- 20220301.mh
- 20220301.mhr
- 20220301.mi
- 20220301.min
- 20220301.mk
- 20220301.ml
- 20220301.mn
- 20220301.mr
- 20220301.mrj
- 20220301.ms
- 20220301.mt
- 20220301.mus
- 20220301.mwl
- 20220301.my
- 20220301.myv
- 20220301.mzn
- 20220301.na
- 20220301.nah
- 20220301.nap
- 20220301.nds
- 20220301.nds-nl
- 20220301.ne
- 20220301.new
- 20220301.ng
- 20220301.nl
- 20220301.nn
- 20220301.no
- 20220301.nov
- 20220301.nrm
- 20220301.nso
- 20220301.nv
- 20220301.ny
- 20220301.oc
- 20220301.olo
- 20220301.om
- 20220301.or
- 20220301.os
- 20220301.pa
- 20220301.pag
- 20220301.pam
- 20220301.pap
- 20220301.pcd
- 20220301.pdc
- 20220301.pfl
- 20220301.pi
- 20220301.pih
- 20220301.pl
- 20220301.pms
- 20220301.pnb
- 20220301.pnt
- 20220301.ps
- 20220301.pt
- 20220301.qu
- 20220301.rm
- 20220301.rmy
- 20220301.rn
- 20220301.ro
- 20220301.roa-rup
- 20220301.roa-tara
- 20220301.ru
- 20220301.rue
- 20220301.rw
- 20220301.sa
- 20220301.sah
- 20220301.sat
- 20220301.sc
- 20220301.scn
- 20220301.sco
- 20220301.sd
- 20220301.se
- 20220301.sg
- 20220301.sh
- 20220301.si
- 20220301.simple
- 20220301.sk
- 20220301.sl
- 20220301.sm
- 20220301.sn
- 20220301.so
- 20220301.sq
- 20220301.sr
- 20220301.srn
- 20220301.ss
- 20220301.st
- 20220301.stq
- 20220301.su
- 20220301.sv
- 20220301.sw
- 20220301.szl
- 20220301.ta
- 20220301.tcy
- 20220301.te
- 20220301.tet
- 20220301.tg
- 20220301.th
- 20220301.ti
- 20220301.tk
- 20220301.tl
- 20220301.tn
- 20220301.to
- 20220301.tpi
- 20220301.tr
- 20220301.ts
- 20220301.tt
- 20220301.tum
- 20220301.tw
- 20220301.ty
- 20220301.tyv
- 20220301.udm
- 20220301.ug
- 20220301.uk
- 20220301.ur
- 20220301.uz
- 20220301.ve
- 20220301.vec
- 20220301.vep
- 20220301.vi
- 20220301.vls
- 20220301.vo
- 20220301.wa
- 20220301.war
- 20220301.wo
- 20220301.wuu
- 20220301.xal
- 20220301.xh
- 20220301.xmf
- 20220301.yi
- 20220301.yo
- 20220301.za
- 20220301.zea
- 20220301.zh
- 20220301.zh-classical
- 20220301.zh-min-nan
- 20220301.zh-yue
- 20220301.zu
---
# Dataset Card for Wikipedia
This repo is a fork of the original Hugging Face Wikipedia repo [here](https://huggingface.co/datasets/wikipedia).
The difference is that this fork does away with the need for `apache-beam`, and this fork is very fast if you have a lot of CPUs on your machine.
It will use all CPUs available to create a clean Wikipedia pretraining dataset. It takes less than an hour to process all of English wikipedia on a GCP n1-standard-96.
This fork is also used in the [OLM Project](https://github.com/huggingface/olm-datasets) to pull and process up-to-date wikipedia snapshots.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markup and unwanted sections (references, etc.).
The articles are parsed using the ``mwparserfromhell`` tool, and we use ``multiprocess`` for parallelization.
To load this dataset you need to install these first:
```
pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
```
Then, you can load any subset of Wikipedia per language and per date this way:
```python
from datasets import load_dataset
load_dataset("olm/wikipedia", language="en", date="20220920")
```
You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).
### Supported Tasks and Leaderboards
The dataset is generally used for Language Modeling.
### Languages
You can find the list of languages [here](https://meta.wikimedia.org/wiki/List_of_Wikipedias).
## Dataset Structure
### Data Instances
An example looks as follows:
```
{'id': '1',
'url': 'https://simple.wikipedia.org/wiki/April',
'title': 'April',
'text': 'April is the fourth month...'
}
```
### Data Fields
The data fields are the same among all configurations:
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
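A minimal sketch of accessing these fields once a snapshot is loaded (the `simple`/`20220920` combination here is illustrative and assumes that snapshot exists; processing a dump locally can take a while):
```python
from datasets import load_dataset

# Load the Simple English snapshot and inspect one article.
ds = load_dataset("olm/wikipedia", language="simple", date="20220920", split="train")

article = ds[0]
print(article["id"], article["title"], article["url"])
print(article["text"][:200])  # first 200 characters of the cleaned article text
```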
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
```
| [
-0.8343515396118164,
-0.6500068306922913,
0.14450746774673462,
0.14165765047073364,
-0.26406824588775635,
-0.27355891466140747,
-0.4074021279811859,
-0.47842302918434143,
0.5661401152610779,
0.33499258756637573,
-0.7434722185134888,
-0.7777888774871826,
-0.4692016541957855,
0.2279323935508... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1850000-1900000 | tomekkorbak | 2022-10-04T23:05:22Z | 107 | 0 | null | [
"region:us"
] | 2022-10-04T23:05:22Z | 2022-10-04T23:05:14.000Z | 2022-10-04T23:05:14 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1500000-1550000 | tomekkorbak | 2022-10-04T23:53:18Z | 107 | 0 | null | [
"region:us"
] | 2022-10-04T23:53:18Z | 2022-10-04T23:53:11.000Z | 2022-10-04T23:53:11 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/nfcorpus-pl-qrels | clarin-knext | 2023-06-07T08:10:48Z | 107 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:10:48Z | 2023-06-06T22:44:12.000Z | 2023-06-06T22:44:12 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209920436143875,
-0.9029766917228699,
0.5094642043113708,
0.2354191392660141,
-0.318521112203598,
-0.1491902619600296,
-0.16673962771892548,
-0.4962919354438782,
-0.01896025240421295,
0.41122618317604065,
-0.5503097772598267,
-0.6913566589355469,
-0.4166175127029419,
-0.048304717987775... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jherng/rsna-2023-abdominal-trauma-detection | jherng | 2023-10-10T06:56:40Z | 107 | 2 | null | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:1K<n<10K",
"license:mit",
"region:us"
] | 2023-10-10T06:56:40Z | 2023-09-19T10:10:47.000Z | 2023-09-19T10:10:47 | ---
license: mit
dataset_info:
- config_name: classification
features:
- name: img_path
dtype: string
- name: bowel
dtype:
class_label:
names:
"0": healthy
"1": injury
- name: extravasation
dtype:
class_label:
names:
"0": healthy
"1": injury
- name: kidney
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: liver
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: spleen
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: any_injury
dtype: bool
- name: metadata
struct:
- name: series_id
dtype: int32
- name: patient_id
dtype: int32
- name: incomplete_organ
dtype: bool
- name: aortic_hu
dtype: float32
- name: pixel_representation
dtype: int32
- name: bits_allocated
dtype: int32
- name: bits_stored
dtype: int32
splits:
- name: train
num_bytes: 802231
num_examples: 4239
- name: test
num_bytes: 89326
num_examples: 472
download_size: 96729254048
dataset_size: 891557
- config_name: classification-with-mask
features:
- name: img_path
dtype: string
- name: seg_path
dtype: string
- name: bowel
dtype:
class_label:
names:
"0": healthy
"1": injury
- name: extravasation
dtype:
class_label:
names:
"0": healthy
"1": injury
- name: kidney
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: liver
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: spleen
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: any_injury
dtype: bool
- name: metadata
struct:
- name: series_id
dtype: int32
- name: patient_id
dtype: int32
- name: incomplete_organ
dtype: bool
- name: aortic_hu
dtype: float32
- name: pixel_representation
dtype: int32
- name: bits_allocated
dtype: int32
- name: bits_stored
dtype: int32
splits:
- name: train
num_bytes: 58138
num_examples: 185
- name: test
num_bytes: 6600
num_examples: 21
download_size: 4196738529
dataset_size: 64738
- config_name: segmentation
features:
- name: img_path
dtype: string
- name: seg_path
dtype: string
- name: metadata
struct:
- name: series_id
dtype: int32
- name: patient_id
dtype: int32
- name: incomplete_organ
dtype: bool
- name: aortic_hu
dtype: float32
- name: pixel_representation
dtype: int32
- name: bits_allocated
dtype: int32
- name: bits_stored
dtype: int32
splits:
- name: train
num_bytes: 50714
num_examples: 185
- name: test
num_bytes: 5757
num_examples: 21
download_size: 4196631843
dataset_size: 56471
task_categories:
- image-classification
- image-segmentation
pretty_name: RSNA 2023 Abdominal Trauma Detection (Preprocessed)
size_categories:
- 1K<n<10K
---
# Dataset Card for RSNA 2023 Abdominal Trauma Detection (Preprocessed)
## Dataset Description
- **Homepage:** [https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection](https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection)
- **Source:** [https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data](https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data)
### Dataset Summary
This dataset is the preprocessed version of the dataset from [RSNA 2023 Abdominal Trauma Detection Kaggle Competition](https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data).
It is tailored for segmentation and classification tasks. It contains 3 different configs as described below:
- **classification**:
- 4711 instances where each instance includes a CT scan in NIfTI format, target labels, and its relevant metadata.
- **segmentation**:
- 206 instances where each instance includes a CT scan in NIfTI format, a segmentation mask in NIfTI format, and its relevant metadata.
- **classification-with-mask**:
- 206 instances where each instance includes a CT scan in NIfTI format, a segmentation mask in NIfTI format, target labels, and its relevant metadata.
All CT scans and segmentation masks have already been resampled to a voxel spacing of (2.0, 2.0, 3.0), hence the reduced file sizes.
### Usage
```python
from datasets import load_dataset
# Classification dataset
rsna_cls_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", streaming=True) # "classification" is the default configuration
rsna_cls_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification", streaming=True) # download dataset on-demand and in-memory (no caching)
rsna_cls_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification", streaming=False) # download dataset and cache locally (~90.09 GiB)
rsna_cls_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification", streaming=True, test_size=0.05, random_state=42) # specify split size for train-test split
# Classification dataset with segmentation masks
rsna_clsmask_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification-with-mask", streaming=True) # download dataset on-demand and in-memory (no caching)
rsna_clsmask_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification-with-mask", streaming=False) # download dataset and cache locally (~3.91 GiB)
rsna_clsmask_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification-with-mask", streaming=False, test_size=0.05, random_state=42) # specify split size for train-test split
# Segmentation dataset
rsna_seg_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "segmentation", streaming=True) # download dataset on-demand and in-memory (no caching)
rsna_seg_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "segmentation", streaming=False) # download dataset and cache locally (~3.91 GiB)
rsna_seg_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "segmentation", streaming=True, test_size=0.1, random_state=42) # specify split size for train-test split
# Get the dataset splits
train_rsna_cls_ds = rsna_cls_ds["train"]; test_rsna_cls_ds = rsna_cls_ds["test"]
train_rsna_clsmask_ds = rsna_clsmask_ds["train"]; test_rsna_clsmask_ds = rsna_clsmask_ds["test"]
train_rsna_seg_ds = rsna_seg_ds["train"]; test_rsna_seg_ds = rsna_seg_ds["test"]
# Tip: speed up downloads with multiprocessing
rsna_cls_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", streaming=False, num_proc=8) # num_proc: number of CPU cores used for loading the dataset
```
## Dataset Structure
### Data Instances
#### Configuration 1: classification
- **Size of downloaded dataset files:** 90.09 GiB
An example of an instance in the 'classification' configuration looks as follows:
```json
{
"img_path": "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/train_images/25899/21872.nii.gz",
"bowel": 0,
"extravasation": 0,
"kidney": 0,
"liver": 0,
"spleen": 0,
"any_injury": false,
"metadata": {
"series_id": 21872,
"patient_id": 25899,
"incomplete_organ": false,
"aortic_hu": 113.0,
"pixel_representation": 0,
"bits_allocated": 16,
"bits_stored": 12
}
}
```
#### Configuration 2: segmentation
- **Size of downloaded dataset files:** 3.91 GiB
An example of an instance in the 'segmentation' configuration looks as follows:
```json
{
"img_path": "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/train_images/4791/4622.nii.gz",
"seg_path": "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/segmentations/4622.nii.gz",
"metadata": {
"series_id": 4622,
"patient_id": 4791,
"incomplete_organ": false,
"aortic_hu": 223.0,
"pixel_representation": 1,
"bits_allocated": 16,
"bits_stored": 16
}
}
```
#### Configuration 3: classification-with-mask
- **Size of downloaded dataset files:** 3.91 GiB
An example of an instance in the 'classification-with-mask' configuration looks as follows:
```json
{
"img_path": "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/train_images/4791/4622.nii.gz",
"seg_path": "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/segmentations/4622.nii.gz",
"bowel": 0,
"extravasation": 0,
"kidney": 0,
"liver": 1,
"spleen": 1,
"any_injury": true,
"metadata": {
"series_id": 4622,
"patient_id": 4791,
"incomplete_organ": false,
"aortic_hu": 223.0,
"pixel_representation": 1,
"bits_allocated": 16,
"bits_stored": 16
}
}
```
### Data Fields
The data fields for all configurations are as follows:
- `img_path`: a `string` feature representing the path to the CT scan in NIfTI format.
- `seg_path`: a `string` feature representing the path to the segmentation mask in NIfTI format (only for 'segmentation' and 'classification-with-mask' configurations).
- `bowel`, `extravasation`, `kidney`, `liver`, `spleen`: Class label features indicating the condition of respective organs.
- `any_injury`: a `bool` feature indicating the presence of any injury.
- `metadata`: a dictionary feature containing metadata information with the following fields:
- `series_id`: an `int32` feature.
- `patient_id`: an `int32` feature.
- `incomplete_organ`: a `bool` feature.
- `aortic_hu`: a `float32` feature.
- `pixel_representation`: an `int32` feature.
- `bits_allocated`: an `int32` feature.
- `bits_stored`: an `int32` feature.
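The `img_path` and `seg_path` values are URLs to gzipped NIfTI files, so a volume has to be downloaded before it can be read. A minimal sketch, assuming `nibabel` and `requests` are available (neither is part of this dataset; the URL reuses the `img_path` from the 'classification' instance above):
```python
import tempfile

import nibabel as nib
import requests

# img_path taken from the 'classification' example instance above
url = "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/train_images/25899/21872.nii.gz"

with tempfile.NamedTemporaryFile(suffix=".nii.gz") as tmp:
    tmp.write(requests.get(url).content)  # download the gzipped NIfTI volume
    tmp.flush()
    volume = nib.load(tmp.name).get_fdata()  # CT volume as a NumPy array

print(volume.shape)  # spatial shape after the (2.0, 2.0, 3.0) resampling
```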
### Data Splits
Default split:
- 0.9:0.1 with random_state = 42
| Configuration Name | Train (n_samples) | Test (n_samples) |
| ------------------------ | ----------------: | ---------------: |
| classification | 4239 | 472 |
| segmentation | 185 | 21 |
| classification-with-mask | 185 | 21 |
Modify the split proportion:
```python
rsna_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification", test_size=0.05, random_state=42)
```
## Additional Information
### Citation Information
- Preprocessed dataset:
```
@InProceedings{huggingface:dataset,
title = {RSNA 2023 Abdominal Trauma Detection Dataset (Preprocessed)},
author={Hong Jia Herng},
year={2023}
}
```
- Original dataset:
```
@misc{rsna-2023-abdominal-trauma-detection,
author = {Errol Colak, Hui-Ming Lin, Robyn Ball, Melissa Davis, Adam Flanders, Sabeena Jalal, Kirti Magudia, Brett Marinelli, Savvas Nicolaou, Luciano Prevedello, Jeff Rudie, George Shih, Maryam Vazirabad, John Mongan},
title = {RSNA 2023 Abdominal Trauma Detection},
publisher = {Kaggle},
year = {2023},
url = {https://kaggle.com/competitions/rsna-2023-abdominal-trauma-detection}
}
```
| [
-0.5718640685081482,
-0.12582868337631226,
0.32089707255363464,
0.11359654366970062,
-0.7924335598945618,
-0.04310417175292969,
0.17563700675964355,
-0.49675267934799194,
0.566893994808197,
0.3339497745037079,
-0.5708956718444824,
-0.64910489320755,
-0.5500785112380981,
0.4286858141422272,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qgyd2021/chinese_chitchat | qgyd2021 | 2023-09-22T08:39:11Z | 107 | 11 | null | [
"size_categories:100M<n<1B",
"language:zh",
"license:apache-2.0",
"chitchat",
"region:us"
] | 2023-09-22T08:39:11Z | 2023-09-22T02:24:54.000Z | 2023-09-22T02:24:54 | ---
license: apache-2.0
language:
- zh
tags:
- chitchat
size_categories:
- 100M<n<1B
---
## Chinese Chitchat Dataset
The `role` field takes one of three values: "unknown", "human", "assistant".
The datasets were collected and organized from the web as follows:
| Dataset | Original data / project link | Samples | Corpus description | Alternative download link |
| :--- | :---: | :---: | :---: | :---: |
| ChatterBot | [ChatterBot](https://github.com/gunthercox/ChatterBot); [chatterbot-corpus](https://github.com/gunthercox/chatterbot-corpus) | 560 | Categorized by type; relatively high quality | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
| douban | [Douban Conversation Corpus](https://github.com/MarkWuNLP/MultiTurnResponseSelection) | 3.52M | From a paper by Beihang University and Microsoft; relatively low noise; multi-turn (7.6 turns on average) | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
| ptt | [PTT Chinese corpus](https://github.com/zake7749/Gossiping-Chinese-Corpus) | 770K | Open-source project; the Gossiping board of Taiwan's PTT forum; Traditional Chinese; fairly colloquial; some noise | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
| qingyun | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao | 100K | Qingyun corpus; fairly good quality; colloquial | |
| subtitle | [TV and film dialogue corpus](https://github.com/aceimnorstuvwxz/dgk_lost_conv) | 2.74M | Subtitles crawled from movies and American TV shows; some noise; loosely structured dialogue; speakers cannot be matched to turns; multi-turn (5.3 turns on average) | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
| tieba | [Tieba forum reply corpus](https://pan.baidu.com/s/1mUknfwy1nhSM7XzH8xi7gQ); password: i4si | 2.32M | Multi-turn; noisy | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
| weibo | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao | 4.43M | From a Huawei paper | |
| xiaohuangji | [Xiaohuangji corpus](https://github.com/candlewill/Dialog_Corpus) | 450K | Corpus from the original Renren project; contains some inappropriate dialogue; a small amount of noise | [Aliyun Drive](https://www.aliyundrive.com/s/qXBdAYtz5j5); extraction code: 81ao |
<details>
<summary>Data sources referenced (expand to view)</summary>
<pre>
<code>
https://github.com/codemayq/chinese_chatbot_corpus
https://github.com/yangjianxin1/GPT2-chitchat
</code>
</pre>
</details>
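As a hedged loading sketch (the subset name `qingyun` is an assumption inferred from the table above; the actual configuration names and splits may differ):
```python
from datasets import load_dataset

# Hypothetical: load one subset by name; check the repository for the
# actual configuration names and available splits.
ds = load_dataset("qgyd2021/chinese_chitchat", "qingyun", split="train")
print(ds[0])
```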
| [
-0.35187745094299316,
-0.8298754096031189,
0.2660330533981323,
0.5405294895172119,
-0.23786355555057526,
-0.03267274051904678,
-0.20820897817611694,
-0.3951980471611023,
0.7067930698394775,
0.3414386808872223,
-0.47030508518218994,
-0.48785343766212463,
-0.4678329825401306,
0.1822934150695... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FreedomIntelligence/Huatuo26M-GPTShine | FreedomIntelligence | 2023-10-16T07:16:30Z | 107 | 5 | null | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"medical",
"arxiv:2305.01526",
"region:us"
] | 2023-10-16T07:16:30Z | 2023-10-11T09:08:49.000Z | 2023-10-11T09:08:49 | ---
license: apache-2.0
task_categories:
- text-classification
- question-answering
- conversational
- text-generation
language:
- zh
tags:
- medical
pretty_name: Huatuo26M_v2
size_categories:
- 100K<n<1M
---
# Huatuo26M-GPTShine Dataset 📚
## Table of Contents 🗂
- [Dataset Description](#dataset-description) 📝
- [Dataset Information](#dataset-information) ℹ️
- [Data Distribution](#data-distribution) 📊
- [Usage](#usage) 🔧
- [Citation](#citation) 📖
## Dataset Description 📝
Huatuo26M-GPTShine is a refined and optimized dataset based on the Huatuo26M dataset, which has undergone multiple rounds of purification and rewriting. It offers more data dimensions and higher data quality. We welcome you to try it.
## Dataset Information ℹ️
- **Dataset Name:** Huatuo26M-GPTShine
- **Version:** _[0.0.1]_
- **Size:** _[178k]_
- **Language:** _[Chinese]_
### Abstract 📄
We collected 26 million original QA pairs in the medical field, but they were not easy to use and carried some risk because they were obtained from Common Crawl. We therefore took the following steps on the original 26 million records: deduplication, cleaning, extraction of high-frequency questions, scoring of the high-frequency questions with ChatGPT, and filtering to keep only the high-scoring questions. We then used ChatGPT to rewrite the answers to the high-scoring questions, resulting in a completely refined dataset. Please refer to our paper for the specific processing methods.
### Data Collection 🕵️♂️
Our question data was collected from the internet, and we extracted the high-frequency portion. The answers were rewritten by ChatGPT using the original answers as a reference, and manual evaluation judged their quality to be better than the originals. Therefore, please feel free to use our dataset with confidence.
### Preprocessing/Cleaning 🧹
The dataset has been processed to remove duplicates and cleaned to ensure high-quality data. It was then refined using OpenAI's ChatGPT, which helped in enhancing the overall quality of the dataset.
## Data Distribution 📊
This section provides a visual overview of the distribution of data in the Huatuo26M-GPTShine dataset.
**Data Categories Bar Chart:** 
This chart represents the distribution of data categories in the dataset.
**Top 20 Associated Diseases Table:**
| topn | disease | nums | ratio |
| ---- | ---------- | ---- | ------- |
| 1 | 白癜风 (vitiligo) | 3308 | 1.8615% |
| 2 | 人流 (induced abortion) | 2686 | 1.5115% |
| 3 | 感冒 (common cold) | 2371 | 1.3342% |
| 4 | 癫痫 (epilepsy) | 2217 | 1.2476% |
| 5 | 痔疮 (hemorrhoids) | 2134 | 1.2009% |
| 6 | 疼痛 (pain) | 1842 | 1.0366% |
| 7 | 咳嗽 (cough) | 1799 | 1.0124% |
| 8 | 前列腺炎 (prostatitis) | 1564 | 0.8801% |
| 9 | 尖锐湿疣 (genital warts) | 1516 | 0.8531% |
| 10 | 肺癌 (lung cancer) | 1408 | 0.7923% |
| 11 | 出血 (bleeding) | 1400 | 0.7878% |
| 12 | 鼻炎 (rhinitis) | 1370 | 0.7709% |
| 13 | 肝癌 (liver cancer) | 1354 | 0.7619% |
| 14 | 糖尿病 (diabetes) | 1348 | 0.7586% |
| 15 | 过敏性鼻炎 (allergic rhinitis) | 1295 | 0.7287% |
| 16 | 发烧 (fever) | 1265 | 0.7119% |
| 17 | 乙肝 (hepatitis B) | 1232 | 0.6933% |
| 18 | 便秘 (constipation) | 1214 | 0.6832% |
| 19 | 甲亢 (hyperthyroidism) | 1178 | 0.6629% |
| 20 | 脱发 (hair loss) | 1173 | 0.6601% |
This table shows the top 20 diseases associated with the data entries in the dataset, along with their respective data entry counts and proportions.
## Usage 🔧
```python
from datasets import load_dataset
dataset = load_dataset("FreedomIntelligence/Huatuo26M-GPTShine")
```
## Citation 📖
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
Please note that this dataset is distributed "AS IS" without any warranty, express or implied, from the provider. Users should cite the dataset appropriately and respect any licensing or usage restrictions. | [
-0.29327842593193054,
-0.42599233984947205,
0.24033184349536896,
-0.0016239540418609977,
-0.5303438305854797,
-0.34459245204925537,
-0.007473668083548546,
-0.27509021759033203,
0.49405714869499207,
0.4508601129055023,
-0.21280455589294434,
-0.8521642088890076,
-0.6563125848770142,
0.166732... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Eitanli/recipes | Eitanli | 2023-10-24T12:40:20Z | 107 | 0 | null | [
"region:us"
] | 2023-10-24T12:40:20Z | 2023-10-24T12:40:15.000Z | 2023-10-24T12:40:15 | ---
dataset_info:
features:
- name: recipe
dtype: string
splits:
- name: train
num_bytes: 105767040
num_examples: 74465
download_size: 53711472
dataset_size: 105767040
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "recipes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4229365289211273,
-0.25794169306755066,
0.29578179121017456,
0.2921311557292938,
0.12394590675830841,
-0.051726385951042175,
0.2774331867694855,
-0.07092637568712234,
1.0288658142089844,
0.7654232978820801,
-0.9224423170089722,
-0.8470353484153748,
-0.6780380010604858,
-0.19773690402507... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zyxleo/cord_donut_multitask | zyxleo | 2023-11-07T04:44:58Z | 107 | 0 | null | [
"region:us"
] | 2023-11-07T04:44:58Z | 2023-11-06T19:32:25.000Z | 2023-11-06T19:32:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: task
dtype: string
- name: image_path
dtype: string
- name: ground_truth
dtype: string
- name: labels
sequence: int64
- name: input_ids
sequence: int64
splits:
- name: train
num_bytes: 1260759
num_examples: 800
- name: test
num_bytes: 93059
num_examples: 100
- name: validation
num_bytes: 86619
num_examples: 100
download_size: 299877
dataset_size: 1440437
---
# Dataset Card for "cord_donut_multitask"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3537217974662781,
-0.29676195979118347,
0.1661471128463745,
0.35585445165634155,
-0.07130936533212662,
0.4610642194747925,
0.0014042322291061282,
0.09284590184688568,
1.0136469602584839,
0.38925716280937195,
-0.9516124129295349,
-0.6182273030281067,
-0.5581470727920532,
-0.2517781257629... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
luizlzg/prefeitura_rj | luizlzg | 2023-11-06T21:55:50Z | 107 | 0 | null | [
"region:us"
] | 2023-11-06T21:55:50Z | 2023-11-06T21:54:02.000Z | 2023-11-06T21:54:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: prefeitura_treino*
- split: test
path: prefeitura_teste*
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
en0c/km-shorts | en0c | 2023-11-11T17:07:10Z | 107 | 0 | null | [
"region:us"
] | 2023-11-11T17:07:10Z | 2023-11-11T17:06:45.000Z | 2023-11-11T17:06:45 | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 173849408.0
num_examples: 45
download_size: 157813790
dataset_size: 173849408.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "km-shorts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7697494029998779,
-0.09820511192083359,
0.5523427128791809,
0.19253002107143402,
-0.6352612376213074,
0.1259443759918213,
0.21656839549541473,
-0.08821713179349899,
0.9326910376548767,
0.40758460760116577,
-0.8672237396240234,
-1.00394606590271,
-0.7588796019554138,
-0.2835003137588501,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nj1867/roof-images | nj1867 | 2023-11-23T10:26:02Z | 107 | 0 | roof-21 | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100<n<500",
"source_datasets:extended|other-roof",
"language:en",
"license:unknown",
"region:u... | 2023-11-23T10:26:02Z | 2023-11-20T05:38:57.000Z | 2023-11-20T05:38:57 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100<n<500
source_datasets:
- extended|other-roof
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: roof-21
pretty_name: Roof-21
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': atlas_pinnacle_wwood_shade
'1': atlas_pinnacle_wwood_sun
'2': cteed_maxdef_wwood_shade
'3': cteed_maxdef_wwood_sun
'4': cteed_wwood_shade
'5': cteed_wwood_sun
'6': gaf_mission_brown_shade
'7': gaf_mission_brown_sun
'8': gaf_pewter_gray_shade
'9': gaf_pewter_gray_sun
'10': gaf_wwood_shade
'11': gaf_wwood_sun
'12': iko_cornerstone_shade
'13': iko_cornerstone_sun
'14': malarkey_wwood_shade
'15': malarkey_wwood_sun
'16': oc_driftwood_shade
'17': oc_driftwood_sun
'18': tamko_wwood_shade
'19': tamko_wwood_sun
splits:
- name: train
num_examples: 198
- name: validation
num_examples: 88
---
# Dataset Card for Roof-21
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Roof-21 Dataset](https://huggingface.co/datasets/nj1867/roof-images)
- **Repository:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset consists of 20 roof categories, with 286 images. For each class, 5-10 manually reviewed test images are provided as well as 15-20 training images. On purpose, the training images were not cleaned, and thus still contain some amount of noise. This comes mostly in the form of intense colors and sometimes wrong labels.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of a roof into one of 20 classes.
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=384x512 at 0x276021C5EB8>,
  'label': 13
}
```
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `label`: an `int` classification label.
<details>
<summary>Class Label Mappings</summary>
```json
{
"atlas_pinnacle_wwood_shade": 0,
"atlas_pinnacle_wwood_sun": 1,
"cteed_maxdef_wwood_shade": 2,
"cteed_maxdef_wwood_sun": 3,
"cteed_wwood_shade": 4,
"cteed_wwood_sun": 5,
"gaf_mission_brown_shade": 6,
"gaf_mission_brown_sun": 7,
"gaf_pewter_gray_shade": 8,
"gaf_pewter_gray_sun": 9,
"gaf_wwood_shade": 10,
"gaf_wwood_sun": 11,
"iko_cornerstone_shade": 12,
"iko_cornerstone_sun": 13,
"malarkey_wwood_shade": 14,
"malarkey_wwood_sun": 15,
"oc_driftwood_shade": 16,
"oc_driftwood_sun": 17,
"tamko_wwood_shade": 18,
"tamko_wwood_sun": 19,
}
```
</details>
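A minimal usage sketch (assuming the dataset loads directly with `datasets.load_dataset`; this is not shown elsewhere on this card):
```python
from datasets import load_dataset

ds = load_dataset("nj1867/roof-images", split="train")
label_names = ds.features["label"].names  # maps label ints to the class names above

sample = ds[0]               # query the index first, then access the decoded image
print(sample["image"].size)  # PIL image size, e.g. (384, 512)
print(label_names[sample["label"]])
```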
### Data Splits
| |train|validation|
|----------|----:|---------:|
|# of examples|198|88|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
### Citation Information
### Contributions
| [
-0.5905083417892456,
-0.44239407777786255,
-0.13272418081760406,
0.24748466908931732,
-0.11021821200847626,
0.12868547439575195,
-0.19967582821846008,
-0.45002198219299316,
0.33879756927490234,
0.4224488139152527,
-0.7037125825881958,
-1.0640960931777954,
-0.6781069040298462,
0.24713613092... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MLCommons/ml_spoken_words | MLCommons | 2022-12-06T11:11:02Z | 106 | 16 | null | [
"task_categories:audio-classification",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:extended|common_voice",
"language:ar",
"language:as",
"language:br",
"language:ca",
"language:cnh",
"langu... | 2022-12-06T11:11:02Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- other
language:
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fr
- fy
- ga
- gn
- ha
- ia
- id
- it
- ka
- ky
- lt
- lv
- mn
- mt
- nl
- or
- pl
- pt
- rm
- ro
- ru
- rw
- sah
- sk
- sl
- sv
- ta
- tr
- tt
- uk
- vi
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- extended|common_voice
task_categories:
- audio-classification
task_ids: []
pretty_name: Multilingual Spoken Words
language_bcp47:
- fy-NL
- ga-IE
- rm-sursilv
- rm-vallader
- sv-SE
- zh-CN
tags:
- other-keyword-spotting
---
# Dataset Card for Multilingual Spoken Words
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/multilingual-spoken-words/
- **Repository:** https://github.com/harvard-edge/multilingual_kws
- **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken
words in 50 languages collectively spoken by over 5 billion people, for academic
research and commercial applications in keyword spotting and spoken term search,
licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords,
totaling 23.4 million 1-second spoken examples (over 6,000 hours). The dataset
has many use cases, ranging from voice-enabled consumer devices to call center
automation. This dataset is generated by applying forced alignment on crowd-sourced sentence-level
audio to produce per-word timing estimates for extraction.
All alignments are included in the dataset.
Data is provided in two formats: `wav` (16KHz) and `opus` (48KHz). Default configurations look like
`"{lang}_{format}"`, so to load, for example, Tatar in wav format do:
```python
ds = load_dataset("MLCommons/ml_spoken_words", "tt_wav")
```
To download multiple languages in a single dataset pass list of languages to `languages` argument:
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"])
```
To download a specific format, pass it to the `format` argument (the default format is `wav`):
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"], format="opus")
```
Note that each time you provide a different set of languages, examples are generated from scratch
even if you have already requested some of those languages before,
because a custom configuration is created each time (the data is **not** redownloaded, though).
### Supported Tasks and Leaderboards
Keyword spotting, Spoken term search
### Languages
The dataset is multilingual. To specify several languages to download, pass a list of them to the
`languages` argument:
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"])
```
The dataset contains data for the following languages:
Low-resourced (<10 hours):
* Arabic (0.1G, 7.6h)
* Assamese (0.9M, 0.1h)
* Breton (69M, 5.6h)
* Chuvash (28M, 2.1h)
* Chinese (zh-CN) (42M, 3.1h)
* Dhivehi (0.7M, 0.04h)
* Frisian (0.1G, 9.6h)
* Georgian (20M, 1.4h)
* Guarani (0.7M, 1.3h)
* Greek (84M, 6.7h)
* Hakha Chin (26M, 0.1h)
* Hausa (90M, 1.0h)
* Interlingua (58M, 4.0h)
* Irish (38M, 3.2h)
* Latvian (51M, 4.2h)
* Lithuanian (21M, 0.46h)
* Maltese (88M, 7.3h)
* Oriya (0.7M, 0.1h)
* Romanian (59M, 4.5h)
* Sakha (42M, 3.3h)
* Slovenian (43M, 3.0h)
* Slovak (31M, 1.9h)
* Sursilvan (61M, 4.8h)
* Tamil (8.8M, 0.6h)
* Vallader (14M, 1.2h)
* Vietnamese (1.2M, 0.1h)
Medium-resourced (>10 & <100 hours):
* Czech (0.3G, 24h)
* Dutch (0.8G, 70h)
* Estonian (0.2G, 19h)
* Esperanto (1.3G, 77h)
* Indonesian (0.1G, 11h)
* Kyrgyz (0.1G, 12h)
* Mongolian (0.1G, 12h)
* Portuguese (0.7G, 58h)
* Swedish (0.1G, 12h)
* Tatar (4G, 30h)
* Turkish (1.3G, 29h)
* Ukrainian (0.2G, 18h)
High-resourced (>100 hours):
* Basque (1.7G, 118h)
* Catalan (8.7G, 615h)
* English (26G, 1957h)
* French (9.3G, 754h)
* German (14G, 1083h)
* Italian (2.2G, 155h)
* Kinyarwanda (6.1G, 422h)
* Persian (4.5G, 327h)
* Polish (1.8G, 130h)
* Russian (2.1G, 137h)
* Spanish (4.9G, 349h)
* Welsh (4.5G, 108h)
## Dataset Structure
### Data Instances
```python
{'file': 'абзар_common_voice_tt_17737010.opus',
'is_valid': True,
'language': 0,
'speaker_id': '687025afd5ce033048472754c8d2cb1cf8a617e469866bbdb3746e2bb2194202094a715906f91feb1c546893a5d835347f4869e7def2e360ace6616fb4340e38',
'gender': 0,
'keyword': 'абзар',
'audio': {'path': 'абзар_common_voice_tt_17737010.opus',
'array': array([2.03458695e-34, 2.03458695e-34, 2.03458695e-34, ...,
2.03458695e-34, 2.03458695e-34, 2.03458695e-34]),
'sampling_rate': 48000}}
```
### Data Fields
* file: string, relative audio path inside the archive
* is_valid: if a sample is valid
* language: language of an instance. Makes sense only when providing multiple languages to the
dataset loader (for example, `load_dataset("ml_spoken_words", languages=["ar", "tt"])`)
* speaker_id: unique id of a speaker. Can be "NA" if an instance is invalid
* gender: speaker gender. Can be one of `["MALE", "FEMALE", "OTHER", "NAN"]`
* keyword: word spoken in a current sample
* audio: a dictionary containing the relative path to the audio file,
the decoded audio array, and the sampling rate.
Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically
decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling
a large number of audio files might take a significant amount of time.
Thus, it is important to first query the sample index before the "audio" column,
i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
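For illustration, a minimal access sketch following this advice (using the `tt_wav` configuration shown earlier in this card):
```python
from datasets import load_dataset

ds = load_dataset("MLCommons/ml_spoken_words", "tt_wav", split="train")

sample = ds[0]            # decodes only this one audio file
audio = sample["audio"]   # {"path": ..., "array": ..., "sampling_rate": ...}
print(sample["keyword"], audio["sampling_rate"], audio["array"].shape)

# By contrast, ds["audio"][0] would decode every audio file in the split first.
```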
### Data Splits
The data for each language is split into train / validation / test parts.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data comes from the Common Voice dataset.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online.
You agree not to attempt to determine the identity of speakers.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and can be used for academic
research and commercial applications in keyword spotting and spoken term search.
### Citation Information
```
@inproceedings{mazumder2021multilingual,
title={Multilingual Spoken Words Corpus},
author={Mazumder, Mark and Chitlangia, Sharad and Banbury, Colby and Kang, Yiping and Ciro, Juan Manuel and Achorn, Keith and Galvez, Daniel and Sabini, Mark and Mattson, Peter and Kanter, David and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
```
### Contributions
Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
| [
-0.40233808755874634,
-0.5066111087799072,
-0.044707734137773514,
0.3544834852218628,
-0.17351099848747253,
-0.007888647727668285,
-0.6699638962745667,
-0.3998652696609497,
0.42231863737106323,
0.398506224155426,
-0.5914279818534851,
-1.0462454557418823,
-0.5703108310699463,
0.338646531105... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mdroth/github-issues | mdroth | 2023-07-26T15:36:13Z | 106 | 0 | null | [
"region:us"
] | 2023-07-26T15:36:13Z | 2022-05-06T08:27:56.000Z | 2022-05-06T08:27:56 | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: labels_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: description
dtype: string
- name: creator
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: open_issues
dtype: int64
- name: closed_issues
dtype: int64
- name: state
dtype: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: due_on
dtype: 'null'
- name: closed_at
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 4103283
num_examples: 300
download_size: 866826
dataset_size: 4103283
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.45796796679496765,
-0.2997816801071167,
0.18261587619781494,
0.22594556212425232,
-0.10234494507312775,
0.23124663531780243,
0.13614393770694733,
-0.12474127858877182,
1.0116897821426392,
0.3888835906982422,
-0.8210154175758362,
-0.6706061959266663,
-0.5103837251663208,
-0.2694440782070... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ArthurBaia/squad_v1_pt_br | ArthurBaia | 2022-11-09T15:34:43Z | 106 | 3 | null | [
"region:us"
] | 2022-11-09T15:34:43Z | 2022-07-14T19:55:08.000Z | 2022-07-14T19:55:08 | This dataset was created by Deep Learning Brasil (www.deeplearningbrasil.com.br). I just published it on the Hugging Face Hub with the intention of sharing it with more people who are training Brazilian Portuguese models. The original link is drive.google.com/file/d/1Q0IaIlv2h2BC468MwUFmUST0EyN7gNkn/view. | [
-0.6669689416885376,
-0.543367326259613,
0.07187741994857788,
0.5713232159614563,
-0.24306319653987885,
0.031828220933675766,
0.072578065097332,
-0.6571632027626038,
0.6215932369232178,
0.6369764804840088,
-0.97087162733078,
-0.5755593180656433,
-0.7344197630882263,
-0.12440282851457596,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1750000-1800000 | tomekkorbak | 2022-10-04T23:02:19Z | 106 | 0 | null | [
"region:us"
] | 2022-10-04T23:02:19Z | 2022-10-04T23:02:12.000Z | 2022-10-04T23:02:12 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1900000-1950000 | tomekkorbak | 2022-10-04T23:19:29Z | 106 | 0 | null | [
"region:us"
] | 2022-10-04T23:19:29Z | 2022-10-04T23:19:21.000Z | 2022-10-04T23:19:21 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1600000-1650000 | tomekkorbak | 2022-10-04T23:57:39Z | 106 | 0 | null | [
"region:us"
] | 2022-10-04T23:57:39Z | 2022-10-04T23:57:32.000Z | 2022-10-04T23:57:32 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1550000-1600000 | tomekkorbak | 2022-10-05T00:02:47Z | 106 | 0 | null | [
"region:us"
] | 2022-10-05T00:02:47Z | 2022-10-05T00:02:35.000Z | 2022-10-05T00:02:35 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fewshot-goes-multilingual/sk_csfd-movie-reviews | fewshot-goes-multilingual | 2022-12-18T21:30:31Z | 106 | 0 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:sk",
"license:cc-by-sa-4.0",
"movie reviews",
"rat... | 2022-12-18T21:30:31Z | 2022-12-18T21:28:17.000Z | 2022-12-18T21:28:17 | ---
annotations_creators:
- crowdsourced
language:
- sk
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: CSFD movie reviews (Slovak)
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- movie reviews
- rating prediction
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for CSFD movie reviews (Slovak)
## Dataset Description
The dataset contains user reviews from the Czech/Slovak movie database website <https://csfd.cz>.
Each review contains text, rating, date, and basic information about the movie (or TV series).
The dataset has 30,000 reviews in total (train + validation + test). The data is balanced - each rating has approximately the same frequency.
## Dataset Features
Each sample contains:
- `review_id`: unique string identifier of the review.
- `rating_str`: string representation of the rating (from "0/5" to "5/5")
- `rating_int`: integer representation of the rating (from 0 to 5)
- `date`: date of publishing the review (just date, no time nor timezone)
- `comment_language`: language of the review (always "sk")
- `comment`: the string of the review
- `item_title`: title of the reviewed item
- `item_year`: publishing year of the item (string, can also be a range)
- `item_kind`: kind of the item - either "film" or "seriál"
- `item_genres`: list of genres of the item
- `item_directors`: list of director names of the item
- `item_screenwriters`: list of screenwriter names of the item
- `item_cast`: list of actors and actresses in the item
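A minimal loading sketch (the dataset id below is this repository's id; the `train` split name is an assumption based on the splits described in the summary):
```python
from datasets import load_dataset

ds = load_dataset("fewshot-goes-multilingual/sk_csfd-movie-reviews", split="train")

example = ds[0]
print(example["rating_str"], example["item_title"], example["item_year"])
print(example["comment"][:200])
```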
## Dataset Source
The data was mined and sampled from the <https://csfd.cz> website.
Make sure to comply with the terms and conditions of the website operator when using the data.
| [
-0.5482544302940369,
-0.32480087876319885,
0.08579442650079727,
0.6790695786476135,
-0.8006359338760376,
0.0397627130150795,
-0.22609013319015503,
-0.024587517604231834,
0.29735544323921204,
0.7653712034225464,
-1.019317865371704,
-1.031762719154358,
-0.39648887515068054,
0.340640813112258... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Francesco/furniture-ngpea | Francesco | 2023-03-30T09:12:40Z | 106 | 1 | null | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | 2023-03-30T09:12:40Z | 2023-03-30T09:12:19.000Z | 2023-03-30T09:12:19 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': furniture
'1': Chair
'2': Sofa
'3': Table
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: furniture-ngpea
tags:
- rf100
---
# Dataset Card for furniture-ngpea
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/furniture-ngpea
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
furniture-ngpea
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time. Thus, it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
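For illustration, a minimal sketch that reads the COCO-style boxes described above and converts them to corner format (the `train` split name is an assumption based on the usual Roboflow layout):
```python
from datasets import load_dataset

ds = load_dataset("Francesco/furniture-ngpea", split="train")
sample = ds[0]

for box, category in zip(sample["objects"]["bbox"], sample["objects"]["category"]):
    x, y, w, h = box                # COCO format: top-left corner plus size
    corners = [x, y, x + w, y + h]  # [x_min, y_min, x_max, y_max]
    print(category, corners)
```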
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/furniture-ngpea
### Citation Information
```
@misc{ furniture-ngpea,
title = { furniture ngpea Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/furniture-ngpea } },
url = { https://universe.roboflow.com/object-detection/furniture-ngpea },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | [
-0.5690040588378906,
-0.6643070578575134,
0.15491122007369995,
-0.17031557857990265,
-0.40677207708358765,
-0.43490317463874817,
-0.08348111808300018,
-0.49658265709877014,
0.2995164394378662,
0.3760870397090912,
-0.6942642331123352,
-0.9728126525878906,
-0.3857669234275818,
0.182962149381... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BelleGroup/train_0.5M_CN | BelleGroup | 2023-04-03T08:11:22Z | 106 | 81 | null | [
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | 2023-04-03T08:11:22Z | 2023-03-31T10:17:49.000Z | 2023-03-31T10:17:49 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
## Contents
Contains about 500k Chinese instruction-following examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Sample
```
{
"instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n",
"input": "",
"output": "“明天的会议在10点开始,记得准时到达。”"
}
```
### Fields:
```
instruction: the instruction
input: the input (empty for all examples in this dataset)
output: the output
```
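A minimal loading sketch (assuming this repository loads directly via `load_dataset` with its id):
```python
from datasets import load_dataset

ds = load_dataset("BelleGroup/train_0.5M_CN", split="train")
example = ds[0]
print(example["instruction"])
print(example["output"])  # "input" is empty for every example in this dataset
```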
## Usage Restrictions
This dataset, and any derivatives generated from it, may only be used for research purposes; commercial use and any other use that could harm society are prohibited.
This dataset does not represent the position, interests, or views of any party, and is unrelated to any kind of claim by any group. This project assumes no liability for any damage or dispute arising from the use of this dataset.
| [
-0.2589781582355499,
-0.6286912560462952,
0.2762785851955414,
0.7829718589782715,
-0.3897298276424408,
-0.3835003077983856,
0.2976597845554352,
-0.20277883112430573,
0.4139735996723175,
0.592616856098175,
-0.8386463522911072,
-1.081526279449463,
-0.7754482626914978,
-0.06471142172813416,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alpayariyak/LLaVA_calculus_handwriting | alpayariyak | 2023-05-24T20:29:57Z | 106 | 3 | null | [
"region:us"
] | 2023-05-24T20:29:57Z | 2023-05-24T18:47:22.000Z | 2023-05-24T18:47:22 | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: string
- name: conversations
dtype: string
splits:
- name: train
num_bytes: 9607911271.0
num_examples: 100000
download_size: 9289147010
dataset_size: 9607911271.0
---
# Dataset Card for "LLaVA_calculus_handwriting"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
0.0030863096471875906,
-0.3913119435310364,
0.4895078241825104,
0.25469085574150085,
-0.27641937136650085,
0.20190434157848358,
0.2676665186882019,
-0.17668214440345764,
0.9214650988578796,
0.594763994216919,
-0.8869509100914001,
-0.9995003342628479,
-0.76068115234375,
-0.46749454736709595... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jxie/flickr8k | jxie | 2023-06-25T22:25:03Z | 106 | 0 | null | [
"region:us"
] | 2023-06-25T22:25:03Z | 2023-06-25T19:09:16.000Z | 2023-06-25T19:09:16 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption_0
dtype: string
- name: caption_1
dtype: string
- name: caption_2
dtype: string
- name: caption_3
dtype: string
- name: caption_4
dtype: string
splits:
- name: train
num_bytes: 826721431.0
num_examples: 6000
- name: validation
num_bytes: 138017615.0
num_examples: 1000
- name: test
num_bytes: 136871307.0
num_examples: 1000
download_size: 274629589
dataset_size: 1101610353.0
---
# Dataset Card for "flickr8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6903163194656372,
0.07526080310344696,
0.21464581787586212,
0.16154776513576508,
-0.399967759847641,
-0.0655880868434906,
0.5942295789718628,
-0.1657881885766983,
0.6939918994903564,
0.4774690866470337,
-0.8796880841255188,
-0.6436339616775513,
-0.6194800734519958,
-0.17091535031795502,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sezer12138/ADE20k_Segementation | sezer12138 | 2023-07-21T03:06:25Z | 106 | 0 | null | [
"region:us"
] | 2023-07-21T03:06:25Z | 2023-07-19T13:18:55.000Z | 2023-07-19T13:18:55 | ---
dataset_info:
features:
- name: image
dtype: image
- name: annotated
dtype: image
- name: Scene_category
dtype:
class_label:
names:
'0': abbey
'1': access_road
'2': acropolis
'3': air_base
'4': aircraft_carrier_object
'5': airfield
'6': airlock
'7': airplane
'8': airplane_cabin
'9': airport
'10': airport_terminal
'11': airport_ticket_counter
'12': alcove
'13': alley
'14': amphitheater
'15': amphitheater_indoor
'16': amusement_arcade
'17': amusement_park
'18': anechoic_chamber
'19': apartment_building_outdoor
'20': apse_indoor
'21': apse_outdoor
'22': aquarium
'23': aquatic_theater
'24': aqueduct
'25': arbor
'26': arcade
'27': arch
'28': archaelogical_excavation
'29': archipelago
'30': archive
'31': armory
'32': army_base
'33': arrival_gate_indoor
'34': arrival_gate_outdoor
'35': art_gallery
'36': art_school
'37': art_studio
'38': artificial
'39': artists_loft
'40': assembly_hall
'41': assembly_line
'42': assembly_plant
'43': athletic_field_indoor
'44': athletic_field_outdoor
'45': atrium_home
'46': atrium_public
'47': attic
'48': auditorium
'49': auto_factory
'50': auto_mechanics_indoor
'51': auto_mechanics_outdoor
'52': auto_racing_paddock
'53': auto_showroom
'54': awning_deck
'55': back_porch
'56': backdrop
'57': backroom
'58': backseat
'59': backstage
'60': backstage_outdoor
'61': backstairs
'62': backstairs_indoor
'63': backwoods
'64': badlands
'65': badminton_court_indoor
'66': badminton_court_outdoor
'67': baggage_claim
'68': balcony_interior
'69': ball_pit
'70': ballet
'71': ballroom
'72': balustrade
'73': bamboo_forest
'74': bank_indoor
'75': bank_outdoor
'76': bank_vault
'77': banquet_hall
'78': baptistry_indoor
'79': baptistry_outdoor
'80': bar
'81': barbeque
'82': barbershop
'83': barn
'84': barndoor
'85': barnyard
'86': barrack
'87': barrel_storage
'88': baseball
'89': baseball_field
'90': basement
'91': basilica
'92': basin_outdoor
'93': basketball
'94': basketball_court_indoor
'95': basketball_court_outdoor
'96': bath_indoor
'97': bath_outdoor
'98': bathhouse
'99': bathhouse_outdoor
'100': bathroom
'101': batters_box
'102': batting_cage_indoor
'103': batting_cage_outdoor
'104': battlefield
'105': battlement
'106': bay
'107': bayou
'108': bazaar_indoor
'109': bazaar_outdoor
'110': beach
'111': beach_house
'112': beauty_salon
'113': bedchamber
'114': bedroom
'115': beer_garden
'116': beer_hall
'117': belfry
'118': bell_foundry
'119': berth
'120': berth_deck
'121': betting_shop
'122': bicycle_racks
'123': bindery
'124': biology_laboratory
'125': bistro_indoor
'126': bistro_outdoor
'127': bleachers_indoor
'128': bleachers_outdoor
'129': block
'130': boardwalk
'131': boat
'132': boat_deck
'133': boathouse
'134': bog
'135': bomb_shelter_indoor
'136': bookbindery
'137': bookshelf
'138': bookstore
'139': booth
'140': booth_indoor
'141': booth_outdoor
'142': botanical_garden
'143': bottle_storage
'144': bottomland
'145': bow_window_indoor
'146': bow_window_outdoor
'147': bowling_alley
'148': box_seat
'149': boxing_ring
'150': breakfast_table
'151': breakroom
'152': brewery_indoor
'153': brewery_outdoor
'154': bric-a-brac
'155': brickyard_indoor
'156': brickyard_outdoor
'157': bridge
'158': bridle_path
'159': broadleaf
'160': brooklet
'161': bubble_chamber
'162': buffet
'163': building_complex
'164': building_facade
'165': bulkhead
'166': bullpen
'167': bullring
'168': bunk_bed
'169': burial_chamber
'170': bus_depot_indoor
'171': bus_depot_outdoor
'172': bus_interior
'173': bus_shelter
'174': bus_station_indoor
'175': bus_station_outdoor
'176': butchers_shop
'177': butte
'178': bypass
'179': byroad
'180': cabana
'181': cabin_cruiser
'182': cabin_indoor
'183': cabin_outdoor
'184': cafeteria
'185': call_center
'186': campsite
'187': campus
'188': candy_store
'189': canteen
'190': canyon
'191': car_dealership
'192': caravansary
'193': cardroom
'194': cargo_container_interior
'195': cargo_deck
'196': cargo_helicopter
'197': carport_indoor
'198': carport_outdoor
'199': carrousel
'200': cascade
'201': casino_indoor
'202': casino_outdoor
'203': castle
'204': catacomb
'205': cataract
'206': cathedral_indoor
'207': cathedral_outdoor
'208': catwalk
'209': cavern_indoor
'210': cavern_outdoor
'211': cellar
'212': cemetery
'213': chair_lift
'214': chalet
'215': chaparral
'216': chapel
'217': checkout_counter
'218': cheese_factory
'219': chemical_plant
'220': chemistry_lab
'221': chicken_coop_indoor
'222': chicken_coop_outdoor
'223': chicken_farm_indoor
'224': chicken_farm_outdoor
'225': childs_room
'226': choir_loft_interior
'227': chuck_wagon
'228': church_indoor
'229': church_outdoor
'230': circus_tent_indoor
'231': circus_tent_outdoor
'232': city
'233': classroom
'234': clean_room
'235': cliff
'236': clock_tower_indoor
'237': cloister_indoor
'238': cloister_outdoor
'239': closet
'240': clothing_store
'241': coast
'242': coast_road
'243': cockpit
'244': cocktail_lounge
'245': coffee_shop
'246': computer_room
'247': conference_center
'248': conference_hall
'249': conference_room
'250': confessional
'251': construction_site
'252': control_room
'253': control_tower_indoor
'254': control_tower_outdoor
'255': convenience_store_indoor
'256': convenience_store_outdoor
'257': coral_reef
'258': corn_field
'259': corner
'260': corral
'261': corridor
'262': cottage
'263': cottage_garden
'264': country_house
'265': country_road
'266': courthouse
'267': courtroom
'268': courtyard
'269': covered_bridge_interior
'270': crawl_space
'271': creek
'272': crevasse
'273': crosswalk
'274': cultivated
'275': customhouse
'276': cybercafe
'277': dacha
'278': dairy_indoor
'279': dairy_outdoor
'280': dam
'281': dance_floor
'282': dance_school
'283': darkroom
'284': day_care_center
'285': deck-house_boat_deck_house
'286': deck-house_deck_house
'287': delicatessen
'288': dentists_office
'289': department_store
'290': departure_lounge
'291': desert_road
'292': diner_indoor
'293': diner_outdoor
'294': dinette_home
'295': dining_area
'296': dining_car
'297': dining_hall
'298': dining_room
'299': dirt_track
'300': discotheque
'301': distillery
'302': ditch
'303': diving_board
'304': dock
'305': dolmen
'306': donjon
'307': door
'308': doorway_indoor
'309': doorway_outdoor
'310': dorm_room
'311': downtown
'312': drainage_ditch
'313': dress_shop
'314': dressing_room
'315': drill_rig
'316': driveway
'317': driving_range_indoor
'318': driving_range_outdoor
'319': drugstore
'320': dry
'321': dry_dock
'322': dugout
'323': earth_fissure
'324': east_asia
'325': editing_room
'326': electrical_substation
'327': elevated_catwalk
'328': elevator_interior
'329': elevator_lobby
'330': elevator_shaft
'331': embankment
'332': embassy
'333': embrasure
'334': engine_room
'335': entrance
'336': entrance_hall
'337': entranceway_indoor
'338': entranceway_outdoor
'339': entryway_outdoor
'340': escalator_indoor
'341': escalator_outdoor
'342': escarpment
'343': establishment
'344': estaminet
'345': estuary
'346': excavation
'347': exhibition_hall
'348': exterior
'349': fabric_store
'350': factory_indoor
'351': factory_outdoor
'352': fairway
'353': fan
'354': farm
'355': farm_building
'356': farmhouse
'357': fastfood_restaurant
'358': feed_bunk
'359': fence
'360': ferryboat_indoor
'361': field_house
'362': field_road
'363': field_tent_indoor
'364': field_tent_outdoor
'365': fire_escape
'366': fire_station
'367': fire_trench
'368': fireplace
'369': firing_range_indoor
'370': firing_range_outdoor
'371': fish_farm
'372': fishmarket
'373': fishpond
'374': fitting_room_interior
'375': fjord
'376': flashflood
'377': flatlet
'378': flea_market_indoor
'379': flea_market_outdoor
'380': floating_dock
'381': floating_dry_dock
'382': flood
'383': flood_plain
'384': florist_shop_indoor
'385': florist_shop_outdoor
'386': flowerbed
'387': flume_indoor
'388': fly_bridge
'389': flying_buttress
'390': food_court
'391': football
'392': football_field
'393': foothill
'394': forecourt
'395': foreshore
'396': forest_fire
'397': forest_path
'398': forest_road
'399': forklift
'400': formal_garden
'401': fort
'402': fortress
'403': foundry_indoor
'404': foundry_outdoor
'405': fountain
'406': freestanding
'407': freeway
'408': freight_elevator
'409': front_porch
'410': frontseat
'411': funeral_chapel
'412': funeral_home
'413': furnace_room
'414': galley
'415': game_room
'416': gangplank
'417': garage_indoor
'418': garage_outdoor
'419': garbage_dump
'420': garden
'421': gas_station
'422': gas_well
'423': gasworks
'424': gate
'425': gatehouse
'426': gazebo_interior
'427': general_store_indoor
'428': general_store_outdoor
'429': geodesic_dome_indoor
'430': geodesic_dome_outdoor
'431': ghost_town
'432': gift_shop
'433': glacier
'434': glade
'435': glen
'436': golf_course
'437': gorge
'438': granary
'439': grape_arbor
'440': great_hall
'441': greengrocery
'442': greenhouse_indoor
'443': greenhouse_outdoor
'444': grotto
'445': grove
'446': guardhouse
'447': guardroom
'448': guesthouse
'449': gulch
'450': gun_deck_indoor
'451': gun_deck_outdoor
'452': gun_store
'453': gymnasium_indoor
'454': gymnasium_outdoor
'455': hacienda
'456': hallway
'457': handball_court
'458': hangar_indoor
'459': hangar_outdoor
'460': harbor
'461': hardware_store
'462': hat_shop
'463': hatchery
'464': hayfield
'465': hayloft
'466': head_shop
'467': hearth
'468': heath
'469': hedge_maze
'470': hedgerow
'471': heliport
'472': hen_yard
'473': herb_garden
'474': highway
'475': hill
'476': hillock
'477': hockey
'478': hollow
'479': home_office
'480': home_theater
'481': hoodoo
'482': hospital
'483': hospital_room
'484': hot_spring
'485': hot_tub_indoor
'486': hot_tub_outdoor
'487': hotel_breakfast_area
'488': hotel_outdoor
'489': hotel_room
'490': house
'491': housing_estate
'492': housing_project
'493': howdah
'494': hunting_lodge_indoor
'495': hunting_lodge_outdoor
'496': hut
'497': hutment
'498': ice_cream_parlor
'499': ice_floe
'500': ice_shelf
'501': ice_skating_rink_indoor
'502': ice_skating_rink_outdoor
'503': iceberg
'504': igloo
'505': imaret
'506': incinerator_indoor
'507': incinerator_outdoor
'508': indoor_procenium
'509': indoor_round
'510': indoor_seats
'511': industrial_area
'512': industrial_park
'513': inlet
'514': inn_indoor
'515': inn_outdoor
'516': insane_asylum
'517': irrigation_ditch
'518': islet
'519': jacuzzi_indoor
'520': jacuzzi_outdoor
'521': jail_cell
'522': jail_indoor
'523': jail_outdoor
'524': japanese_garden
'525': jetty
'526': jewelry_shop
'527': joss_house
'528': juke_joint
'529': jungle
'530': junk_pile
'531': junkyard
'532': jury_box
'533': kasbah
'534': kennel_indoor
'535': kennel_outdoor
'536': kindergarden_classroom
'537': kiosk_indoor
'538': kiosk_outdoor
'539': kitchen
'540': kitchenette
'541': kraal
'542': lab_classroom
'543': laboratorywet
'544': labyrinth_indoor
'545': labyrinth_outdoor
'546': lagoon
'547': landfill
'548': landing
'549': landing_deck
'550': landing_strip
'551': laundromat
'552': lava_flow
'553': lavatory
'554': lawn
'555': layby
'556': lean-to
'557': lean-to_tent
'558': lecture_room
'559': legislative_chamber
'560': levee
'561': library
'562': library_indoor
'563': library_outdoor
'564': lido_deck_indoor
'565': lido_deck_outdoor
'566': lift_bridge
'567': lighthouse
'568': limousine_interior
'569': liquor_store_indoor
'570': liquor_store_outdoor
'571': living_room
'572': loading_dock
'573': lobby
'574': lock_chamber
'575': locker_room
'576': loft
'577': loge
'578': loggia_outdoor
'579': lookout_station_indoor
'580': lookout_station_outdoor
'581': lower_deck
'582': luggage_van
'583': lumberyard_indoor
'584': lumberyard_outdoor
'585': lyceum
'586': machine_shop
'587': manhole
'588': mansard
'589': mansion
'590': manufactured_home
'591': market_indoor
'592': market_outdoor
'593': marsh
'594': martial_arts_gym
'595': massage_room
'596': mastaba
'597': maternity_ward
'598': mausoleum
'599': meadow
'600': meat_house
'601': medina
'602': megalith
'603': menhir
'604': mens_store_outdoor
'605': mental_institution_indoor
'606': mental_institution_outdoor
'607': mesa
'608': mesoamerican
'609': mess_hall
'610': mews
'611': mezzanine
'612': military_headquarters
'613': military_hospital
'614': military_hut
'615': military_tent
'616': millpond
'617': millrace
'618': mine
'619': mineral_bath
'620': mineshaft
'621': mini_golf_course_indoor
'622': mini_golf_course_outdoor
'623': misc
'624': mission
'625': mobile_home
'626': monastery_indoor
'627': monastery_outdoor
'628': moon_bounce
'629': moor
'630': morgue
'631': mosque_indoor
'632': mosque_outdoor
'633': motel
'634': mountain
'635': mountain_path
'636': mountain_road
'637': mountain_snowy
'638': movie_theater_indoor
'639': movie_theater_outdoor
'640': mudflat
'641': museum_indoor
'642': museum_outdoor
'643': music_store
'644': music_studio
'645': natural
'646': natural_history_museum
'647': natural_spring
'648': naval_base
'649': needleleaf
'650': newsroom
'651': newsstand_indoor
'652': newsstand_outdoor
'653': nightclub
'654': nook
'655': nuclear_power_plant_indoor
'656': nuclear_power_plant_outdoor
'657': nunnery
'658': nursery
'659': nursing_home
'660': nursing_home_outdoor
'661': oasis
'662': oast_house
'663': observation_station
'664': observatory_indoor
'665': observatory_outdoor
'666': observatory_post
'667': ocean
'668': ocean_deep
'669': ocean_shallow
'670': office
'671': office_building
'672': office_cubicles
'673': oil_refinery_indoor
'674': oil_refinery_outdoor
'675': oilrig
'676': one-way_street
'677': open-hearth_furnace
'678': operating_room
'679': operating_table
'680': optician
'681': orchard
'682': orchestra_pit
'683': organ_loft_interior
'684': orlop_deck
'685': ossuary
'686': outbuilding
'687': outcropping
'688': outhouse_indoor
'689': outhouse_outdoor
'690': outside
'691': overpass
'692': oyster_bar
'693': oyster_farm
'694': packaging_plant
'695': pagoda
'696': palace
'697': palace_hall
'698': palestra
'699': pantry
'700': paper_mill
'701': parade_ground
'702': park
'703': parking_garage_indoor
'704': parking_garage_outdoor
'705': parking_lot
'706': parkway
'707': parlor
'708': particle_accelerator
'709': party_tent_indoor
'710': party_tent_outdoor
'711': passenger_deck
'712': pasture
'713': patio
'714': patio_indoor
'715': pavement
'716': pavilion
'717': pawnshop
'718': pawnshop_outdoor
'719': pedestrian_overpass_indoor
'720': penalty_box
'721': performance
'722': perfume_shop
'723': pet_shop
'724': pharmacy
'725': phone_booth
'726': physics_laboratory
'727': piano_store
'728': picnic_area
'729': pier
'730': pig_farm
'731': pilothouse_indoor
'732': pilothouse_outdoor
'733': pinetum
'734': piste_road
'735': pitchers_mound
'736': pizzeria
'737': pizzeria_outdoor
'738': planetarium_indoor
'739': planetarium_outdoor
'740': plantation_house
'741': platform
'742': playground
'743': playroom
'744': plaza
'745': plunge
'746': podium_indoor
'747': podium_outdoor
'748': police_station
'749': pond
'750': pontoon_bridge
'751': poolroom_home
'752': poop_deck
'753': porch
'754': portico
'755': portrait_studio
'756': postern
'757': powder_room
'758': power_plant_outdoor
'759': preserve
'760': print_shop
'761': priory
'762': promenade
'763': promenade_deck
'764': pub_indoor
'765': pub_outdoor
'766': pueblo
'767': pulpit
'768': pump_room
'769': pumping_station
'770': putting_green
'771': quadrangle
'772': questionable
'773': quicksand
'774': quonset_hut_indoor
'775': quonset_hut_outdoor
'776': racecourse
'777': raceway
'778': raft
'779': rail_indoor
'780': rail_outdoor
'781': railroad_track
'782': railway_yard
'783': rainforest
'784': ramp
'785': ranch
'786': ranch_house
'787': reading_room
'788': reception
'789': reception_room
'790': recreation_room
'791': rectory
'792': recycling_plant_indoor
'793': recycling_plant_outdoor
'794': refectory
'795': repair_shop
'796': residential_neighborhood
'797': resort
'798': rest_area
'799': rest_stop
'800': restaurant
'801': restaurant_kitchen
'802': restaurant_patio
'803': restroom_indoor
'804': restroom_outdoor
'805': retaining_wall
'806': revolving_door
'807': rice_paddy
'808': riding_arena
'809': rift_valley
'810': river
'811': road
'812': road_cut
'813': road_indoor
'814': road_outdoor
'815': rock_arch
'816': rock_garden
'817': rodeo
'818': roller_skating_rink_indoor
'819': roller_skating_rink_outdoor
'820': rolling_mill
'821': roof
'822': roof_garden
'823': room
'824': root_cellar
'825': rope_bridge
'826': rotisserie
'827': roundabout
'828': roundhouse
'829': rubble
'830': ruin
'831': runway
'832': sacristy
'833': safari_park
'834': salon
'835': saloon
'836': salt_plain
'837': sanatorium
'838': sand
'839': sand_trap
'840': sandbar
'841': sandbox
'842': sauna
'843': savanna
'844': sawmill
'845': schoolhouse
'846': schoolyard
'847': science_laboratory
'848': science_museum
'849': scriptorium
'850': scrubland
'851': scullery
'852': sea_cliff
'853': seaside
'854': seawall
'855': security_check_point
'856': semidesert
'857': server_room
'858': sewer
'859': sewing_room
'860': shed
'861': shelter
'862': shelter_deck
'863': shelter_tent
'864': shipping_room
'865': shipyard_outdoor
'866': shoe_shop
'867': shop
'868': shopfront
'869': shopping_mall_indoor
'870': shopping_mall_outdoor
'871': shore
'872': shower
'873': shower_room
'874': shrine
'875': shrubbery
'876': sidewalk
'877': signal_box
'878': sinkhole
'879': ski_jump
'880': ski_lodge
'881': ski_resort
'882': ski_slope
'883': sky
'884': skyscraper
'885': skywalk_indoor
'886': skywalk_outdoor
'887': slum
'888': snack_bar
'889': snowbank
'890': snowfield
'891': soccer
'892': south_asia
'893': spillway
'894': sporting_goods_store
'895': squash_court
'896': stable
'897': stadium_outdoor
'898': stage_indoor
'899': stage_outdoor
'900': stage_set
'901': staircase
'902': stall
'903': starting_gate
'904': stateroom
'905': station
'906': steam_plant_outdoor
'907': steel_mill_indoor
'908': steel_mill_outdoor
'909': stone_circle
'910': storage_room
'911': store
'912': storm_cellar
'913': street
'914': streetcar_track
'915': strip_mall
'916': strip_mine
'917': student_center
'918': student_residence
'919': study_hall
'920': submarine_interior
'921': subway_interior
'922': sugar_refinery
'923': sun_deck
'924': sunroom
'925': supermarket
'926': supply_chamber
'927': sushi_bar
'928': swamp
'929': swimming_hole
'930': swimming_pool_indoor
'931': swimming_pool_outdoor
'932': synagogue_indoor
'933': synagogue_outdoor
'934': t-bar_lift
'935': tannery
'936': taxistand
'937': taxiway
'938': tea_garden
'939': teahouse
'940': tearoom
'941': teashop
'942': television_room
'943': television_studio
'944': tennis_court_indoor
'945': tennis_court_outdoor
'946': tent_outdoor
'947': terrace_farm
'948': theater_outdoor
'949': threshing_floor
'950': thriftshop
'951': throne_room
'952': ticket_booth
'953': ticket_window_indoor
'954': tidal_basin
'955': tidal_river
'956': tiltyard
'957': tobacco_shop_indoor
'958': toll_plaza
'959': tollbooth
'960': tollgate
'961': tomb
'962': topiary_garden
'963': tower
'964': town_house
'965': toyshop
'966': track_outdoor
'967': tract_housing
'968': trading_floor
'969': traffic_island
'970': trailer_park
'971': train_interior
'972': train_railway
'973': train_station_outdoor
'974': tree_farm
'975': tree_house
'976': trellis
'977': trench
'978': trestle_bridge
'979': truck_stop
'980': tundra
'981': turkish_bath
'982': upper_balcony
'983': urban
'984': utility_room
'985': valley
'986': van_interior
'987': vat
'988': vegetable_garden
'989': vegetation
'990': vehicle
'991': velodrome_indoor
'992': velodrome_outdoor
'993': ventilation_shaft
'994': veranda
'995': vestibule
'996': vestry
'997': veterinarians_office
'998': viaduct
'999': videostore
'1000': village
'1001': vinery
'1002': vineyard
'1003': volcano
'1004': volleyball_court_indoor
'1005': volleyball_court_outdoor
'1006': voting_booth
'1007': waiting_room
'1008': walk_in_freezer
'1009': walkway
'1010': war_room
'1011': warehouse_indoor
'1012': warehouse_outdoor
'1013': washhouse_indoor
'1014': washhouse_outdoor
'1015': washroom
'1016': watchtower
'1017': water
'1018': water_fountain
'1019': water_gate
'1020': water_mill
'1021': water_park
'1022': water_tower
'1023': water_treatment_plant_indoor
'1024': water_treatment_plant_outdoor
'1025': watering_hole
'1026': waterscape
'1027': waterway
'1028': wave
'1029': weighbridge
'1030': western
'1031': wet_bar
'1032': wetland
'1033': wharf
'1034': wheat_field
'1035': whispering_gallery
'1036': widows_walk_indoor
'1037': widows_walk_interior
'1038': wild
'1039': wind_farm
'1040': windmill
'1041': window_seat
'1042': windstorm
'1043': winery
'1044': witness_stand
'1045': woodland
'1046': workroom
'1047': workshop
'1048': wrestling_ring_indoor
'1049': wrestling_ring_outdoor
'1050': yard
'1051': youth_hostel
'1052': zen_garden
'1053': ziggurat
'1054': zoo
splits:
- name: train
num_bytes: 1097055005.51
num_examples: 20210
- name: val
num_bytes: 90418264.0
num_examples: 2000
download_size: 966605341
dataset_size: 1187473269.51
---
# Dataset Card for "ADE20k_Segementation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7812451124191284,
-0.2494310885667801,
0.19671320915222168,
0.3641684353351593,
-0.06549260020256042,
-0.10741525888442993,
0.3720536231994629,
-0.2633666694164276,
0.8250465989112854,
0.6021966338157654,
-1.000504732131958,
-0.7909559011459351,
-0.4609445333480835,
-0.25543925166130066... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sentdex/wsb_reddit_v002 | Sentdex | 2023-08-26T17:44:09Z | 106 | 6 | null | [
"license:apache-2.0",
"region:us"
] | 2023-08-26T17:44:09Z | 2023-08-26T17:43:17.000Z | 2023-08-26T17:43:17 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dbaezaj/lince_ner_dataset | dbaezaj | 2023-11-15T17:11:50Z | 106 | 0 | null | [
"region:us"
] | 2023-11-15T17:11:50Z | 2023-11-14T18:31:38.000Z | 2023-11-14T18:31:38 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: words
sequence: string
- name: lid
sequence: string
- name: labels
sequence: string
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joseluhf11/oct-object-detection-v3-merge | joseluhf11 | 2023-11-22T08:49:02Z | 106 | 0 | null | [
"region:us"
] | 2023-11-22T08:49:02Z | 2023-11-20T09:28:44.000Z | 2023-11-20T09:28:44 | ---
dataset_info:
features:
- name: image
dtype: image
- name: objects
struct:
- name: bbox
sequence:
sequence: int64
- name: categories
sequence: string
splits:
- name: train
num_bytes: 154014595.25
num_examples: 1246
download_size: 71638878
dataset_size: 154014595.25
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "oct-object-detection-v3-merge"
The dataset is composed of images with multiple object-detection boxes in COCO format (x, y, w, h). The images are OCTs (a type of eye scan) with boxes indicating features associated with AMD disease.
The only difference from v2 is that the categories field must contain as many class labels as there are boxes annotated in each image, even when the class label is the same. So an image with 3 boxes for the same object must have 3 class labels; the sketch below checks this invariant.
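A minimal sketch of that check (the `train` split is the one listed in the metadata above):
```python
from datasets import load_dataset

ds = load_dataset("joseluhf11/oct-object-detection-v3-merge", split="train")

for sample in ds.select(range(5)):
    boxes = sample["objects"]["bbox"]        # COCO format: [x, y, w, h]
    labels = sample["objects"]["categories"]
    assert len(boxes) == len(labels), "one class label per annotated box"
```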
[Source dataset](https://doi.org/10.1101/2023.03.29.534704)
| [
-0.7950263023376465,
-0.4776192009449005,
0.1384594589471817,
-0.19305674731731415,
-0.5387946367263794,
0.14922872185707092,
0.6407327651977539,
-0.9068121314048767,
0.05128024145960808,
0.8958475589752197,
-0.3687947988510132,
-0.7022929191589355,
-0.5679070949554443,
0.4588826298713684,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
msklar/skribbl-drawings | msklar | 2023-11-27T00:48:35Z | 106 | 0 | null | [
"region:us"
] | 2023-11-27T00:48:35Z | 2023-11-22T21:09:25.000Z | 2023-11-22T21:09:25 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1263264.0
num_examples: 304
download_size: 1043652
dataset_size: 1263264.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Toygar/turkish-offensive-language-detection | Toygar | 2023-10-31T21:57:24Z | 105 | 5 | null | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-2.0",
"offensive-language-classification",
"region:us"
] | 2023-10-31T21:57:24Z | 2022-07-28T11:45:25.000Z | 2022-07-28T11:45:25 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
language:
- tr
license:
- cc-by-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-classification
task_ids: []
pretty_name: Turkish Offensive Language Detection Dataset
tags:
- offensive-language-classification
---
# Dataset Summary
This dataset is an enhanced version of existing offensive language studies. Existing datasets are highly imbalanced, and solving this problem is too costly. To address this, we proposed a contextual data mining method for dataset augmentation. Instead of retrieving random tweets and labeling them individually, our method lets us directly access almost exclusively hate-related tweets and label them without further human interaction, which solves the imbalanced-label problem.
In addition, existing studies *(listed in the References section)* are merged to create an even more comprehensive and robust dataset for the Turkish offensive language detection task.
The files train.csv, test.csv, and valid.csv contain 42,398, 8,851, and 1,756 annotated tweets, respectively.
# Dataset Structure
A binary dataset with (0) Not Offensive and (1) Offensive tweets.
### Task and Labels
Offensive language identification:
- (0) Not Offensive - Tweet does not contain offense or profanity.
- (1) Offensive - Tweet contains offensive language or a targeted (veiled or direct) offense
### Data Splits
| | train | test | dev |
|------:|:------|:-----|:-----|
| 0 (Not Offensive) | 22,589 | 4,436 | 1,402 |
| 1 (Offensive) | 19,809 | 4,415 | 354 |
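A minimal loading sketch (assuming the three CSV files above sit at the repository root and load directly through `datasets`):
```python
from datasets import load_dataset

data_files = {"train": "train.csv", "test": "test.csv", "validation": "valid.csv"}
ds = load_dataset("Toygar/turkish-offensive-language-detection", data_files=data_files)
print(ds)  # expect 42,398 / 8,851 / 1,756 rows per split
```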
### Citation Information
```
T. Tanyel, B. Alkurdi and S. Ayvaz, "Linguistic-based Data Augmentation Approach for Offensive Language Detection," 2022 7th International Conference on Computer Science and Engineering (UBMK), 2022, pp. 1-6, doi: 10.1109/UBMK55850.2022.9919562.
```
### Paper codes
https://github.com/tanyelai/lingda
# References
We merged the following open-source Turkish offensive language datasets to further increase contextual coverage of the existing data before applying our method.
- https://huggingface.co/datasets/offenseval2020_tr
- https://github.com/imayda/turkish-hate-speech-dataset-2
- https://www.kaggle.com/datasets/kbulutozler/5k-turkish-tweets-with-incivil-content
| [
-0.23120826482772827,
-0.9223470091819763,
-0.34582263231277466,
0.19867666065692902,
-0.3965538442134857,
-0.06915327161550522,
-0.36435937881469727,
-0.6777725219726562,
0.18474121391773224,
0.6181879043579102,
-0.5446996092796326,
-0.8314554691314697,
-0.7553253769874573,
0.077426120638... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
google/cvss | google | 2022-08-27T23:19:14Z | 105 | 8 | null | [
"license:cc-by-4.0",
"arxiv:2201.03713",
"region:us"
] | 2022-08-27T23:19:14Z | 2022-08-11T00:54:54.000Z | 2022-08-11T00:54:54 | ---
license: cc-by-4.0
---
# CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus
*CVSS* is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the [Common Voice](https://commonvoice.mozilla.org/) speech corpus and the [CoVoST 2](https://github.com/facebookresearch/covost) speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the [LibriTTS](http://www.openslr.org/60/) corpus.
CVSS includes two versions of spoken translation for all the 21 x-en language pairs from CoVoST 2, with each version providing unique values:
- *CVSS-C*: All the translation speeches are in a single canonical speaker's voice. Despite being synthetic, these speeches are of very high naturalness and cleanness, as well as having a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high quality translation speech suitable for user-facing applications.
- *CVSS-T*: The translation speeches are in voices transferred from the corresponding source speeches. Each translation pair has similar voices on the two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages.
Together with the source speeches originating from Common Voice, they form two multilingual speech-to-speech translation datasets, each with about 1,900 hours of speech.
In addition to translation speech, CVSS also provides normalized translation text matching the pronunciation in the translation speech (e.g. on numbers, currencies, acronyms, etc.), which can be used for both model training as well as standardizing evaluation.
Please check out [our paper](https://arxiv.org/abs/2201.03713) for the detailed description of this corpus, as well as the baseline models we trained on both datasets.
# Load the data
The following example loads the translation speech (i.e. target speech) and the normalized translation text (i.e. target text) released in the CVSS corpus. You'll need to load the source speech and optionally the source text from [Common Voice v4.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_4_0) separately, and join them by the file names (a hedged sketch of the join follows the example below).
```py
from datasets import load_dataset
# Load only ar-en and ja-en language pairs. Omitting the `languages` argument
# would load all the language pairs.
cvss_c = load_dataset('google/cvss', 'cvss_c', languages=['ar', 'ja'])
# Print the structure of the dataset.
print(cvss_c)
```
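A hedged sketch of the join; the field names (`file` on the CVSS side, `path` on the Common Voice side) and the split naming are assumptions for illustration, not confirmed by this card:
```py
import os
from datasets import load_dataset

cvss_c = load_dataset("google/cvss", "cvss_c", languages=["ar"])
cv = load_dataset("mozilla-foundation/common_voice_4_0", "ar", split="test")

# Index Common Voice source clips by bare file name (field name assumed).
source_by_name = {os.path.basename(ex["path"]): ex for ex in cv}

for target in cvss_c["test"]:  # split name assumed
    src = source_by_name.get(target["file"])  # field name assumed
    if src is not None:
        pass  # pair source speech (src) with translation speech/text (target)
```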
# License
CVSS is released under the very permissive [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
## Citation
Please cite this paper when referencing the CVSS corpus:
```
@inproceedings{jia2022cvss,
title={{CVSS} Corpus and Massively Multilingual Speech-to-Speech Translation},
author={Jia, Ye and Tadmor Ramanovich, Michelle and Wang, Quan and Zen, Heiga},
booktitle={Proceedings of Language Resources and Evaluation Conference (LREC)},
pages={6691--6703},
year={2022}
}
```
| [
-0.1917620152235031,
-0.44338828325271606,
0.08432500064373016,
0.22212915122509003,
-0.3151954114437103,
0.1597299873828888,
-0.5229386687278748,
-0.26502782106399536,
0.3362143933773041,
0.4924840033054352,
-0.4881972074508667,
-0.6157721877098083,
-0.36783915758132935,
0.190645039081573... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
matt-facet/green-tent-small | matt-facet | 2022-09-13T20:28:08Z | 105 | 0 | null | [
"region:us"
] | 2022-09-13T20:28:08Z | 2022-09-13T20:26:03.000Z | 2022-09-13T20:26:03 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1650000-1700000 | tomekkorbak | 2022-10-05T00:01:13Z | 105 | 0 | null | [
"region:us"
] | 2022-10-05T00:01:13Z | 2022-10-05T00:01:04.000Z | 2022-10-05T00:01:04 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
relbert/t_rex | relbert | 2023-03-31T21:02:35Z | 105 | 1 | null | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | 2023-03-31T21:02:35Z | 2023-01-25T21:47:54.000Z | 2023-01-25T21:47:54 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: relbert/t_rex
---
# Dataset Card for "relbert/t_rex"
## Dataset Description
- **Repository:** [https://hadyelsahar.github.io/t-rex/](https://hadyelsahar.github.io/t-rex/)
- **Paper:** [https://aclanthology.org/L18-1544/](https://aclanthology.org/L18-1544/)
- **Dataset:** Cleaned T-REX for link prediction.
## Dataset Summary
This is the T-REX dataset proposed in [https://aclanthology.org/L18-1544/](https://aclanthology.org/L18-1544/).
The test split is universal across different versions; it was manually checked by the author of [relbert/t_rex](https://huggingface.co/datasets/relbert/t_rex),
and it contains predicates that are not included in the train/validation splits.
The number of triples in each split is summarized in the table below.
***Note:*** To make it consistent with other datasets ([nell](https://huggingface.co/datasets/relbert/nell) and [conceptnet](https://huggingface.co/datasets/relbert/conceptnet)), we rename predicate/subject/object as relation/head/tail.
- Number of instances
| | train | validation | test |
|:--------------------------------|--------:|-------------:|-------:|
| number of triples | 1,274,264 | 318,566 | 122 |
| number of unique relation types (predicate) | 759 | 676 | 34 |
### Filtering to Remove Noise
We apply filtering to keep triples with a named entity in either the head or the tail (`named-entity filter`).
Then, we remove predicates that have fewer than three triples (`rare-predicate filter`).
After the filtering, we manually remove overly vague or noisy predicates and unify identical predicates that appear under different names (see the annotation [here](https://huggingface.co/datasets/relbert/t_rex/raw/main/predicate_manual_check.csv)).
Finally, we remove triples containing entities that appear fewer than 5 times (`frequency` filter); a minimal sketch of these filters follows the table below.
| Dataset   | `raw`      | `named-entity filter`  | `rare-predicate` | `unify-denoise-predicate` | `frequency` |
|:----------|-----------:|-----------------------:|-----------------:|--------------------------:|------------:|
| Triples | 20,877,472 | 12,561,573 | 12,561,250 | 12,410,726 | 1,616,065 |
| Predicate | 1,616 | 1,470 | 1,237 | 839 | 839 |
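The following is a minimal sketch of the `rare-predicate` and `frequency` filters described above, assuming triples shaped like the example in the next section; it illustrates the idea rather than reproducing the actual `process.py` implementation.
```py
from collections import Counter

def filter_triples(triples, min_predicate_count=3, min_entity_count=5):
    # rare-predicate filter: drop predicates with fewer than three triples.
    predicate_counts = Counter(t['relation'] for t in triples)
    triples = [t for t in triples if predicate_counts[t['relation']] >= min_predicate_count]

    # frequency filter: drop triples whose head or tail entity appears fewer than five times.
    entity_counts = Counter()
    for t in triples:
        entity_counts.update([t['head'], t['tail']])
    return [
        t for t in triples
        if entity_counts[t['head']] >= min_entity_count
        and entity_counts[t['tail']] >= min_entity_count
    ]
```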
## Dataset Structure
An example looks as follows.
```json
{
"tail": "Persian",
"head": "Tajik",
"title": "Tandoor bread",
"text": "Tandoor bread (Arabic: \u062e\u0628\u0632 \u062a\u0646\u0648\u0631 khubz tannoor, Armenian: \u0569\u0578\u0576\u056b\u0580 \u0570\u0561\u0581 tonir hats, Azerbaijani: T\u0259ndir \u00e7\u00f6r\u0259yi, Georgian: \u10d7\u10dd\u10dc\u10d8\u10e1 \u10de\u10e3\u10e0\u10d8 tonis puri, Kazakh: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Kyrgyz: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Persian: \u0646\u0627\u0646 \u062a\u0646\u0648\u0631\u06cc nan-e-tanuri, Tajik: \u043d\u043e\u043d\u0438 \u0442\u0430\u043d\u0443\u0440\u0439 noni tanuri, Turkish: Tand\u0131r ekme\u011fi, Uyghur: ) is a type of leavened bread baked in a clay oven called a tandoor, similar to naan. In Pakistan, tandoor breads are popular especially in the Khyber Pakhtunkhwa and Punjab regions, where naan breads are baked in tandoor clay ovens fired by wood or charcoal. These tandoor-prepared naans are known as tandoori naan.",
"relation": "[Artifact] is a type of [Type]"
}
```
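A minimal loading sketch, assuming the standard `datasets` API and the split names from the table above:
```py
from datasets import load_dataset

dataset = load_dataset('relbert/t_rex')
# Each example is a dict with 'head', 'tail', 'relation', 'title', and 'text'.
print(dataset['test'][0])
```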
## Reproduce the Dataset
```shell
git clone https://huggingface.co/datasets/relbert/t_rex
cd t_rex
mkdir data_raw
cd data_raw
wget https://figshare.com/ndownloader/files/8760241
unzip 8760241
cd ../
python process.py
python unify_predicate.py
python min_entity_filter.py
python create_split.py
```
## Citation Information
```
@inproceedings{elsahar2018t,
title={T-rex: A large scale alignment of natural language with knowledge base triples},
author={Elsahar, Hady and Vougiouklis, Pavlos and Remaci, Arslen and Gravier, Christophe and Hare, Jonathon and Laforest, Frederique and Simperl, Elena},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
| [
-0.48109373450279236,
-0.6698240041732788,
0.2197985053062439,
0.1369161307811737,
-0.19288575649261475,
0.006440817378461361,
-0.14276781678199768,
-0.412480890750885,
0.6103194355964661,
0.4496235251426697,
-0.5649908781051636,
-0.8911405801773071,
-0.5098463892936707,
0.2731203436851501... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/RSI-CB256 | jonathan-roberts1 | 2023-03-31T17:11:50Z | 105 | 0 | null | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | 2023-03-31T17:11:50Z | 2023-02-14T19:09:45.000Z | 2023-02-14T19:09:45 | ---
dataset_info:
features:
- name: label_1
dtype:
class_label:
names:
'0': transportation
'1': other objects
'2': woodland
'3': water area
'4': other land
'5': cultivated land
'6': construction land
- name: label_2
dtype:
class_label:
names:
'0': parking lot
'1': avenue
'2': highway
'3': bridge
'4': marina
'5': crossroads
'6': airport runway
'7': pipeline
'8': town
'9': airplane
'10': forest
'11': mangrove
'12': artificial grassland
'13': river protection forest
'14': shrubwood
'15': sapling
'16': sparse forest
'17': lakeshore
'18': river
'19': stream
'20': coastline
'21': hirst
'22': dam
'23': sea
'24': snow mountain
'25': sandbeach
'26': mountain
'27': desert
'28': dry farm
'29': green farmland
'30': bare land
'31': city building
'32': residents
'33': container
'34': storage room
- name: image
dtype: image
splits:
- name: train
num_bytes: 4901667781.625
num_examples: 24747
download_size: 4198991130
dataset_size: 4901667781.625
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "RSI-CB256"
## Dataset Description
- **Paper** [Exploring Models and Data for Remote Sensing Image Caption Generation](https://ieeexplore.ieee.org/iel7/36/4358825/08240966.pdf)
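A minimal loading sketch, assuming the standard `datasets` API and the two-level label features declared in the YAML header above:
```py
from datasets import load_dataset

dataset = load_dataset('jonathan-roberts1/RSI-CB256', split='train')
example = dataset[0]

# Map the integer class ids back to their human-readable names.
coarse = dataset.features['label_1'].int2str(example['label_1'])  # e.g. 'transportation'
fine = dataset.features['label_2'].int2str(example['label_2'])    # e.g. 'parking lot'
```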
### Licensing Information
For academic purposes.
## Citation Information
[Exploring Models and Data for Remote Sensing Image Caption Generation](https://ieeexplore.ieee.org/iel7/36/4358825/08240966.pdf)
```
@article{lu2017exploring,
title = {Exploring Models and Data for Remote Sensing Image Caption Generation},
author = {Lu, Xiaoqiang and Wang, Binqiang and Zheng, Xiangtao and Li, Xuelong},
journal = {IEEE Transactions on Geoscience and Remote Sensing},
volume = 56,
number = 4,
pages = {2183--2195},
doi = {10.1109/TGRS.2017.2776321},
year={2018}
}
``` | [
-0.41787898540496826,
-0.12999078631401062,
0.21265387535095215,
-0.04738415777683258,
-0.7317808866500854,
-0.13392287492752075,
-0.07492022961378098,
-0.268230676651001,
-0.27559328079223633,
0.5410178303718567,
-0.7569433450698853,
-0.7092769145965576,
-0.3812459111213684,
0.21424412727... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
timworks/massive-dataset | timworks | 2023-03-04T11:30:45Z | 105 | 0 | null | [
"region:us"
] | 2023-03-04T11:30:45Z | 2023-03-04T08:43:38.000Z | 2023-03-04T08:43:38 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
amlan107/xyz | amlan107 | 2023-08-22T14:33:38Z | 105 | 0 | null | [
"region:us"
] | 2023-08-22T14:33:38Z | 2023-05-19T12:08:49.000Z | 2023-05-19T12:08:49 | <!--
---
dataset_info:
features:
- name: bn
dtype: string
- name: en
dtype: string
- name: ck
dtype: string
splits:
- name: parallel
num_bytes: 2482778
num_examples: 15021
- name: monolingual
num_bytes: 44194898
num_examples: 150000
- name: benchmark
num_bytes: 469802
num_examples: 600
download_size: 24263533
dataset_size: 47147478
---
# Dataset Card for "ck_bn_en_nmt_dataset"
This dataset contains parallel, monolingual, and benchmark sets for translation between Chakma and Bangla or English, in both directions. More details later.<br>
<br>
Total bn-ck-en parallel sentences/segments: 8647 (first 8647/15021 of the parallel set, 3444 (common people online) + 5203 (local experts))<br>
Total bn-ck parallel sentences/segments: 6374 (bottom 6374 of the parallel set, 620 (UN crpd) + 281 (cupdf) + 5473 (dictionary))<br>
<br>
Total bn-ck-en benchmark sentences/segments: 600 (200 + 200 + 200, each 200 from one expert; the bottom 50 of each 200 share the same root sentence (bn & en))<br>
<br>
Total bn monolingual sentences/segments: 150000<br>
Total en monolingual sentences/segments: 150000<br>
Total ck monolingual sentences/segments: 42783<br>
<br>
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
--> | [
-0.7303711175918579,
-0.7344590425491333,
0.013995710760354996,
0.6225209832191467,
-0.5852376818656921,
0.006153269205242395,
-0.5542513728141785,
-0.3519112765789032,
0.4819793999195099,
0.5077036023139954,
-0.6212890148162842,
-0.8150929808616638,
-0.653908371925354,
0.3067200183868408,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bri25yu-temp/bactrian | bri25yu-temp | 2023-09-20T07:12:07Z | 105 | 0 | null | [
"region:us"
] | 2023-09-20T07:12:07Z | 2023-09-20T06:55:27.000Z | 2023-09-20T06:55:27 | ---
dataset_info:
features:
- name: prompts
sequence: string
- name: completions
sequence: string
splits:
- name: train
num_bytes: 545587063
num_examples: 92511
download_size: 236177873
dataset_size: 545587063
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bactrian"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
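A minimal loading sketch, assuming the standard `datasets` API and the `prompts`/`completions` sequence features declared in the YAML header:
```py
from datasets import load_dataset

dataset = load_dataset('bri25yu-temp/bactrian', split='train')
example = dataset[0]
# Each example holds parallel lists of prompts and completions.
for prompt, completion in zip(example['prompts'], example['completions']):
    print(prompt, '->', completion)
```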
-0.5310459733009338,
-0.26359277963638306,
0.18164508044719696,
0.13347940146923065,
-0.31937211751937866,
0.12118982523679733,
0.07039626687765121,
-0.13040287792682648,
0.893127977848053,
0.4427669942378998,
-0.7563579082489014,
-0.664627194404602,
-0.5872141122817993,
-0.246707856655120... | null | null | null | null | null | null | null | null | null | null | null | null | null |