| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
DingZhaohai/emotion | 2022-08-04T13:43:16.000Z | [
"region:us"
] | DingZhaohai | null | null | null | 1 | 13 | Entry not found |
kumapo/stair_captions_dataset_script | 2022-08-21T06:20:03.000Z | [
"license:cc-by-4.0",
"region:us"
] | kumapo | COCO is a large-scale object detection, segmentation, and captioning dataset. | @InProceedings{Yoshikawa2017,
title = {STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
month = {July},
year = {2017},
address = {Vancouver, Canada},
publisher = {Association for Computational Linguistics},
pages = {417--421},
url = {http://www.aclweb.org/anthology/P17-2066}
} | null | 0 | 13 | ---
license: cc-by-4.0
---
|
unpredictable/unpredictable_support-google-com | 2022-08-28T18:25:26.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:tabular-cl... | unpredictable | The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card. | null | null | 0 | 13 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-support-google-com
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-support-google-com" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide: it contains thousands of tasks, each with only a few examples, in contrast to most current NLP datasets, which are deep, i.e., tens of tasks with many examples each. This means our dataset covers a broad range of potential task types, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a jsonlines file and consists of several few-shot examples. Each example is a dictionary with a 'task' field identifying the task, followed by 'input', 'options', and 'output' fields. The 'input' field contains several column elements from one table row, while the 'output' field is the target, representing a single column of the same row. The examples within a task can be concatenated to form a few-shot task. For multiple-choice classification, the 'options' field lists the possible classes a model must choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
### Data Fields
- `task`: task identifier
- `input`: column elements of a specific row in the table
- `options`: for multiple-choice classification, the options to choose from
- `output`: target column element of the same row as the input
- `pageTitle`: the title of the page containing the table
- `outputColName`: the name of the output column
- `url`: the URL of the website containing the table
- `wdcFile`: the WDC Web Table Corpus file
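To illustrate, here is a minimal sketch of how several solved examples from one task could be concatenated into a few-shot prompt, based on the fields described above. The records below are hypothetical; real data is read from the jsonlines files, and the prompt template is an assumption, not the exact format used in the paper.

```python
# Hypothetical UnpredicTable-style records for one task (illustrative only).
examples = [
    {"task": "demo", "input": "Product: Gmail", "options": ["Free", "Paid"], "output": "Free"},
    {"task": "demo", "input": "Product: Workspace", "options": ["Free", "Paid"], "output": "Paid"},
    {"task": "demo", "input": "Product: Drive", "options": ["Free", "Paid"], "output": "Free"},
]

def build_fewshot_prompt(solved, query_input):
    """Concatenate solved examples, then append an unsolved query."""
    parts = [f"{ex['input']}\nAnswer: {ex['output']}" for ex in solved]
    parts.append(f"{query_input}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_fewshot_prompt(examples[:2], examples[2]["input"])
print(prompt)
```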
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Licensing Information
Apache 2.0 |
bigscience/xP3megds | 2023-05-30T15:52:11.000Z | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"lan... | bigscience | null | null | null | 0 | 13 | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
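As a minimal, hypothetical sketch, such a record could be turned into a single training string by concatenating `inputs` and `targets`. The separator and EOS token below are assumptions for illustration, not necessarily what was used to train BLOOMZ/mT0.

```python
# Illustrative sketch: turn an xP3-style {"inputs", "targets"} record into
# one training string. Separator and EOS token are assumed, not canonical.
record = {
    "inputs": ("Sentence 1: Fue académico en literatura metafísica, teología y "
               "ciencias clásicas.\nSentence 2: Fue académico en literatura "
               "metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite "
               "Sentence 1 to Sentence 2? Yes or No?"),
    "targets": "Yes",
}

def format_example(record, eos="</s>"):
    # The target is appended after the prompt; during fine-tuning the loss
    # would typically be computed only on the target tokens.
    return record["inputs"] + " " + record["targets"] + eos

text = format_example(record)
print(text[-12:])
```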
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.34|
|bm|107056|0.11|265180|0.34|
|ak|108096|0.11|265071|0.34|
|eu|108112|0.11|269973|0.34|
|ca|110608|0.12|271191|0.34|
|fon|113072|0.12|265063|0.34|
|st|114080|0.12|265063|0.34|
|ki|115040|0.12|265180|0.34|
|tum|116032|0.12|265063|0.34|
|wo|122560|0.13|365063|0.46|
|ln|126304|0.13|365060|0.46|
|as|156256|0.16|265063|0.34|
|or|161472|0.17|265063|0.34|
|kn|165456|0.17|265063|0.34|
|ml|175040|0.18|265864|0.34|
|rn|192992|0.2|318189|0.4|
|nso|229712|0.24|915051|1.16|
|tn|235536|0.25|915054|1.16|
|lg|235936|0.25|915021|1.16|
|rw|249360|0.26|915043|1.16|
|ts|250256|0.26|915044|1.16|
|sn|252496|0.27|865056|1.1|
|xh|254672|0.27|915058|1.16|
|zu|263712|0.28|915061|1.16|
|ny|272128|0.29|915063|1.16|
|ig|325232|0.34|950097|1.2|
|yo|352784|0.37|918416|1.16|
|ne|393680|0.41|315754|0.4|
|pa|523248|0.55|339210|0.43|
|gu|560688|0.59|347499|0.44|
|sw|560896|0.59|1114455|1.41|
|mr|666240|0.7|417269|0.53|
|bn|832720|0.88|428843|0.54|
|ta|924496|0.97|410633|0.52|
|te|1332912|1.4|573364|0.73|
|ur|1918272|2.02|855756|1.08|
|vi|3101408|3.27|1667306|2.11|
|code|4330752|4.56|2707724|3.43|
|hi|4393696|4.63|1543441|1.96|
|zh|4589904|4.83|3560556|4.51|
|id|4606288|4.85|2627392|3.33|
|ar|4677264|4.93|2148955|2.72|
|fr|5546688|5.84|5055942|6.41|
|pt|6129584|6.46|3562772|4.52|
|es|7571808|7.98|5151349|6.53|
|en|37261104|39.25|31495184|39.93|
|total|94941936|100.0|78883588|100.0|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI & HumanEval)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
cjvt/ssj500k | 2022-12-09T08:58:50.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"task_ids:lemmatization",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categ... | cjvt | The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenisation,
sentence segmentation, morphosyntactic tagging, and lemmatisation. About half of the corpus is also manually annotated
with syntactic dependencies, named entities, and verbal multiword expressions. About a quarter of the corpus is also
annotated with semantic role labels. The morphosyntactic tags and syntactic dependencies are included both in the
JOS/MULTEXT-East framework, as well as in the framework of Universal Dependencies. | @InProceedings{krek2020ssj500k,
title = {The ssj500k Training Corpus for Slovene Language Processing},
author={Krek, Simon and Erjavec, Tomaž and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and Čibej, Jaka and Brank, Janez},
booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},
year={2020},
pages={24-33}
} | null | 0 | 13 | ---
annotations_creators:
- expert-generated
language_creators:
- found
- expert-generated
language:
- sl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
- 10K<n<100K
source_datasets: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
- lemmatization
- parsing
pretty_name: ssj500k
tags:
- semantic-role-labeling
- multiword-expression-detection
---
# Dataset Card for ssj500k
**Important**: there exists another HF implementation of the dataset ([classla/ssj500k](https://huggingface.co/datasets/classla/ssj500k)), but it seems to be more narrowly focused. **This implementation is designed for more general use** - the CLASSLA version seems to expose only the specific training/validation/test annotations used in the CLASSLA library, for only a subset of the data.
### Dataset Summary
The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenization, sentence segmentation, morphosyntactic tagging, and lemmatization. It is also partially annotated for the following tasks:
- named entity recognition (config `named_entity_recognition`)
- dependency parsing (*), Universal Dependencies style (config `dependency_parsing_ud`)
- dependency parsing, JOS/MULTEXT-East style (config `dependency_parsing_jos`)
- semantic role labeling (config `semantic_role_labeling`)
- multi-word expressions (config `multiword_expressions`)
If you want to load all the data along with their partial annotations, please use the config `all_data`.
\* _The UD dependency parsing labels are included here for completeness, but using the dataset [universal_dependencies](https://huggingface.co/datasets/universal_dependencies) should be preferred for dependency parsing applications to ensure you are using the most up-to-date data._
### Supported Tasks and Leaderboards
Tokenization, sentence segmentation, morphosyntactic tagging, lemmatization, named entity recognition, dependency parsing, semantic role labeling, multi-word expression detection.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset (using the config `all_data`):
```
{
'id_doc': 'ssj1',
'idx_par': 0,
'idx_sent': 0,
'id_words': ['ssj1.1.1.t1', 'ssj1.1.1.t2', 'ssj1.1.1.t3', 'ssj1.1.1.t4', 'ssj1.1.1.t5', 'ssj1.1.1.t6', 'ssj1.1.1.t7', 'ssj1.1.1.t8', 'ssj1.1.1.t9', 'ssj1.1.1.t10', 'ssj1.1.1.t11', 'ssj1.1.1.t12', 'ssj1.1.1.t13', 'ssj1.1.1.t14', 'ssj1.1.1.t15', 'ssj1.1.1.t16', 'ssj1.1.1.t17', 'ssj1.1.1.t18', 'ssj1.1.1.t19', 'ssj1.1.1.t20', 'ssj1.1.1.t21', 'ssj1.1.1.t22', 'ssj1.1.1.t23', 'ssj1.1.1.t24'],
'words': ['"', 'Tistega', 'večera', 'sem', 'preveč', 'popil', ',', 'zgodilo', 'se', 'je', 'mesec', 'dni', 'po', 'tem', ',', 'ko', 'sem', 'izvedel', ',', 'da', 'me', 'žena', 'vara', '.'],
'lemmas': ['"', 'tisti', 'večer', 'biti', 'preveč', 'popiti', ',', 'zgoditi', 'se', 'biti', 'mesec', 'dan', 'po', 'ta', ',', 'ko', 'biti', 'izvedeti', ',', 'da', 'jaz', 'žena', 'varati', '.'],
'msds': ['UPosTag=PUNCT', 'UPosTag=DET|Case=Gen|Gender=Masc|Number=Sing|PronType=Dem', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Sing', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=DET|PronType=Ind', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=VERB|Aspect=Perf|Gender=Neut|Number=Sing|VerbForm=Part', 'UPosTag=PRON|PronType=Prs|Reflex=Yes|Variant=Short', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=3|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=NOUN|Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Plur', 'UPosTag=ADP|Case=Loc', 'UPosTag=DET|Case=Loc|Gender=Neut|Number=Sing|PronType=Dem', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 'UPosTag=PRON|Case=Acc|Number=Sing|Person=1|PronType=Prs|Variant=Short', 'UPosTag=NOUN|Case=Nom|Gender=Fem|Number=Sing', 'UPosTag=VERB|Aspect=Imp|Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin', 'UPosTag=PUNCT'],
'has_ne_ann': True,
'has_ud_dep_ann': True,
'has_jos_dep_ann': True,
'has_srl_ann': True,
'has_mwe_ann': True,
'ne_tags': ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'],
'ud_dep_head': [5, 2, 5, 5, 5, -1, 7, 5, 7, 7, 7, 10, 13, 10, 17, 17, 17, 13, 22, 22, 22, 22, 17, 5],
'ud_dep_rel': ['punct', 'det', 'obl', 'aux', 'advmod', 'root', 'punct', 'parataxis', 'expl', 'aux', 'obl', 'nmod', 'case', 'nmod', 'punct', 'mark', 'aux', 'acl', 'punct', 'mark', 'obj', 'nsubj', 'ccomp', 'punct'],
'jos_dep_head': [-1, 2, 5, 5, 5, -1, -1, -1, 7, 7, 7, 10, 13, 10, -1, 17, 17, 13, -1, 22, 22, 22, 17, -1],
'jos_dep_rel': ['Root', 'Atr', 'AdvO', 'PPart', 'AdvM', 'Root', 'Root', 'Root', 'PPart', 'PPart', 'AdvO', 'Atr', 'Atr', 'Atr', 'Root', 'Conj', 'PPart', 'Atr', 'Root', 'Conj', 'Obj', 'Sb', 'Obj', 'Root'],
'srl_info': [
{'idx_arg': 2, 'idx_head': 5, 'role': 'TIME'},
{'idx_arg': 4, 'idx_head': 5, 'role': 'QUANT'},
{'idx_arg': 10, 'idx_head': 7, 'role': 'TIME'},
{'idx_arg': 20, 'idx_head': 22, 'role': 'PAT'},
{'idx_arg': 21, 'idx_head': 22, 'role': 'ACT'},
{'idx_arg': 22, 'idx_head': 17, 'role': 'RESLT'}
],
'mwe_info': [
{'type': 'IRV', 'word_indices': [7, 8]}
]
}
```
### Data Fields
The following attributes are present in the most general config (`all_data`). Please see below for attributes present in the specific configs.
- `id_doc`: a string containing the identifier of the document;
- `idx_par`: an int32 containing the consecutive number of the paragraph, which the current sentence is a part of;
- `idx_sent`: an int32 containing the consecutive number of the current sentence inside the current paragraph;
- `id_words`: a list of strings containing the identifiers of words - potentially redundant, helpful for connecting the dataset with external datasets like coref149;
- `words`: a list of strings containing the words in the current sentence;
- `lemmas`: a list of strings containing the lemmas in the current sentence;
- `msds`: a list of strings containing the morphosyntactic description of words in the current sentence;
- `has_ne_ann`: a bool indicating whether the current example has named entities annotated;
- `has_ud_dep_ann`: a bool indicating whether the current example has dependencies (in UD style) annotated;
- `has_jos_dep_ann`: a bool indicating whether the current example has dependencies (in JOS style) annotated;
- `has_srl_ann`: a bool indicating whether the current example has semantic roles annotated;
- `has_mwe_ann`: a bool indicating whether the current example has multi-word expressions annotated;
- `ne_tags`: a list of strings containing the named entity tags encoded using IOB2 - if `has_ne_ann=False` all tokens are annotated with `"N/A"`;
- `ud_dep_head`: a list of int32 containing the head index for each word (using UD guidelines) - the head index of the root word is `-1`; if `has_ud_dep_ann=False` all tokens are annotated with `-2`;
- `ud_dep_rel`: a list of strings containing the relation with the head for each word (using UD guidelines) - if `has_ud_dep_ann=False` all tokens are annotated with `"N/A"`;
- `jos_dep_head`: a list of int32 containing the head index for each word (using JOS guidelines) - the head index of the root word is `-1`; if `has_jos_dep_ann=False` all tokens are annotated with `-2`;
- `jos_dep_rel`: a list of strings containing the relation with the head for each word (using JOS guidelines) - if `has_jos_dep_ann=False` all tokens are annotated with `"N/A"`;
- `srl_info`: a list of dicts, each containing the index of the argument word, the index of the head (verb) word, and the semantic role - if `has_srl_ann=False` this list is empty;
- `mwe_info`: a list of dicts, each containing the word indices and the type of a multi-word expression.
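As a minimal sketch, the `srl_info` indices from the sample instance above can be resolved against `words` to produce readable (argument, head, role) triples:

```python
# Resolve srl_info word indices (taken from the sample instance above)
# into human-readable (argument word, head word, role) triples.
words = ['"', 'Tistega', 'večera', 'sem', 'preveč', 'popil', ',', 'zgodilo',
         'se', 'je', 'mesec', 'dni', 'po', 'tem', ',', 'ko', 'sem', 'izvedel',
         ',', 'da', 'me', 'žena', 'vara', '.']
srl_info = [
    {'idx_arg': 2, 'idx_head': 5, 'role': 'TIME'},
    {'idx_arg': 4, 'idx_head': 5, 'role': 'QUANT'},
    {'idx_arg': 10, 'idx_head': 7, 'role': 'TIME'},
    {'idx_arg': 20, 'idx_head': 22, 'role': 'PAT'},
    {'idx_arg': 21, 'idx_head': 22, 'role': 'ACT'},
    {'idx_arg': 22, 'idx_head': 17, 'role': 'RESLT'},
]

def resolve_srl(words, srl_info):
    """Map word indices in each SRL annotation to the actual word forms."""
    return [(words[e['idx_arg']], words[e['idx_head']], e['role'])
            for e in srl_info]

for arg, head, role in resolve_srl(words, srl_info):
    print(f"{role}: {arg} -> {head}")
```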
#### Data fields in 'named_entity_recognition'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ne_tags']
```
#### Data fields in 'dependency_parsing_ud'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ud_dep_head', 'ud_dep_rel']
```
#### Data fields in 'dependency_parsing_jos'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'jos_dep_head', 'jos_dep_rel']
```
#### Data fields in 'semantic_role_labeling'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'srl_info']
```
#### Data fields in 'multiword_expressions'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'mwe_info']
```
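As an illustration of how these fields fit together, here is a minimal sketch of decoding the IOB2 `ne_tags` into entity spans. The example record is hand-made for illustration, not real corpus data:

```python
def decode_iob2(words, ne_tags):
    """Collect (entity_text, entity_type) spans from IOB2 tags."""
    spans, current_words, current_type = [], [], None
    for word, tag in zip(words, ne_tags):
        if tag.startswith("B-"):  # a new entity begins
            if current_words:
                spans.append((" ".join(current_words), current_type))
            current_words, current_type = [word], tag[2:]
        elif tag.startswith("I-") and current_words:  # entity continues
            current_words.append(word)
        else:  # "O", or "N/A" when has_ne_ann=False
            if current_words:
                spans.append((" ".join(current_words), current_type))
            current_words, current_type = [], None
    if current_words:
        spans.append((" ".join(current_words), current_type))
    return spans

# Toy sentence shaped like an ssj500k example (invented, for illustration only).
example = {
    "words": ["Janez", "Novak", "lives", "in", "Ljubljana", "."],
    "ne_tags": ["B-PER", "I-PER", "O", "O", "B-LOC", "O"],
    "has_ne_ann": True,
}
print(decode_iob2(example["words"], example["ne_tags"]))
# → [('Janez Novak', 'PER'), ('Ljubljana', 'LOC')]
```

Because unannotated tokens carry `"N/A"` rather than `"O"`, the decoder above simply treats them as outside any entity.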
## Additional Information
### Dataset Curators
Simon Krek; et al. (please see http://hdl.handle.net/11356/1434 for the full list)
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
The paper describing the dataset:
```
@InProceedings{krek2020ssj500k,
title = {The ssj500k Training Corpus for Slovene Language Processing},
author={Krek, Simon and Erjavec, Tomaž and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and Čibej, Jaka and Brank, Janez},
booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},
year={2020},
pages={24-33}
}
```
The resource itself:
```
@misc{krek2021clarinssj500k,
title = {Training corpus ssj500k 2.3},
author = {Krek, Simon and Dobrovoljc, Kaja and Erjavec, Toma{\v z} and Mo{\v z}e, Sara and Ledinek, Nina and Holz, Nanika and Zupan, Katja and Gantar, Polona and Kuzman, Taja and {\v C}ibej, Jaka and Arhar Holdt, {\v S}pela and Kav{\v c}i{\v c}, Teja and {\v S}krjanec, Iza and Marko, Dafne and Jezer{\v s}ek, Lucija and Zajc, Anja},
url = {http://hdl.handle.net/11356/1434},
year = {2021} }
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset. |
andrewkroening/538-NBA-Historical-Raptor | 2022-11-06T22:14:56.000Z | [
"license:cc",
"region:us"
] | andrewkroening | null | null | null | 0 | 13 | ---
license: cc
---
## Dataset Overview
### Intro
This dataset was downloaded from the good folks at fivethirtyeight. You can find the original (or in the future, updated) versions of this and several similar datasets at [this GitHub link.](https://github.com/fivethirtyeight/data/tree/master/nba-raptor)
### Data layout
Here are the columns in this dataset, which contains data on every NBA player, broken out by season, since the 1976 NBA-ABA merger:
Column | Description
-------|---------------
`player_name` | Player name
`player_id` | Basketball-Reference.com player ID
`season` | Season
`season_type` | Regular season (RS) or playoff (PO)
`team` | Basketball-Reference ID of team
`poss` | Possessions played
`mp` | Minutes played
`raptor_box_offense` | Points above average per 100 possessions added by player on offense, based only on box score estimate
`raptor_box_defense` | Points above average per 100 possessions added by player on defense, based only on box score estimate
`raptor_box_total` | Points above average per 100 possessions added by player, based only on box score estimate
`raptor_onoff_offense` | Points above average per 100 possessions added by player on offense, based only on plus-minus data
`raptor_onoff_defense` | Points above average per 100 possessions added by player on defense, based only on plus-minus data
`raptor_onoff_total` | Points above average per 100 possessions added by player, based only on plus-minus data
`raptor_offense` | Points above average per 100 possessions added by player on offense, using both box and on-off components
`raptor_defense` | Points above average per 100 possessions added by player on defense, using both box and on-off components
`raptor_total` | Points above average per 100 possessions added by player on both offense and defense, using both box and on-off components
`war_total` | Wins Above Replacement between regular season and playoffs
`war_reg_season` | Wins Above Replacement for regular season
`war_playoffs` | Wins Above Replacement for playoffs
`predator_offense` | Predictive points above average per 100 possessions added by player on offense
`predator_defense` | Predictive points above average per 100 possessions added by player on defense
`predator_total` | Predictive points above average per 100 possessions added by player on both offense and defense
`pace_impact` | Player impact on team possessions per 48 minutes
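With columns like these, a common first step is aggregating WAR across seasons. Here is a small sketch using plain Python dicts shaped like rows of this dataset; the player names and numbers are invented for illustration:

```python
from collections import defaultdict

def career_war(rows):
    """Sum regular-season WAR per player across all seasons."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["player_name"]] += row["war_reg_season"]
    # Highest career WAR first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

rows = [  # invented numbers, matching the column names above
    {"player_name": "Player A", "season": 2020, "war_reg_season": 5.0},
    {"player_name": "Player A", "season": 2021, "war_reg_season": 7.5},
    {"player_name": "Player B", "season": 2021, "war_reg_season": 6.0},
]
print(career_war(rows))
# → [('Player A', 12.5), ('Player B', 6.0)]
```

The same pattern works for `war_playoffs` or any of the per-100-possession columns, though rate stats should be weighted by `poss` rather than summed.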
### More information
This dataset was put together for Hugging Face by this guy: [Andrew Kroening](https://github.com/andrewkroening)
He was building some kind of a silly tool using this dataset. It's an NBA WAR Predictor tool, and you can find the Gradio interface [here.](https://huggingface.co/spaces/andrewkroening/nba-war-predictor) The GitHub repo can be found [here.](https://github.com/andrewkroening/nba-war-predictor-tool) |
drt/kqa_pro | 2022-10-20T19:35:20.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"k... | drt | A large-scale, diverse, challenging dataset of complex question answering over knowledge base. | @inproceedings{KQAPro,
title={{KQA P}ro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base},
author={Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang},
booktitle={ACL'22},
year={2022}
} | null | 2 | 13 | ---
annotations_creators:
- machine-generated
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: KQA-Pro
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- knowledge graph
- freebase
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for KQA Pro
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Configs](#data-configs)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [How to run SPARQLs and programs](#how-to-run-sparqls-and-programs)
- [Knowledge Graph File](#knowledge-graph-file)
- [How to Submit to Leaderboard](#how-to-submit-results-of-test-set)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://thukeg.gitee.io/kqa-pro/
- **Repository:** https://github.com/shijx12/KQAPro_Baselines
- **Paper:** [KQA Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base](https://aclanthology.org/2022.acl-long.422/)
- **Leaderboard:** http://thukeg.gitee.io/kqa-pro/leaderboard.html
- **Point of Contact:** shijx12 at gmail dot com
### Dataset Summary
KQA Pro is a large-scale dataset for complex question answering over knowledge bases. The questions are very diverse and challenging, requiring multiple reasoning capabilities including compositional reasoning, multi-hop reasoning, quantitative comparison, and set operations. Strong supervision in the form of a SPARQL query and a program is provided for each question.
### Supported Tasks and Leaderboards
It supports knowledge-graph-based question answering. Specifically, it provides a SPARQL query and a *program* for each question.
### Languages
English
## Dataset Structure
**train.json/val.json**
```
[
{
'question': str,
'sparql': str, # executable in our virtuoso engine
'program':
[
{
'function': str, # function name
'dependencies': [int], # functional inputs, representing indices of the preceding functions
'inputs': [str], # textual inputs
}
],
'choices': [str], # 10 answer choices
'answer': str, # golden answer
}
]
```
**test.json**
```
[
{
'question': str,
'choices': [str], # 10 answer choices
}
]
```
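The `program` field is a linearized function composition: each step names a function, its textual inputs, and the indices of earlier steps it depends on. A minimal sketch of rendering such a program as a nested call string, using a hand-written toy program (not taken from the dataset):

```python
def render_program(program):
    """Render the final step of a KQA-Pro-style program as a nested call string."""
    def render(idx):
        step = program[idx]
        args = [render(dep) for dep in step["dependencies"]]
        args += [repr(x) for x in step["inputs"]]
        return "{}({})".format(step["function"], ", ".join(args))
    return render(len(program) - 1)  # the last step produces the answer

toy_program = [  # invented example in the schema shown above
    {"function": "Find", "dependencies": [], "inputs": ["Yao Ming"]},
    {"function": "QueryAttr", "dependencies": [0], "inputs": ["height"]},
]
print(render_program(toy_program))
# → QueryAttr(Find('Yao Ming'), 'height')
```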
### Data Configs
This dataset has two configs, `train_val` and `test`, because they have different available fields. Please specify the config like this: `load_dataset('drt/kqa_pro', 'train_val')`.
### Data Splits
train, val, test
## Additional Information
### Knowledge Graph File
You can find the knowledge graph file `kb.json` in the original GitHub repository. It has the following format:
```json
{
'concepts':
{
'<id>':
{
'name': str,
'instanceOf': ['<id>', '<id>'], # ids of parent concept
}
},
'entities': # excluding concepts
{
'<id>':
{
'name': str,
'instanceOf': ['<id>', '<id>'], # ids of parent concept
'attributes':
[
{
'key': str, # attribute key
'value': # attribute value
{
'type': 'string'/'quantity'/'date'/'year',
'value': float/int/str, # float or int for quantity, int for year, 'yyyy/mm/dd' for date
'unit': str, # for quantity
},
'qualifiers':
{
'<qk>': # qualifier key, one key may have multiple corresponding qualifier values
[
{
'type': 'string'/'quantity'/'date'/'year',
'value': float/int/str,
'unit': str,
}, # the format of qualifier value is similar to attribute value
]
}
},
]
'relations':
[
{
'predicate': str,
'object': '<id>', # NOTE: it may be a concept id
'direction': 'forward'/'backward',
'qualifiers':
{
'<qk>': # qualifier key, one key may have multiple corresponding qualifier values
[
{
'type': 'string'/'quantity'/'date'/'year',
'value': float/int/str,
'unit': str,
}, # the format of qualifier value is similar to attribute value
]
}
},
]
}
}
}
```
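A small sketch of walking this structure, here listing the quantity-typed attributes of each entity. The miniature `kb` dict below is invented but follows the documented format:

```python
def quantity_attributes(kb):
    """Return (entity_name, key, value, unit) for every quantity attribute."""
    out = []
    for ent in kb["entities"].values():
        for attr in ent["attributes"]:
            value = attr["value"]
            if value["type"] == "quantity":
                out.append((ent["name"], attr["key"], value["value"], value["unit"]))
    return out

kb = {  # invented miniature KB in the documented shape
    "concepts": {},
    "entities": {
        "Q1": {
            "name": "Yao Ming",
            "instanceOf": [],
            "attributes": [
                {"key": "height",
                 "value": {"type": "quantity", "value": 229, "unit": "centimetre"},
                 "qualifiers": {}},
            ],
            "relations": [],
        }
    },
}
print(quantity_attributes(kb))
# → [('Yao Ming', 'height', 229, 'centimetre')]
```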
### How to run SPARQLs and programs
We implement multiple baselines in our [codebase](https://github.com/shijx12/KQAPro_Baselines), which includes a supervised SPARQL parser and program parser.
In the SPARQL parser, we implement a query engine based on [Virtuoso](https://github.com/openlink/virtuoso-opensource.git).
You can install the engine based on our [instructions](https://github.com/shijx12/KQAPro_Baselines/blob/master/SPARQL/README.md), and then feed your predicted SPARQL to get the answer.
In the program parser, we implement a rule-based program executor, which receives a predicted program and returns the answer.
Detailed introductions of our functions can be found in our [paper](https://arxiv.org/abs/2007.03875).
### How to submit results of test set
You need to predict answers for all questions of test set and write them in a text file **in order**, one per line.
Here is an example:
```
Tron: Legacy
Palm Beach County
1937-03-01
The Queen
...
```
Then you need to send the prediction file to us by email at <caosl19@mails.tsinghua.edu.cn>; we will reply to you with the performance as soon as possible.
To appear on the leaderboard, you also need to provide the following information:
- model name
- affiliation
- open-ended or multiple-choice
- whether use the supervision of SPARQL in your model or not
- whether use the supervision of program in your model or not
- single model or ensemble model
- (optional) paper link
- (optional) code link
### Licensing Information
MIT License
### Citation Information
If you find our dataset is helpful in your work, please cite us by
```
@inproceedings{KQAPro,
title={{KQA P}ro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base},
author={Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang},
booktitle={ACL'22},
year={2022}
}
```
### Contributions
Thanks to [@happen2me](https://github.com/happen2me) for adding this dataset.
|
Rosenberg/nyt | 2022-10-23T13:06:28.000Z | [
"region:us"
] | Rosenberg | null | null | null | 0 | 13 | Entry not found |
arbml/TSAC | 2022-10-24T16:30:35.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 13 | Entry not found |
nhernandez99/sroie_dataset | 2022-11-02T18:24:32.000Z | [
"region:us"
] | nhernandez99 | null | null | null | 0 | 13 | Entry not found |
bigbio/sciq | 2022-12-22T15:46:48.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-3.0",
"region:us"
] | bigbio | The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For most questions, an additional paragraph with supporting evidence for the correct answer is provided. | @inproceedings{welbl-etal-2017-crowdsourcing,
title = "Crowdsourcing Multiple Choice Science Questions",
author = "Welbl, Johannes and
Liu, Nelson F. and
Gardner, Matt",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-4413",
doi = "10.18653/v1/W17-4413",
pages = "94--106",
} | null | 1 | 13 |
---
language:
- en
bigbio_language:
- English
license: cc-by-nc-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_NC_3p0
pretty_name: SciQ
homepage: https://allenai.org/data/sciq
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for SciQ
## Dataset Description
- **Homepage:** https://allenai.org/data/sciq
- **Pubmed:** False
- **Public:** True
- **Tasks:** QA
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For most questions, an additional paragraph with supporting evidence for the correct answer is provided.
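A small sketch of turning one record into an A/B/C/D prompt. The field names (`question`, `choices`, `answer`) follow the generic BigBIO QA layout and the record itself is invented, so check the actual schema before relying on them:

```python
def format_mcq(question, choices, answer):
    """Build a lettered multiple-choice prompt and return it with the correct letter."""
    letters = "ABCD"
    lines = [question]
    lines += ["{}. {}".format(letters[i], c) for i, c in enumerate(choices)]
    correct = letters[choices.index(answer)]
    return "\n".join(lines), correct

prompt, correct = format_mcq(
    "What is the powerhouse of the cell?",  # invented example record
    ["ribosome", "mitochondrion", "nucleus", "golgi apparatus"],
    "mitochondrion",
)
print(correct)  # → B
```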
## Citation Information
```
@inproceedings{welbl-etal-2017-crowdsourcing,
title = "Crowdsourcing Multiple Choice Science Questions",
author = "Welbl, Johannes and
Liu, Nelson F. and
Gardner, Matt",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-4413",
doi = "10.18653/v1/W17-4413",
pages = "94--106",
}
```
|
keremberke/clash-of-clans-object-detection | 2023-01-29T12:38:03.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Gaming",
"region:us"
] | keremberke | null | @misc{ clash-of-clans-vop4y_dataset,
title = { Clash of Clans Dataset },
type = { Open Source Dataset },
author = { Find This Base },
howpublished = { \\url{ https://universe.roboflow.com/find-this-base/clash-of-clans-vop4y } },
url = { https://universe.roboflow.com/find-this-base/clash-of-clans-vop4y },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { feb },
note = { visited on 2023-01-18 },
} | null | 2 | 13 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Gaming
---
<div align="center">
<img width="640" alt="keremberke/clash-of-clans-object-detection" src="https://huggingface.co/datasets/keremberke/clash-of-clans-object-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['ad', 'airsweeper', 'bombtower', 'canon', 'clancastle', 'eagle', 'inferno', 'kingpad', 'mortar', 'queenpad', 'rcpad', 'scattershot', 'th13', 'wardenpad', 'wizztower', 'xbow']
```
### Number of Images
```json
{'train': 88, 'test': 13, 'valid': 24}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/clash-of-clans-object-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/find-this-base/clash-of-clans-vop4y/dataset/5](https://universe.roboflow.com/find-this-base/clash-of-clans-vop4y/dataset/5?ref=roboflow2huggingface)
### Citation
```
@misc{ clash-of-clans-vop4y_dataset,
title = { Clash of Clans Dataset },
type = { Open Source Dataset },
author = { Find This Base },
howpublished = { \\url{ https://universe.roboflow.com/find-this-base/clash-of-clans-vop4y } },
url = { https://universe.roboflow.com/find-this-base/clash-of-clans-vop4y },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { feb },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 30, 2022 at 4:31 PM GMT
It includes 125 images.
CoC are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 1920x1920 (Fit (black edges))
No image augmentation techniques were applied.
|
jordiclive/scored_summarization_datasets | 2023-02-05T16:14:10.000Z | [
"region:us"
] | jordiclive | null | null | null | 2 | 13 | # Dataset Card for "Scored-Summarization-datasets"
A collection of text summarization datasets geared towards training a multi-purpose text summarizer.
Each dataset is a parquet file with the following features.
#### default
- `text`: a `string` feature. The `source` document
- `summary`: a `string` feature. The summary of the document
- `provenance`: a `string` feature. Information about the sub dataset.
- `t5_text_token_count`: a `int64` feature. The number of tokens the text is encoded in.
- `t5_summary_token_count `: a `int64` feature. The number of tokens the summary is encoded in.
- `contriever_cos`: a `float64` feature. The Cosine Similarity of the Contriever text embedding and Contriever summary embedding.
### Sub-datasets
- billsum
- cnn_dailymail/3.0.0
- multixscience
- newsroom
- samsum
- scitldr/AIC
- tldr-challenge
- wikihow
- xsum
Information about the Contriever model can be found here: https://github.com/facebookresearch/contriever.
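Given these scores, a natural use is filtering for examples that fit a model's context window and whose summaries stay close to their source. A minimal sketch over invented rows shaped like the features above; the threshold values are illustrative assumptions, not recommendations:

```python
def keep(example, max_text_tokens=1024, min_cos=0.5):
    """Keep examples that fit the encoder and have reasonably faithful summaries."""
    return (example["t5_text_token_count"] <= max_text_tokens
            and example["contriever_cos"] >= min_cos)

records = [  # invented rows matching the features above
    {"provenance": "xsum", "t5_text_token_count": 800, "contriever_cos": 0.71},
    {"provenance": "billsum", "t5_text_token_count": 4000, "contriever_cos": 0.90},
    {"provenance": "samsum", "t5_text_token_count": 300, "contriever_cos": 0.35},
]
kept = [r["provenance"] for r in records if keep(r)]
print(kept)  # → ['xsum']
```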
|
relbert/semeval2012_relational_similarity | 2023-02-02T15:38:26.000Z | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | relbert | [SemEVAL 2012 task 2: Relational Similarity](https://aclanthology.org/S12-1047/) | @inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
} | null | 1 | 13 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: SemEval2012 relational similarity dataset
---
# Dataset Card for "relbert/semeval2012_relational_similarity"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/S12-1047/](https://aclanthology.org/S12-1047/)
- **Dataset:** SemEval2012 relational similarity dataset
### Dataset Summary
Relational similarity dataset from [SemEval2012 task 2](https://aclanthology.org/S12-1047/), compiled to fine-tune the [RelBERT](https://github.com/asahi417/relbert) model.
The dataset contains lists of positive and negative word pairs for 89 pre-defined relations.
The relation types are constructed on top of the following 10 parent relation types.
```shell
{
1: "Class Inclusion", # Hypernym
2: "Part-Whole", # Meronym, Substance Meronym
    3: "Similar", # Synonym, Co-hyponym
4: "Contrast", # Antonym
5: "Attribute", # Attribute, Event
6: "Non Attribute",
7: "Case Relation",
8: "Cause-Purpose",
9: "Space-Time",
10: "Representation"
}
```
Each of the parent relation is further grouped into child relation types where the definition can be found [here](https://drive.google.com/file/d/0BzcZKTSeYL8VenY0QkVpZVpxYnc/view?resourcekey=0-ZP-UARfJj39PcLroibHPHw).
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```shell
{
'relation_type': '8d',
'positives': [ [ "breathe", "live" ], [ "study", "learn" ], [ "speak", "communicate" ], ... ]
'negatives': [ [ "starving", "hungry" ], [ "clean", "bathe" ], [ "hungry", "starving" ], ... ]
}
```
### Data Splits
|train|validation|
|----:|---------:|
| 79 | 79 |
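For fine-tuning, each record is typically flattened into labeled word pairs. A minimal sketch over a toy record in the format shown above (the pairs are abbreviated for illustration):

```python
def to_pairs(record):
    """Flatten one relation record into (head, tail, relation, label) tuples."""
    rel = record["relation_type"]
    pairs = [(h, t, rel, 1) for h, t in record["positives"]]
    pairs += [(h, t, rel, 0) for h, t in record["negatives"]]
    return pairs

record = {  # abbreviated toy record in the shape shown above
    "relation_type": "8d",
    "positives": [["breathe", "live"], ["study", "learn"]],
    "negatives": [["clean", "bathe"]],
}
print(to_pairs(record))
# → [('breathe', 'live', '8d', 1), ('study', 'learn', '8d', 1), ('clean', 'bathe', '8d', 0)]
```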
## Citation Information
```
@inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
}
``` |
gfhayworth/wiki_mini_embed | 2023-01-28T23:40:40.000Z | [
"region:us"
] | gfhayworth | null | null | null | 0 | 13 | Simple English Wikipedia it has only about 170k articles. We split these articles into paragraphs. wikipedia_filepath = 'simplewiki-2020-11-01.jsonl.gz'
if not os.path.exists(wikipedia_filepath): util.http_get('http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz', wikipedia_filepath)
embedded into vectors using SentenceTransformer('multi-qa-MiniLM-L6-cos-v1') |
Malisha/funsd | 2023-01-29T06:00:41.000Z | [
"region:us"
] | Malisha | null | null | null | 0 | 13 | Entry not found |
hugfaceguy0001/stanford_plato | 2023-02-10T14:03:54.000Z | [
"region:us"
] | hugfaceguy0001 | null | null | null | 3 | 13 | ---
dataset_info:
features:
- name: shorturl
dtype: string
- name: title
dtype: string
- name: pubinfo
dtype: string
- name: preamble
sequence: string
- name: toc
list:
- name: content_title
dtype: string
- name: sub_toc
sequence: string
- name: main_text
list:
- name: main_content
sequence: string
- name: section_title
dtype: string
- name: subsections
list:
- name: content
sequence: string
- name: subsection_title
dtype: string
- name: bibliography
sequence: string
- name: related_entries
list:
- name: href
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 160405734
num_examples: 1776
download_size: 90000475
dataset_size: 160405734
---
# Dataset Card for "stanford_plato"
## Description
This is a collection of articles in the Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/index.html).
This dataset includes 1776 articles, each explaining one philosophical term, person, or topic. It has 8 features:
- shorturl: The shorturl for the article. For example, the shorturl 'abduction' correspond to the page https://plato.stanford.edu/entries/abduction/
- title: The title of the article.
- pubinfo: The publication information.
- **preamble**: The preface text of the article. The data is a list; each item is a paragraph. I chose not to break the paragraph structure. Of course, you can merge them with, for example, `''.join(data['preamble'])`
- toc: Table of contents. Also represented as list. Each item is a dictionary, the 'content_title' is the main content title, and the 'sub_toc' is a list of subcontent titles.
- **main_text**: The main text of the article.
The data is also a list; each item represents a section of the article.
Each item is a dictionary: 'section_title' is the title of the section, 'main_content' is a list of the paragraphs before any subsections,
and 'subsections' is a list of subsections, each of which is also a dictionary with its own title 'subsection_title' and list of paragraphs 'content'.
- bibliography: list of bibliography.
- related_entries: list of entries related to the current entry.
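Given the schema above, one common task is flattening an article back into plain text. A sketch using a tiny invented article in the same shape:

```python
def flatten_article(article):
    """Join preamble, section titles, and (sub)section paragraphs into one string."""
    parts = list(article["preamble"])
    for section in article["main_text"]:
        parts.append(section["section_title"])
        parts.extend(section["main_content"])
        for sub in section["subsections"]:
            parts.append(sub["subsection_title"])
            parts.extend(sub["content"])
    return "\n\n".join(parts)

article = {  # invented miniature example following the documented fields
    "preamble": ["Abduction is a form of inference."],
    "main_text": [
        {
            "section_title": "1. History",
            "main_content": ["Peirce introduced the term."],
            "subsections": [
                {"subsection_title": "1.1 Early uses", "content": ["Earlier roots exist."]},
            ],
        }
    ],
}
print(flatten_article(article))
```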
## Copyright and license
See the information at the official website: https://plato.stanford.edu/info.html#c
This is not an official release and may be deleted later if it violates copyright. Responsibility for not abusing the data lies with the user.
|
TobiTob/CityLearn | 2023-06-27T11:14:53.000Z | [
"region:us"
] | TobiTob | The dataset consists of tuples of (observations, actions, rewards, dones) sampled by agents
interacting with the CityLearn 2022 Phase 1 environment (only first 5 buildings) | null | null | 1 | 13 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset CityLearn
This dataset is used to train a decision Transformer for the CityLearn 2022 environment https://www.aicrowd.com/challenges/neurips-2022-citylearn-challenge.
You can load data from this dataset via:

```python
datasets.load_dataset('TobiTob/CityLearn', 'data_name')
```

A short description of all data sets can be found in the file `CityLearn.py` |
ecoue/nordmann2023 | 2023-02-21T23:11:15.000Z | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:1M<n<10M",
"language:de",
"language:en",
"license:unknown",
"europarl",
"newscommentary",
"wikititles",
"ecb",
"rapid",
"eesc",
"ema",
"europat",
"books",
"ted2020",
"qed",
"eubookshop",
"doi:10.57967/... | ecoue | null | null | null | 1 | 13 | ---
annotations_creators: []
language:
- de
- en
language_creators: []
license:
- unknown
multilinguality:
- translation
pretty_name: nordmann2023
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- europarl
- newscommentary
- wikititles
- ecb
- rapid
- eesc
- ema
- europat
- books
- ted2020
- qed
- eubookshop
task_categories:
- translation
task_ids: []
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- de
- en
config_name: balanced
splits:
- name: train
num_bytes: 1539472445
num_examples: 5656659
- name: validation
num_bytes: 706611
num_examples: 2754
- name: test
num_bytes: 411077
num_examples: 1831
download_size: 4076594396
dataset_size: 1540590133
---
|
vietgpt/opus100_envi | 2023-07-03T17:56:58.000Z | [
"task_categories:translation",
"size_categories:1M<n<10M",
"language:en",
"language:vi",
"LM",
"region:us"
] | vietgpt | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: en
dtype: string
- name: vi
dtype: string
splits:
- name: test
num_bytes: 192744
num_examples: 2000
- name: train
num_bytes: 82614470
num_examples: 1000000
- name: validation
num_bytes: 194721
num_examples: 2000
download_size: 59201490
dataset_size: 83001935
task_categories:
- translation
language:
- en
- vi
tags:
- LM
size_categories:
- 1M<n<10M
---
# Opus100
- Source: https://huggingface.co/datasets/opus100
- Num examples:
- 1,000,000 (train)
- 2,000 (validation)
  - 2,000 (test)
- Languages: English, Vietnamese
```python
from datasets import load_dataset
load_dataset("vietgpt/opus100_envi")
```
- Format for Translation task
```python
import random

def preprocess(
sample,
instruction_key="### Instruction:",
input_key="Input:",
response_key="<|endofprompt|>",
end_key="<|endoftext|>",
en2vi=True,
):
if en2vi:
if random.random() < 0.5:
instruction = "Translate the following sentences from English into Vietnamese."
else:
instruction = "Dịch các câu sau từ tiếng Anh sang tiếng Việt."
input = sample['en'].strip()
response = sample['vi'].strip()
else:
if random.random() < 0.5:
instruction = "Translate the following sentences from Vietnamese into English."
else:
instruction = "Dịch các câu sau từ tiếng Việt sang tiếng Anh."
input = sample['vi'].strip()
response = sample['en'].strip()
return {'text': """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
{instruction_key}
{instruction}
{input_key}
{input}
{response_key}
{response}
{end_key}""".format(
instruction_key=instruction_key,
instruction=instruction,
input_key=input_key,
input=input,
response_key=response_key,
response=response,
end_key=end_key,
)}
"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Dịch các câu sau từ tiếng Anh sang tiếng Việt.
Input:
Toast falls jelly-side down, children hit tables and people get hurt.
<|endofprompt|>
Bánh mì nướng rơi đông lại, trẻ con va vào bàn và con người bị thương.
<|endoftext|>
"""
``` |
undertheseanlp/UTS_WTK | 2023-07-26T14:09:20.000Z | [
"task_categories:token-classification",
"language:vi",
"license:apache-2.0",
"region:us"
] | undertheseanlp | UTS_WTK | \ | null | 0 | 13 | ---
license: apache-2.0
language: vi
task_categories:
- token-classification
---
|
Ddream-ai/InsuranceCorpus | 2023-03-04T02:07:47.000Z | [
"license:mit",
"region:us"
] | Ddream-ai | null | null | null | 4 | 13 | ---
license: mit
dataset_info:
features:
- name: 咨询
dtype: string
- name: 回复
dtype: string
splits:
- name: train
num_bytes: 3612350
num_examples: 3599
- name: validation
num_bytes: 186138
num_examples: 189
download_size: 2267366
dataset_size: 3798488
---
|
bbaaaa/iwslt14-de-en | 2023-04-04T02:05:40.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:translation",
"source_datasets:original",
"language:de",
"language:en",
"license:cc-by-nc-nd-4.0",
"region:us"
] | bbaaaa | The IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system across all directions including English, German, Dutch, Italian and Romanian. As unofficial task, conventional bilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean. | @inproceedings{cettolo-etal-2017-overview,
title = "Overview of the {IWSLT} 2017 Evaluation Campaign",
author = {Cettolo, Mauro and
Federico, Marcello and
Bentivogli, Luisa and
Niehues, Jan and
St{\\"u}ker, Sebastian and
Sudoh, Katsuhito and
Yoshino, Koichiro and
Federmann, Christian},
booktitle = "Proceedings of the 14th International Conference on Spoken Language Translation",
month = dec # " 14-15",
year = "2017",
address = "Tokyo, Japan",
publisher = "International Workshop on Spoken Language Translation",
url = "https://aclanthology.org/2017.iwslt-1.1",
pages = "2--14",
} | null | 0 | 13 | ---
annotations_creators:
- crowdsourced
language:
- de
- en
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: IWSLT 2014
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: iwslt-2014
---
# Dataset Card for IWSLT 2014
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2014](https://sites.google.com/site/iwsltevaluation2014)
```yaml
dataset_info:
- config_name: de-en
  features:
  - name: translation
    languages:
    - de
    - en
  splits:
  - name: train
    num_examples: 171721
  - name: test
    num_examples: 4698
  - name: validation
    num_examples: 887
```
|
larrylawl/multilexnorm | 2023-05-05T08:17:00.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"language:da",
"language:de",
"language:es",
"language:hr",
"language:it",
"language:nl",
"language:sl",
"language:sr",
"language:tr",
"language:id",
"license:cc-by-4.0",
"region:us"
] | larrylawl | For this task, participants are asked to develop a system that performs lexical normalization: the conversion of non-canonical texts to their canonical equivalent form. In particular, this task includes data from 12 languages. | null | null | 0 | 13 | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
- da
- de
- es
- hr
- it
- nl
- sl
- sr
- tr
- id
size_categories:
- 100K<n<1M
---
# Dataset Card Creation Guide
## Dataset Description
- **Homepage:** [http://noisy-text.github.io/2021/multi-lexnorm.html]()
- **Paper:** [https://aclanthology.org/2021.wnut-1.55/]()
### Dataset Summary
This is the Hugging Face version of the MultiLexNorm dataset.
I'm not affiliated with the creators, I'm just releasing the files in an easier-to-access format after processing.
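Lexical normalization maps each raw token to its canonical form. The shared task's intrinsic metric is Error Reduction Rate (ERR), accuracy normalized against the leave-as-is baseline; the sketch below is a simplified illustration (token-aligned sequences only, no handling of 1-to-N replacements), not the official evaluator:

```python
def error_reduction_rate(raw, gold, pred):
    """ERR: the fraction of the leave-as-is baseline's errors the system removes.

    Simplified sketch: assumes token-aligned sequences of equal length.
    """
    n = len(gold)
    acc_baseline = sum(r == g for r, g in zip(raw, gold)) / n
    acc_system = sum(p == g for p, g in zip(pred, gold)) / n
    return (acc_system - acc_baseline) / (1 - acc_baseline)

raw = ["u", "r", "soo", "great"]
gold = ["you", "are", "so", "great"]
pred = ["you", "r", "so", "great"]  # fixed 2 of the 3 noisy tokens
print(error_reduction_rate(raw, gold, pred))  # 2/3
```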
### Citation Information
```
@inproceedings{van-der-goot-etal-2021-multilexnorm,
title = "{M}ulti{L}ex{N}orm: A Shared Task on Multilingual Lexical Normalization",
author = {van der Goot, Rob and
Ramponi, Alan and
Zubiaga, Arkaitz and
Plank, Barbara and
Muller, Benjamin and
San Vicente Roncal, I{\~n}aki and
Ljube{\v{s}}i{\'c}, Nikola and
{\c{C}}etino{\u{g}}lu, {\"O}zlem and
Mahendra, Rahmad and
{\c{C}}olako{\u{g}}lu, Talha and
Baldwin, Timothy and
Caselli, Tommaso and
Sidorenko, Wladimir},
booktitle = "Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.wnut-1.55",
doi = "10.18653/v1/2021.wnut-1.55",
pages = "493--509",
abstract = "Lexical normalization is the task of transforming an utterance into its standardized form. This task is beneficial for downstream analysis, as it provides a way to harmonize (often spontaneous) linguistic variation. Such variation is typical for social media on which information is shared in a multitude of ways, including diverse languages and code-switching. Since the seminal work of Han and Baldwin (2011) a decade ago, lexical normalization has attracted attention in English and multiple other languages. However, there exists a lack of a common benchmark for comparison of systems across languages with a homogeneous data and evaluation setup. The MultiLexNorm shared task sets out to fill this gap. We provide the largest publicly available multilingual lexical normalization benchmark including 13 language variants. We propose a homogenized evaluation setup with both intrinsic and extrinsic evaluation. As extrinsic evaluation, we use dependency parsing and part-of-speech tagging with adapted evaluation metrics (a-LAS, a-UAS, and a-POS) to account for alignment discrepancies. The shared task hosted at W-NUT 2021 attracted 9 participants and 18 submissions. The results show that neural normalization systems outperform the previous state-of-the-art system by a large margin. Downstream parsing and part-of-speech tagging performance is positively affected but to varying degrees, with improvements of up to 1.72 a-LAS, 0.85 a-UAS, and 1.54 a-POS for the winning system.",
}
```
### Contributions
Thanks to [@larrylawl](https://github.com/larrylawl) for adding this dataset.
|
intfloat/query2doc_msmarco | 2023-03-30T02:44:59.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"arxiv:2303.07678",
"region:us"
] | intfloat | This dataset contains GPT-3.5 (text-davinci-003) generations from MS-MARCO queries. | @inproceedings{Wang2023Query2docQE,
title={Query2doc: Query Expansion with Large Language Models},
author={Liang Wang and Nan Yang and Furu Wei},
year={2023}
} | null | 3 | 13 | ---
license: cc-by-4.0
language:
- en
size_categories:
- 100K<n<1M
---
### Dataset Summary
This dataset contains GPT-3.5 (`text-davinci-003`) generations from MS-MARCO queries.
[Query2doc: Query Expansion with Large Language Models](https://arxiv.org/pdf/2303.07678.pdf) Liang Wang, Nan Yang and Furu Wei
### Data Instances
An example looks as follows.
```
{
"query_id": "1030303",
"query": "who is aziz hashim",
"pseudo_doc": "Aziz Hashim is a renowned entrepreneur, business leader, and one of the most successful restaurant franchise operators in the US. He is the founder of NRD Capital, a private equity firm focused on investments in multi-unit restaurant franchised businesses. Hashim has built a formidable track record of success in the franchise industry, with brands such as Outback Steakhouse and Jamba Juice. His accomplishments and philanthropic initiatives have earned him numerous awards, including the prestigious Ernst and Young Entrepreneur of the Year award."
}
```
### Data Fields
- `query_id`: a `string` feature.
- `query`: a `string` feature.
- `pseudo_doc`: a `string` feature.
### Data Splits
| train | dev | test | trec_dl2019 | trec_dl2020 |
|--------|------:|------:|------:|------:|
| 502939 | 6980 | 6837 | 43 | 54 |
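For sparse retrieval, the paper expands each query by concatenating repeated copies of the original query with the generated pseudo-document (5 repetitions for BM25). The helper below is an illustrative sketch of that recipe, not the official implementation:

```python
def expand_query(query: str, pseudo_doc: str, n_rep: int = 5) -> str:
    # Repeating the query keeps its terms from being drowned out by
    # the much longer pseudo-document under BM25 term weighting.
    return " ".join([query] * n_rep + [pseudo_doc])

print(expand_query("who is aziz hashim", "Aziz Hashim is a renowned entrepreneur ...", n_rep=2))
```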
### How to use this dataset
```python
from datasets import load_dataset
dataset = load_dataset('intfloat/query2doc_msmarco')
print(dataset['trec_dl2019'][0])
```
### Reproducing our results
We provide a python script [repro_bm25.py](https://huggingface.co/datasets/intfloat/query2doc_msmarco/blob/main/repro_bm25.py) to reproduce our results with BM25 retrieval.
First install some python dependency packages:
```
pip install pyserini==0.15.0 pytrec_eval datasets tqdm
```
Then download and run the python code:
```
python repro_bm25.py
```
This script utilizes the pre-built Lucene index from [Pyserini](https://github.com/castorini/pyserini/blob/pyserini-0.15.0/docs/prebuilt-indexes.md)
and might yield slightly different results compared to the paper.
### Citation Information
```
@article{wang2023query2doc,
title={Query2doc: Query Expansion with Large Language Models},
author={Wang, Liang and Yang, Nan and Wei, Furu},
journal={arXiv preprint arXiv:2303.07678},
year={2023}
}
```
|
katarinagresova/Genomic_Benchmarks_dummy_mouse_enhancers_ensembl | 2023-03-13T19:33:25.000Z | [
"region:us"
] | katarinagresova | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: seq
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2273646
num_examples: 968
- name: test
num_bytes: 608062
num_examples: 242
download_size: 294310
dataset_size: 2881708
---
# Dataset Card for "Genomic_Benchmarks_dummy_mouse_enhancers_ensembl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
semeru/code-code-CodeCompletion-TokenLevel-Python | 2023-03-24T14:10:30.000Z | [
"license:mit",
"region:us"
] | semeru | null | null | null | 0 | 13 | ---
license: mit
Programminglanguage: "python"
version: "python3"
Date: "From paper [Probabilistic Model for Code with Decision Trees](https://files.sri.inf.ethz.ch/website/papers/oopsla16-dt.pdf) (2016, paper release date)"
Contaminated: "Very Likely"
Size: "Standard Tokenizer (TreeSitter)"
---
### Dataset is imported from CodeXGLUE and pre-processed using their script.
# Where to find in Semeru:
The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-code/CodeCompletion-token/dataset/py150 in Semeru
# CodeXGLUE -- Code Completion (token level)
**Update 2021.07.30:** We update the code completion dataset with literals normalized to avoid sensitive information.
Here is the introduction and pipeline for token level code completion task.
## Task Definition
Predict the next code token given the context of previous tokens. Models are evaluated by token-level accuracy.
Code completion is one of the most widely used features in software development through IDEs. An effective code completion tool can improve software developers' productivity. We provide code completion evaluation tasks at two granularities -- token level and line level. Here we introduce token-level code completion. The token-level task is analogous to language modeling: models should be able to predict the next token of arbitrary type.
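Token-level accuracy can be sketched as a position-wise comparison of the predicted and gold token streams (a simplified illustration, not CodeXGLUE's official evaluator):

```python
def token_level_accuracy(gold_tokens, pred_tokens):
    # Position-wise match over the gold sequence; missing predictions count as wrong.
    correct = sum(g == p for g, p in zip(gold_tokens, pred_tokens))
    return correct / len(gold_tokens)

gold = ["x", "=", "1", "<EOL>"]
pred = ["x", "=", "2", "<EOL>"]
print(token_level_accuracy(gold, pred))  # 0.75
```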
## Dataset
The dataset is in python.
### Dependency
- python 3.7
### py150
We use the py150 dataset mined by Raychev et al., introduced in their OOPSLA 2016 paper [Probabilistic Model for Code with Decision Trees](https://files.sri.inf.ethz.ch/website/papers/oopsla16-dt.pdf).
### Data Format
The code corpus is saved in txt files; each line is one tokenized code snippet:
```
<s> from __future__ import unicode_literals <EOL> from django . db import models , migrations <EOL> class Migration ( migrations . Migration ) : <EOL> dependencies = [ <EOL> ] <EOL> operations = [ <EOL> migrations . CreateModel ( <EOL> name = '<STR_LIT>' , <EOL> fields = [ <EOL> ( '<STR_LIT:id>' , models . AutoField ( verbose_name = '<STR_LIT>' , serialize = False , auto_created = True , primary_key = True ) ) , <EOL> ( '<STR_LIT:name>' , models . CharField ( help_text = b'<STR_LIT>' , max_length = <NUM_LIT> ) ) , <EOL> ( '<STR_LIT:image>' , models . ImageField ( help_text = b'<STR_LIT>' , null = True , upload_to = b'<STR_LIT>' , blank = True ) ) , <EOL> ] , <EOL> options = { <EOL> '<STR_LIT>' : ( '<STR_LIT:name>' , ) , <EOL> '<STR_LIT>' : '<STR_LIT>' , <EOL> } , <EOL> bases = ( models . Model , ) , <EOL> ) , <EOL> ] </s>
```
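The special tokens can be interpreted with plain string operations; a small sketch (the sample line below is shortened for illustration):

```python
line = "<s> from __future__ import unicode_literals <EOL> x = '<STR_LIT>' <EOL> y = <NUM_LIT> </s>"
tokens = line.split()

# <s>/</s> delimit the file, <EOL> marks line breaks, and string/number
# literals are replaced by anonymized placeholders such as <STR_LIT>.
body = tokens[1:-1]
num_lines = sum(t == "<EOL>" for t in body) + 1
print(num_lines)  # 3
```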
### Data Statistics
Data statistics of the py150 dataset are shown in the table below. Note that the original py150 dataset has no dev set, so we select 5,000 files from the original train set as the dev set.
| Data Split | #Files | #Tokens |
| ----------- | :---------: | :---------: |
| Train | 95,000 | 72.1M |
| Dev | 5,000 | 4.4M |
| Test | 50,000 | 37.3M |
|
LinhDuong/chatdoctor-5k | 2023-03-28T07:32:21.000Z | [
"license:apache-2.0",
"arxiv:2303.14070",
"region:us"
] | LinhDuong | null | null | null | 0 | 13 | ---
license: apache-2.0
---
This ChatDoctor-5K dataset is collected from this paper https://arxiv.org/pdf/2303.14070.pdf
Alternatively, you can download the original dataset from this link https://drive.google.com/file/d/1nDTKZ3wZbZWTkFMBkxlamrzbNz0frugg/view?usp=sharing |
mstz/balloons | 2023-04-15T11:16:18.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"balloons",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_balloons_13,
title = {{Balloons}},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5BP4D}}
} | null | 0 | 13 | ---
language:
- en
tags:
- balloons
- tabular_classification
- binary_classification
- UCI
pretty_name: Balloons
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- adult_or_stretch
- adult_and_stretch
- yellow_and_small
- yellow_and_small_or_adult_and_stretch
license: cc
---
# Balloons
The [Balloons dataset](https://archive.ics.uci.edu/ml/datasets/Balloons) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Predict if the given balloon is inflated.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|--------------------------------------------|---------------------------|--------------------------------------------------------------------------------------------------|
| adult_or_stretch | Binary classification | Balloons are inflated if age == adult or act == stretch. |
| adult_and_stretch | Binary classification | Balloons are inflated if age == adult and act == stretch. |
| yellow_and_small | Binary classification | Balloons are inflated if color == yellow and size == small. |
| yellow_and_small_or_adult_and_stretch | Binary classification | Balloons are inflated if color == yellow and size == small or age == adult and act == stretch. |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/balloons", "adult_or_stretch")["train"]
```
# Features
|**Feature** |**Type** | **Description** |
|-------------------|-----------|-------------------|
|`color` |`[string]` | Balloon's color. |
|`size` |`[string]` | Balloon's size. |
|`act` |`[string]` | Balloon's state. |
|`age` |`[string]` | Balloon's age. |
|`is_inflated` |`[int8]` | The inflation status of the balloon. |
mstz/congress | 2023-04-16T17:01:56.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"congress",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_congressional_voting_records_105,
title = {{Congressional Voting Records}},
year = {1987},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5C01P}}
} | null | 0 | 13 | ---
language:
- en
tags:
- congress
- tabular_classification
- binary_classification
- UCI
pretty_name: Congress
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- voting
license: cc
---
# Congress
The [Congress dataset](https://archive.ics.uci.edu/ml/datasets/Congress) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Congressmen from two different parties vote on a series of bills. Predict each congressman's party from their votes.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------------------------------|
| voting | Binary classification | What's the party of the voter? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/congress", "voting")["train"]
``` |
mstz/hayes_roth | 2023-04-16T17:30:45.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"hayes",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_hayes_efficiency_242,
author = {Tsanas,Athanasios & Xifara,Angeliki},
title = {{Hayes efficiency}},
year = {2012},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C51307}}
} | null | 0 | 13 | ---
language:
- en
tags:
- hayes
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Hayes evaluation
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- hayes
- hayes_1
- hayes_2
- hayes_3
license: cc
---
# Hayes
The [Hayes-Roth dataset](https://archive-beta.ics.uci.edu/dataset/44/hayes+roth) from the [UCI repository](https://archive-beta.ics.uci.edu).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|--------------------------------|
| hayes | Multiclass classification | Classify hayes type. |
| hayes_1 | Binary classification | Is this an instance of class 1? |
| hayes_2 | Binary classification | Is this an instance of class 2? |
| hayes_3 | Binary classification | Is this an instance of class 3? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/hayes", "hayes")["train"]
``` |
mstz/splice | 2023-04-16T18:03:01.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"splice",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_molecular_biology_(splice-junction_gene_sequences)_69,
title = {{Molecular Biology (Splice-junction Gene Sequences)}},
year = {1992},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5M888}}
} | null | 0 | 13 | ---
language:
- en
tags:
- splice
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Splice
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- splice
- splice_EI
- splice_IE
- splice_N
license: cc
---
# Splice
The [Splice dataset](https://archive-beta.ics.uci.edu/dataset/69/molecular+biology+splice+junction+gene+sequences) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| splice | Multiclass classification |
| splice_EI | Binary classification |
| splice_IE | Binary classification |
| splice_N | Binary classification | |
mstz/waveform_noise_v1 | 2023-04-16T18:04:18.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<5K",
"language:en",
"license:cc",
"waveformnoiseV1",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_waveform_database_generator_(version_1)_107,
author = {Breiman,L. & Stone,C.J.},
title = {{Waveform Database Generator (Version 1)}},
year = {1988},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5CS3C}}
} | null | 0 | 13 | ---
language:
- en
tags:
- waveformnoiseV1
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: WaveformNoiseV1
size_categories:
- 1K<n<5K
task_categories:
- tabular-classification
configs:
- waveformnoiseV1
- waveformnoiseV1_0
- waveformnoiseV1_1
- waveformnoiseV1_2
license: cc
---
# WaveformNoiseV1
The [WaveformNoiseV1 dataset](https://archive-beta.ics.uci.edu/dataset/107/waveform+database+generator+version+1) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| waveformnoiseV1 | Multiclass classification.| |
| waveformnoiseV1_0 | Binary classification. | Is this an instance of class 0? |
| waveformnoiseV1_1 | Binary classification. | Is this an instance of class 1? |
| waveformnoiseV1_2 | Binary classification. | Is this an instance of class 2? |
sam-mosaic/dolly_chatml | 2023-07-18T00:23:37.000Z | [
"language:en",
"region:us"
] | sam-mosaic | null | null | null | 0 | 13 | ---
language: en
dataset_info:
features:
- name: text
dtype: string
- name: cat
dtype: string
splits:
- name: train
num_bytes: 11767434
num_examples: 8497
download_size: 5401759
dataset_size: 11767434
---
# Dataset Card for "dolly_chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fschlatt/trump-tweets | 2023-04-19T11:41:59.000Z | [
"language:en",
"license:cc0-1.0",
"region:us"
] | fschlatt | null | null | null | 1 | 13 | ---
license: cc0-1.0
language:
- en
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: is_retweet
dtype: bool
- name: is_deleted
dtype: bool
- name: device
dtype: string
- name: favorites
dtype: int64
- name: retweets
dtype: int64
- name: datetime
dtype: timestamp[s]
- name: is_flagged
dtype: bool
splits:
- name: train
num_bytes: 10593265
num_examples: 56571
download_size: 0
dataset_size: 10593265
---
This is a clone of the Trump Tweet Kaggle dataset found here: https://www.kaggle.com/datasets/headsortails/trump-twitter-archive |
ioclab/animesfw | 2023-04-24T14:10:44.000Z | [
"region:us"
] | ioclab | null | null | null | 1 | 13 | ---
dataset_info:
features:
- name: image
dtype: image
- name: tags
dtype: string
splits:
- name: train
num_bytes: 968422627084.875
num_examples: 3969879
download_size: 4471804726
dataset_size: 968422627084.875
---
# Dataset Card for "animesfw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amitness/wikipedia_ar | 2023-05-16T19:46:04.000Z | [
"region:us"
] | amitness | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3034104285
num_examples: 1203217
download_size: 1286651772
dataset_size: 3034104285
---
# Dataset Card for "wikipedia_ar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chat-bot-dls/user_feedback | 2023-07-16T13:37:12.000Z | [
"region:us"
] | chat-bot-dls | null | null | null | 0 | 13 | Entry not found |
0x22almostEvil/multilingual-wikihow-qa-16k | 2023-05-13T16:59:15.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"language:ru",
"language:pt",
"language:it",
"language:es",
"language:fr",
"language:de",
"language:nl",
"license:cc-by-nc-3.0",
"wikihow",
"QnA",
"region:us"
] | 0x22almostEvil | null | null | null | 7 | 13 | ---
license: cc-by-nc-3.0
task_categories:
- question-answering
language:
- en
- ru
- pt
- it
- es
- fr
- de
- nl
pretty_name: multilingual-wikihow-qa-16k
size_categories:
- 10K<n<100K
tags:
- wikihow
- QnA
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 144407512
num_examples: 16822
download_size: 76391535
dataset_size: 144407512
---
# Dataset Card for multilingual WikiHow with ~16.8K entries (~2-2.2K per language)
### Warning [1]
The WikiHow team contacted me and made it clear that **they forbid the use of their data for machine learning purposes**. I am not calling for anything; this dataset only demonstrates the concept, and I strongly advise against violating their ToS.
That said, consultation with lawyers made it clear that **the dataset can be used for such purposes** if the project has research purposes.
### Warning [2]
The source code is quite rough, and I haven't gotten around to fixing it.
### Dataset Summary
Contains a Parquet file of instructions paired with WikiHow articles in different languages.
Each row consists of
* INSTRUCTION
* RESPONSE
* SOURCE (*.wikihow.com)
* METADATA (json with url and language).
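The METADATA field is stored as a JSON string, so it needs one extra parse step. A sketch with made-up example values (the field layout follows the list above):

```python
import json

# Illustrative row; the values here are invented for the example.
row = {
    "INSTRUCTION": "How to make tea?",
    "RESPONSE": "Boil water, steep the leaves...",
    "SOURCE": "www.wikihow.com",
    "METADATA": '{"url": "https://www.wikihow.com/Make-Tea", "language": "en"}',
}

meta = json.loads(row["METADATA"])
print(meta["language"])  # en
```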
### Licensing Information
Data is from WikiHow, license for content is located here:
https://www.wikihow.com/wikiHow:Creative-Commons
### Acknowledgements
This helped me a lot!
https://github.com/HelloChatterbox/PyWikiHow; https://pypi.org/project/pywikihow/ |
lexlms/legal_lama | 2023-07-24T13:13:15.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:cc-by-nc-sa-4.0",
"... | lexlms | LegalLAMA: Legal LAnguage Model Analysis (LAMA) (LAMA) dataset. | @inproceedings{chalkidis-garneau-etal-2023-lexlms,
title = {{LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development}},
author = "Chalkidis*, Ilias and
Garneau*, Nicolas and
Goanta, Catalina and
Katz, Daniel Martin and
Søgaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
month = july,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/xxx",
} | null | 6 | 13 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- text-generation
- fill-mask
task_ids:
- masked-language-modeling
pretty_name: LegalLAMA
tags:
- legal
- law
---
# Dataset Card for "LegalLAMA"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Specifications](#supported-tasks-and-leaderboards)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lexlms
- **Repository:** https://github.com/coastalcph/lexlms
- **Paper:** https://arxiv.org/abs/2305.07507
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
LegalLAMA is a diverse probing benchmark suite comprising 8 sub-tasks that aims to assess the legal knowledge that PLMs acquire during pre-training.
### Dataset Specifications
| Corpus | Corpus alias | Examples | Avg. Tokens | Labels |
|--------------------------------------|----------------------|-----------|-------------|--------|
| Criminal Code Sections (Canada) | `canadian_sections` | 321 | 72 | 144 |
| Legal Terminology (EU) | `cjeu_term` | 2,127 | 164 | 23 |
| Contractual Section Titles (US) | `contract_sections` | 1,527 | 85 | 20 |
| Contract Types (US) | `contract_types` | 1,089 | 150 | 15 |
| ECHR Articles (CoE) | `ecthr_articles` | 5,072 | 69 | 13 |
| Legal Terminology (CoE) | `ecthr_terms` | 6,803 | 97 | 250 |
| Crime Charges (US) | `us_crimes` | 4,518 | 118 | 59 |
| Legal Terminology (US) | `us_terms` | 5,829 | 308 | 7 |
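Each sub-task is a cloze-style probe: a legal passage with a masked term the model must recover. A toy construction of such a probe (the function and the example passage are illustrative, not the dataset's actual schema):

```python
def make_cloze(text: str, answer: str, mask_token: str = "<mask>") -> str:
    # Replace the first occurrence of the gold term with the mask token.
    assert answer in text, "answer must appear in the passage"
    return text.replace(answer, mask_token, 1)

probe = make_cloze(
    "The applicant alleged a violation of Article 6 of the Convention.",
    "Article 6",
)
print(probe)
```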
### Usage
Load a specific sub-corpus, given the corpus alias, as presented above.
```python
from datasets import load_dataset
dataset = load_dataset('lexlms/legal_lama', name='ecthr_terms')
```
### Citation
[*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2022. In the Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://aclanthology.org/2023.acl-long.865/)
```
@inproceedings{chalkidis-etal-2023-lexfiles,
title = "{L}e{XF}iles and {L}egal{LAMA}: Facilitating {E}nglish Multinational Legal Language Model Development",
author = "Chalkidis, Ilias and
Garneau, Nicolas and
Goanta, Catalina and
Katz, Daniel and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.865",
pages = "15513--15535",
}
``` |
gonglinyuan/CoSQA | 2023-05-15T23:57:34.000Z | [
"license:mit",
"arxiv:2105.13239",
"region:us"
] | gonglinyuan | null | null | null | 0 | 13 | ---
license: mit
---
Downloaded from https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/NL-code-search-WebQuery
For more details about the dataset collection and usage, please refer to the ACL 2021 paper (https://arxiv.org/abs/2105.13239) and the GitHub repo (https://github.com/Jun-jie-Huang/CoCLR). |
zeroshot/cybersecurity-corpus | 2023-05-16T13:09:40.000Z | [
"license:cc0-1.0",
"region:us"
] | zeroshot | null | null | null | 2 | 13 | ---
license: cc0-1.0
---
|
xmj2002/genshin_ch_10npc | 2023-06-02T07:30:27.000Z | [
"task_categories:text-to-speech",
"language:zh",
"license:apache-2.0",
"region:us"
] | xmj2002 | null | null | null | 8 | 13 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: npcName
dtype: string
- name: text
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 2459515323.168046
num_examples: 17293
- name: test
num_bytes: 273358494.8319542
num_examples: 1922
download_size: 2154942775
dataset_size: 2732873818
license: apache-2.0
task_categories:
- text-to-speech
language:
- zh
---
# Dataset Card for "genshin_ch_10npc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ccmusic-database/piano_sound_quality | 2023-10-03T16:43:03.000Z | [
"task_categories:audio-classification",
"size_categories:n<1K",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] | ccmusic-database | Piano-Sound-Quality-Database is a dataset of piano sound.
It consists of 8 kinds of pianos including PearlRiver, YoungChang, Steinway-T, Hsinghai,
Kawai, Steinway, Kawai-G, Yamaha(recorded by Shaohua Ji with SONY PCM-D100).
Data was annotated by students from the China Conservatory of Music (CCMUSIC) in Beijing
and collected by George Chou. | @dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
} | null | 2 | 13 | ---
license: mit
task_categories:
- audio-classification
language:
- en
tags:
- music
- art
pretty_name: Piano Sound Quality Database
size_categories:
- n<1K
---
# Dataset Card for Piano Sound Quality Database
## Requirements
```
python 3.8-3.10
soundfile
librosa
```
## Usage
```
from datasets import load_dataset
data = load_dataset("ccmusic-database/piano_sound_quality", split="train")
labels = data.features['label'].names
for item in data:
print('audio info: ', item['audio'])
print('label name: ' + labels[item['label']])
```
## Maintenance
```
git clone git@hf.co:datasets/ccmusic-database/piano_sound_quality
```
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/CCMUSIC/piano_sound_quality>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains 12 full-range audio files (.wav/.mp3/.m4a format) of 7 models of piano (KAWAI upright piano, KAWAI grand piano, Yingchang upright piano, Xinghai upright piano, Grand Theatre Steinway piano, Steinway grand piano, Pearl River upright piano) and 1320 split monophonic audio files (.wav/.mp3/.m4a format), for a total of 1332 files.
A score sheet (.xls format) of the piano sound quality rated by 29 people who participated in the subjective evaluation test is also included.
### Supported Tasks and Leaderboards
Piano Sound Classification, pitch detection
### Languages
English
## Dataset Structure
### Data Instances
.wav
### Data Fields
```
1_PearlRiver
2_YoungChang
3_Steinway-T
4_Hsinghai
5_Kawai
6_Steinway
7_Kawai-G
8_Yamaha
```
### Data Splits
train, validation, test
## Dataset Creation
### Curation Rationale
Lack of a dataset for piano sound quality
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Shaohua Ji, Monan Zhou
#### Who are the source language producers?
Students from CCMUSIC
### Annotations
#### Annotation process
This database contains 12 full-range audio files (.wav/.mp3/.m4a format) of 7 models of piano (KAWAI upright piano, KAWAI grand piano, Yingchang upright piano, Xinghai upright piano, Grand Theatre Steinway piano, Steinway grand piano, Pearl River upright piano)
#### Who are the annotators?
Students from CCMUSIC
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Helps develop piano sound quality rating apps
### Discussion of Biases
Only for pianos
### Other Known Limitations
No black key in Steinway
## Additional Information
### Dataset Curators
Zijin Li
### Evaluation
[Monan Zhou, Shangda Wu, Shaohua Ji, Zijin Li, and Wei Li. A Holistic Evaluation of Piano Sound Quality[C]//Proceedings of the 6th Conference on Sound and Music Technology (CSMT). Springer, Singapore, 2023.](https://github.com/MuGeminorum/Piano-Classification)
### Licensing Information
```
MIT License
Copyright (c) CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {CCMUSIC DATABASE: Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for piano sound quality |
Thaweewat/chain-of-thought-74k-th | 2023-05-26T12:32:46.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] | Thaweewat | null | null | null | 1 | 13 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
language:
- th
tags:
- instruction-finetuning
size_categories:
- 10K<n<100K
---
# Summary
This is a 🇹🇭 Thai-translated (GCP) dataset based on the English 74K [Alpaca-CoT](https://github.com/PhoebusSi/alpaca-CoT) instruction dataset.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
--- |
Slep/LAION-RVS-Fashion | 2023-06-06T04:27:24.000Z | [
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"fashion",
"visual search",
"arxiv:2306.02928",
"region:us"
] | Slep | null | null | null | 5 | 13 | ---
license: mit
language:
- en
tags:
- fashion
- visual search
pretty_name: LAION — Referred Visual Search — Fashion
size_categories:
- 1M<n<10M
---
# **LAION — Referred Visual Search — Fashion**
*Introduced in **Weakly-Supervised Conditional Embedding for Referred Visual Search***
**[CRITEO AI Lab](https://ailab.criteo.com)** x **[ENPC](https://imagine-lab.enpc.fr)**
[Simon Lepage](https://simon-lepage.github.io), Jérémie Mary, [David Picard](https://davidpicard.github.io)
[[`Paper`](https://arxiv.org/abs/2306.02928)]
[[`Demo`](https://huggingface.co/spaces/Slep/CondViT-LRVSF-Demo)]
[[`Code`](https://github.com/Simon-Lepage/CondViT-LRVSF)]
[[`BibTeX`](#citing-the-dataset)]
---
## **Composition**
LAION-RVS-Fashion is composed of images from :
- **[LAION 2B EN](https://huggingface.co/datasets/laion/laion2B-en)**
- **[LAION 2B MULTI TRANSLATED](https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en)**
- **[LAION 1B NOLANG TRANSLATED](https://huggingface.co/datasets/laion/laion1B-nolang-joined-translated-to-en)**
These images have been grouped based on extracted product IDs. Each product in the training set comprises at least one isolated product image and one complex image (scene). We added categorical metadata and BLIP2 captions to each product. Please see the [samples](#samples) and refer to [our paper](https://arxiv.org/abs/2306.02928) for additional details.
|Split|Products|Distractors|
|-:|:-:|:-:|
|Train|272,457|-|
|Valid|400|99,541|
|Test|2,000|2,000,014|
**Total number of training images :** 841,718.
## **Samples**
<table style='text-align:center'>
<tbody>
<tr>
<td></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/97969.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/97969.1.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/219924.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/219924.1.jpg" style="height:200px"></td>
</tr>
<tr>
<td><b>Categories</b></td>
<td colspan=2>Neck</td>
<td colspan=2>Lower Body</td>
</tr>
<tr>
<td><b>BLIP2 Captions</b></td>
<td colspan=2>a scarf with multi-coloured stripes</td>
<td colspan=2>stella pants - dark suede</td>
</tr>
<tr></tr>
<tr>
<td></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/72317.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/72317.1.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.1.jpg" style="height:200px"></td>
</tr>
<tr>
<td><b>Categories</b></td>
<td colspan=2>Feet</td>
<td colspan=2>Bags</td>
</tr>
<tr>
<td><b>BLIP2 Captions</b></td>
<td colspan=2>neon green patent leather heels with studs</td>
<td colspan=2>the burberry small leather bag is brown and leather</td>
</tr>
</tbody>
</table>
## **Attributes**
- **URL**, **WIDTH**, **HEIGHT**, **punsafe**, **pwatermark**, **language**: Original LAION fields. Please refer to their repository.
- **TEXT**: Text originally associated with the image.
- **ENG_TEXT** : Translated version for MULTI/NOLANG, copy of TEXT for EN.
- **TYPE**: SIMPLE (isolated products), COMPLEX (scenes), PARTIAL_COMPLEX (zoomed-in scenes)
- **PRODUCT_ID**: Product identifier, allows to group together images depicting the same product.
- **INDEX_SRC**: ID of parquet file originally storing this image.
- **CATEGORY**: Product category: `Bags, Feet, Hands, Head, Lower Body, Neck, Outwear, Upper Body, Waist, Whole Body` for products, and `NonClothing` for some distractors.
- **blip2_caption1, blip2_caption2**: [BLIP2-FlanT5XL](https://huggingface.co/Salesforce/blip2-flan-t5-xl)-generated captions.
We also release `bootstrap_IDs.pkl`, the file used to generate the bootstrapped results of the paper. `test_subsets` is composed of [product IDs](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L229), while `dist_{N}_subsets` are [row indices](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L248).
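A minimal sketch for reading this file with the standard library. Note the exact structure of the pickled object (assumed here to be a dict keyed by subset name, per the description above) is an assumption, and the demo round-trips a stand-in object rather than the real file:

```python
import os
import pickle
import tempfile

def load_bootstrap_ids(path):
    """Load the bootstrap subset definitions from a pickle file.

    Assumption: the pickle holds a mapping whose 'test_subsets' entry
    lists product IDs and whose 'dist_{N}_subsets' entries list row
    indices, as described above; the real structure may differ.
    """
    with open(path, "rb") as f:
        return pickle.load(f)

# Round-trip demo with a stand-in object (the real file ships with the dataset).
demo = {"test_subsets": [["prod_1", "prod_2"]], "dist_1000_subsets": [[0, 5, 9]]}
demo_path = os.path.join(tempfile.gettempdir(), "bootstrap_demo.pkl")
with open(demo_path, "wb") as f:
    pickle.dump(demo, f)

loaded = load_bootstrap_ids(demo_path)
```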
---
## Citing the dataset
To cite our work, please use the following BibTeX entry :
```
@article{lepage2023condvit,
title={Weakly-Supervised Conditional Embedding for Referred Visual Search},
author={Lepage, Simon and Mary, Jérémie and Picard, David},
journal={arXiv:2306.02928},
year={2023}
}
``` |
CIRAL/ciral | 2023-08-21T15:49:42.000Z | [
"task_categories:text-retrieval",
"language:ha",
"language:so",
"language:sw",
"language:yo",
"license:apache-2.0",
"region:us"
] | CIRAL | This dataset consists of the queries and relevance judgements in the CIRAL test collection. | null | null | 1 | 13 | ---
license: apache-2.0
language:
- ha
- so
- sw
- yo
task_categories:
- text-retrieval
multilinguality:
- multilingual
viewer: true
---
# Dataset Summary
CIRAL is a collection for cross-lingual information retrieval research across four (4) African languages. The collection comprises English queries and query-passage relevance judgements for passages in the African languages.
This dataset repo contains only the queries and relevance judgements. The corpus collection can be found here [here](https://huggingface.co/datasets/CIRAL/ciral-corpus)
# Dataset Structure
1. To download the files: the queries can be found under `ciral-{lang}/topics` and are in `.tsv` format, with each line in the form:
```
qid\tquery
```
while the judgements are in the folder `ciral-{lang}/qrels`, with each file in the standard TREC format:
```
qid Q0 docid relevance
```
2. To access the dataset via `datasets`:
```
ciral_dataset = load_dataset("ciral/ciral", "hausa")  # or swahili, somali, yoruba

for data in ciral_dataset['train']:  # or 'test'
query_id = data['query_id']
query = data['query']
pos_qrels = data['positive_passages']
neg_qrels = data['negative_passages']
for qrel in pos_qrels:
docid = qrel['docid']
text = qrel['text']
```
## Citation
...
|
GautamR/grievance_agri | 2023-09-12T13:25:57.000Z | [
"license:apache-2.0",
"region:us"
] | GautamR | null | null | null | 0 | 13 | ---
license: apache-2.0
---
|
yyu/nyt-attrprompt | 2023-09-13T20:55:46.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"arxiv:2306.15895",
"region:us"
] | yyu | null | null | null | 0 | 13 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: d
size_categories:
- 10K<n<100K
---
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
Checkout the paper https://arxiv.org/abs/2306.15895 for details.
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt.
Please check our original paper for details. Moreover, we provide the generated dataset using LLM as follows:
- `regen.jsonl`: The training data generated by [ReGen](https://github.com/yueyu1030/ReGen).
- `regen_llm_augmented.jsonl`: The training data generated by ReGen, with the subtopics generated by the LLM.
- `progen.jsonl`: The training data generated by [ProGen](https://github.com/hkunlp/progen).
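Each of these files is newline-delimited JSON (`.jsonl`, one record per line). A small stdlib reader (the field names in the demo record are illustrative, not the dataset's actual schema):

```python
import json
import os
import tempfile

def read_jsonl(path):
    """Read a .jsonl file (one JSON object per line) into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Demo with a throwaway file; in practice pass e.g. "train.jsonl" or "attrprompt.jsonl".
demo_path = os.path.join(tempfile.gettempdir(), "demo.jsonl")
with open(demo_path, "w", encoding="utf-8") as f:
    f.write('{"text": "a", "label": 0}\n{"text": "b", "label": 1}\n')

records = read_jsonl(demo_path)
print(len(records))  # 2
```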
Please cite the original paper if you use this dataset for your study. Thanks!
```
@inproceedings{meng2019weakly,
title={Weakly-supervised hierarchical text classification},
author={Meng, Yu and Shen, Jiaming and Zhang, Chao and Han, Jiawei},
booktitle={Proceedings of the AAAI conference on artificial intelligence},
pages={6826--6833},
year={2019}
}
@article{yu2023large,
title={Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias},
author={Yu, Yue and Zhuang, Yuchen and Zhang, Jieyu and Meng, Yu and Ratner, Alexander and Krishna, Ranjay and Shen, Jiaming and Zhang, Chao},
journal={arXiv preprint arXiv:2306.15895},
year={2023}
}
``` |
divergente/wikitext-ptbr-1 | 2023-06-15T01:24:23.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:pt",
"license:cc-by-sa-3.0",
"license:gfdl",
"region... | divergente | null | null | null | 0 | 13 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- pt
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
pretty_name: WikiTextPtBr
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
- config_name: wikitext-ptbr
features:
- name: text
dtype: string
--- |
open-llm-leaderboard/requests | 2023-10-11T00:05:22.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 2 | 13 | # Copy of the h4 queue repo
Contains info for launching a model on the cluster to be evaluated with lighteval |
Confirm-Labs/pile_top_bigrams | 2023-06-25T02:57:32.000Z | [
"region:us"
] | Confirm-Labs | null | null | null | 0 | 13 |
# top_bigrams
See https://confirmlabs.org/posts/catalog.html for details.
- `id0`: the first token in the bigram
- `id1`: the most common token following `id0` in The Pile
- `sum_count`: the number of times that `id0` appears in The Pile.
- `max_count`: the number of times that `id1` appears after `id0` in The Pile.
- `frac_max`: `max_count / sum_count`
- `token0`: the string representation of `id0`
- `token1`: the string representation of `id1`
- `seq`: the string representation of the bigram, `token0 token1`
- `p_{model_size}`: the probability of the bigram under Pythia-{model_size} when prompted with `id0`.
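The relationship between the count fields can be illustrated with plain Python (the rows below are invented for illustration, not actual statistics from The Pile):

```python
# Illustrative rows mirroring the schema above; the counts are made up.
rows = [
    {"token0": " New", "token1": " York", "sum_count": 1000, "max_count": 800},
    {"token0": " San", "token1": " Diego", "sum_count": 500, "max_count": 300},
]

for row in rows:
    # frac_max is defined as max_count / sum_count
    row["frac_max"] = row["max_count"] / row["sum_count"]
    # seq joins the two token strings (simple concatenation here, since
    # GPT-style tokens usually carry their own leading space)
    row["seq"] = row["token0"] + row["token1"]

print(rows[0]["frac_max"], repr(rows[0]["seq"]))  # 0.8 ' New York'
```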
|
KaiLv/UDR_COPA | 2023-06-21T12:15:53.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: string
- name: premise
dtype: string
- name: question
dtype: string
- name: mirrored
dtype: bool
- name: choices
dtype: string
- name: len_question
dtype: int64
- name: max_len_choices
dtype: int64
splits:
- name: train
num_bytes: 110350
num_examples: 500
- name: test
num_bytes: 107164
num_examples: 500
download_size: 129892
dataset_size: 217514
---
# Dataset Card for "UDR_COPA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_CommonGen | 2023-06-21T12:34:47.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: concept_set_id
dtype: int32
- name: concepts
list: string
- name: target
dtype: string
- name: references
list: string
- name: joined_concepts
dtype: string
splits:
- name: train
num_bytes: 12780999
num_examples: 67389
- name: validation
num_bytes: 440794
num_examples: 993
- name: test
num_bytes: 214190
num_examples: 1497
- name: train_dedup
num_bytes: 6018136
num_examples: 32651
download_size: 8248320
dataset_size: 19454119
---
# Dataset Card for "UDR_CommonGen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChanceFocus/flare-fpb | 2023-08-05T00:15:42.000Z | [
"region:us"
] | ChanceFocus | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 1520799
num_examples: 3100
- name: valid
num_bytes: 381025
num_examples: 776
- name: test
num_bytes: 475173
num_examples: 970
download_size: 0
dataset_size: 2376997
---
# Dataset Card for "flare-fpb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ClimatePolicyRadar/global-stocktake-documents | 2023-09-14T14:19:55.000Z | [
"size_categories:1M<n<10M",
"language:en",
"license:cc",
"climate",
"policy",
"legal",
"doi:10.57967/hf/1112",
"region:us"
] | ClimatePolicyRadar | null | null | null | 4 | 13 | ---
language:
- en
tags:
- climate
- policy
- legal
size_categories:
- 1M<n<10M
license: cc
---
# Global Stocktake Open Data
This repo contains the data for the first [UNFCCC Global Stocktake](https://unfccc.int/topics/global-stocktake). The data consists of document metadata from sources relevant to the Global Stocktake process, as well as full text parsed from the majority of the documents.
The files in this dataset are as follows:
- `metadata.csv`: a CSV containing document metadata for each document we have collected. **This metadata may not be the same as what's stored in the source databases** – we have cleaned and added metadata where it's corrupted or missing.
- `full_text.parquet`: a parquet file containing the full text of each document we have parsed. Each row is a text block (paragraph) with all the associated text block and document metadata.
A research tool for viewing this data, along with the results of some classifiers run on it, is available at [gst1.org](https://gst1.org).
This data is licensed according to CC BY 4.0, which is a license that represents the terms at the source repositories.
**Contents**
- [Sources and data completeness](#sources-and-data-completeness)
- [Field descriptions](#field-descriptions)
- [Known issues](#known-issues)
- [Usage in Python](#usage-in-python)
- [Loading metadata CSV](#loading-metadata-csv)
- [Loading text block data](#loading-text-block-data)
---
## Sources and data completeness
This dataset contains documents from the following sources:
* [Global Stocktake Information Portal](https://unfccc.int/topics/global-stocktake/information-portal)
* [NDC Registry](https://unfccc.int/NDCREG)
* [Adaptation Communications Registry](https://unfccc.int/ACR)
* [Fast-Start Finance Country Reports](https://unfccc.int/climatefinance?submissions)
* [IPCC Reports](https://www.ipcc.ch/reports/)
The following Global Stocktake relevant data sources are not yet in this dataset:
* [National Adaptation Plan Central Portal](https://napcentral.org/submitted-naps)
* [TNA Country Reports](https://unfccc.int/ttclear/tna/reports.html)
### Data completeness
The last refresh of the data was on **2023-09-13**.
We currently only parse text out of PDFs; any non-PDF file is referenced in `metadata.csv` but does not appear in `full_text.parquet`.
We have yet to process approximately 150 of the 1,700 documents due to formatting issues. We are working on resolving this as soon as possible. [See the document list here](https://labs.climatepolicyradar.org/global-stocktake/UNPROCESSED_DOCUMENTS.html).
## Data model
This dataset contains individual documents that are grouped into 'document families'.
The way to think of is as follows:
* Each row in the dataset is a physical document. A physical document is a single document, in any format.
* All physical documents belong to document families. A document family is one or more physical documents, centred around a main document, which jointly contain all relevant information about the main document. For example, where a document has a translation, amendments or annexes, those files are stored together as a family.
### Getting unique text blocks
> TODO
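Until that section is filled in, one plausible approach (an assumption on our part, not the curators' official method) is to pair `document_md5_sum` with `text_block_id`, since the latter is only unique per document:

```python
import pandas as pd

# Stand-in rows; in practice they come from pd.read_parquet("full_text.parquet").
blocks = pd.DataFrame(
    [
        {"document_md5_sum": "abc", "text_block_id": "b0", "text": "Paragraph one."},
        {"document_md5_sum": "abc", "text_block_id": "b0", "text": "Paragraph one."},
        {"document_md5_sum": "def", "text_block_id": "b0", "text": "Other text."},
    ]
)

# A (content-hash, block-id) pair identifies a physical text block even if
# the same file is referenced from several document families.
unique_blocks = blocks.drop_duplicates(subset=["document_md5_sum", "text_block_id"])
print(len(unique_blocks))  # 2
```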
## Field descriptions
- `author`: document author (str)
- `author_is_party`: whether the author is a Party (national government) or not (bool)
- `block_index`: the index of a text block in a document. Starts from 0 (int)
- `coords`: coordinates of the text block on the page
- `date`: publication date of the document
- `document_content_type`: file type. We have only parsed text from PDFs.
- `document_id`: unique identifier for a document
- `document_family_id`: see *data model* section above
- `document_family_slug`: see *data model* section above
- `document_md5_sum`: md5sum of the document's content
- `document_name`: document title
- `document_source_url`: URL for document
- `document_variant`: used to identify translations. In `[nan, 'Translation', 'Original Language']`
- `has_valid_text`: our heuristic for whether the document's text is valid, based on the parser output
- `language`: language of the text block. Either `en` or `nan` - see known issues
- `page_number`: page number of text block (0-indexed)
- `text`: text in text block
- `text_block_id`: identifier for a text block which is unique per document
- `translated`: whether we have machine-translated the document to English. Where we have translated documents, both the original and translated exist.
- `type`: type of text block. In `["Text", "Title", "List", "Table", "Figure","Ambiguous"]`
- `type_confidence`: confidence that the text block is of the labelled type
- `types`: list of document types e.g. Nationally Determined Contribution, National Adaptation Plan (list[str])
- `version`: in `['MAIN', 'ANNEX', 'SUMMARY', 'AMENDMENT', 'SUPPORTING DOCUMENTATION', 'PREVIOUS VERSION']`
## Known issues
* Author names are sometimes corrupted
* Text block languages are sometimes missing or marked as `nan`
## Usage in Python
The easiest way to access this data via the terminal is to run `git clone <this-url>`.
### Loading metadata CSV
``` py
metadata = pd.read_csv("metadata.csv")
```
### Loading text block data
Once loaded into a Hugging Face Dataset or pandas DataFrame object, the parquet file can be converted to other formats, e.g. Excel, CSV or JSON.
``` py
# Using huggingface (easiest)
dataset = load_dataset("ClimatePolicyRadar/global-stocktake-documents")
# Using pandas
text_blocks = pd.read_parquet("full_text.parquet")
``` |
Fsoft-AIC/the-vault-inline | 2023-08-22T10:01:46.000Z | [
"task_categories:text-generation",
"multilinguality:multiprogramming languages",
"language:code",
"language:en",
"license:mit",
"arxiv:2305.06156",
"region:us"
] | Fsoft-AIC | The Vault is a multilingual code-text dataset with over 34 million pairs covering 10 popular programming languages.
It is the largest corpus containing parallel code-text data. By building upon The Stack, a massive raw code sample collection,
the Vault offers a comprehensive and clean resource for advancing research in code understanding and generation. It provides a
high-quality dataset that includes code-text pairs at multiple levels, such as class and inline-level, in addition to the function level.
The Vault can serve many purposes at multiple levels. | @article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
} | null | 2 | 13 | ---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
features:
- name: identifier
dtype: string
- name: return_type
dtype: string
- name: repo
dtype: string
- name: path
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
dtype: string
- name: original_docstring
dtype: string
- name: comment
dtype: string
- name: docstring_tokens
dtype: string
- name: docstring
dtype: string
- name: original_string
dtype: string
pretty_name: The Vault Function
viewer: true
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Statistics](#dataset-statistics)
- [Usage](#usage)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- **Contact:** support.ailab@fpt.com
- **Website:** https://www.fpt-aicenter.com/ai-residency/
<p align="center">
<img src="https://raw.githubusercontent.com/FSoft-AI4Code/TheVault/main/assets/the-vault-4-logo-png.png" width="300px" alt="logo">
</p>
<div align="center">
# The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
</div>
## Dataset Summary
The Vault dataset is a comprehensive, large-scale, multilingual parallel dataset that features high-quality code-text pairs derived from The Stack, the largest permissively-licensed source code dataset.
We provide The Vault, which contains code snippets from 10 popular programming languages: Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. This dataset provides multiple code-snippet levels, metadata, and 11 docstring styles for enhanced usability and versatility.
## Supported Tasks
The Vault can be used for pretraining LLMs or for downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed using The Vault, such as *code summarization*, *text-to-code generation* and *code search*.
## Languages
The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`
## Dataset Structure
### Data Instances
```
{
"hexsha": "ee1cf38808d3db0ea364b049509a01a65e6e5589",
"repo": "Waguy02/Boomer-Scripted",
"path": "python/subprojects/testbed/mlrl/testbed/persistence.py",
"license": [
"MIT"
],
"language": "Python",
"identifier": "__init__",
"code": "def __init__(self, model_dir: str):\n \"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"\n self.model_dir = model_dir",
"code_tokens": [
"def",
"__init__",
"(",
"self",
",",
"model_dir",
":",
"str",
")",
":",
"\"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"",
"self",
".",
"model_dir",
"=",
"model_dir"
],
"original_comment": "\"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"",
"comment": ":param model_dir: The path of the directory where models should be saved",
"comment_tokens": [
":",
"param",
"model_dir",
":",
"The",
"path",
"of",
"the",
"directory",
"where",
"models",
"should",
"be",
"saved"
],
"start_point": [
1,
8
],
"end_point": [
3,
11
],
"prev_context": {
"code": null,
"start_point": null,
"end_point": null
},
"next_context": {
"code": "self.model_dir = model_dir",
"start_point": [
4,
8
],
"end_point": [
4,
34
]
}
}
```
### Data Fields
Data fields for inline level:
- **hexsha** (string): the unique git hash of file
- **repo** (string): the owner/repo
- **path** (string): the full path to the original file
- **license** (list): licenses in the repo
- **language** (string): the programming language
- **identifier** (string): the function or method name
- **code** (string): the part of the original that is code
- **code_tokens** (list): tokenized version of `code`
- **original_comment** (string): original text of comment ,
- **comment** (string): clean version of comment,
- **comment_tokens** (list): tokenized version of `comment`,
- **start_point** (int): start position of `original_comment` in `code`,
- **end_point** (int): end position of `original_comment` in `code`,
- **prev_context** (dict): block of code before `original_comment`,
- **next_context** (dict): block of code after `original_comment`
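For example, a small helper can slice the annotated comment back out of `code` using these positions, assuming `start_point` and `end_point` are 0-indexed `(row, column)` pairs with an exclusive end column (consistent with the sample above):

```python
def extract_span(code, start, end):
    """Slice the (row, col)..(row, col) region out of a source string.

    Assumes 0-indexed positions and an exclusive end column.
    """
    lines = code.split("\n")
    if start[0] == end[0]:
        return lines[start[0]][start[1]:end[1]]
    parts = [lines[start[0]][start[1]:]]
    parts.extend(lines[start[0] + 1:end[0]])
    parts.append(lines[end[0]][:end[1]])
    return "\n".join(parts)

# Hypothetical snippet: a docstring starting at (1, 4) and ending at (1, 13).
demo = 'def f(x):\n    """doc"""\n    return x'
print(extract_span(demo, (1, 4), (1, 13)))  # """doc"""
```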
### Data Splits
In this repo, the inline-level data is not split; it is contained only in the train set.
## Dataset Statistics
| Languages | Number of inline comments |
|:-----------|---------------------------:|
|Python | 14,013,238 |
|Java | 17,062,277 |
|JavaScript | 1,438,110 |
|PHP | 5,873,744 |
|C | 6,778,239 |
|C# | 6,274,389 |
|C++ | 10,343,650 |
|Go | 4,390,342 |
|Ruby | 767,563 |
|Rust | 2,063,784 |
|TOTAL | **69,005,336** |
## Usage
You can load The Vault dataset using the `datasets` library: ```pip install datasets```
```python
from datasets import load_dataset
# Load full inline level dataset (69M samples)
dataset = load_dataset("Fsoft-AIC/the-vault-inline")
# specific language (e.g. Python)
dataset = load_dataset("Fsoft-AIC/the-vault-inline", languages=['Python'])
# dataset streaming
data = load_dataset("Fsoft-AIC/the-vault-inline", streaming= True)
for sample in iter(data['train']):
print(sample)
```
## Additional information
### Licensing Information
MIT License
### Citation Information
```
@article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
}
```
### Contributions
This dataset is developed by [FSOFT AI4Code team](https://github.com/FSoft-AI4Code). |
jjzha/green | 2023-09-07T12:14:02.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | jjzha | null | null | null | 0 | 13 | ---
license: cc-by-4.0
language: en
---
This is the skill dataset created by:
```
@inproceedings{green-etal-2022-development,
title = "Development of a Benchmark Corpus to Support Entity Recognition in Job Descriptions",
author = "Green, Thomas and
Maynard, Diana and
Lin, Chenghua",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.128",
pages = "1201--1208",
}
```
There are no document delimiters; the task is on the sentence level.
Number of samples (sentences):
- train: 8669
- dev: 964
- test: 335
Sources:
- TotalJobs (UK): https://www.kaggle.com/datasets/airiddha/trainrev1
Type of tags:
- Generic BIO tags with key `tags_skill`
- Finer grained labels of BIO tags are
- `SKILL`: Tasks that can be performed, or attributes and abilities (including soft skills) that enable people to perform tasks.
- `QUALIFICATION`: Official certifications obtained through taking a course or passing an exam or appraisal.
- `EXPERIENCE`: Lengths of time relating to a position or skill.
- `OCCUPATION`: Job titles, including abbreviations and acronyms.
- `DOMAIN`: Areas of industry in which someone might have knowledge or experience.
- Also has part-of-speech tags, indicated by `pos`.
Sample:
```
{
"idx": 959,
"tokens": ["negotiating", "and", "commercial", "skills", "Conscientious", "and", "thorough", "by", "nature"],
"tags_skill": ["B-SKILL", "I-SKILL", "I-SKILL", "I-SKILL", "I-SKILL", "O", "B-SKILL", "O", "O"],
"pos": ["NN", "CC", "JJ", "NNS", "JJ", "CC", "JJ", "IN", "NN"]
}
``` |
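Since the labels follow the BIO scheme, contiguous `B-`/`I-` runs can be decoded into labeled token spans. A minimal decoder, applied to the sample above (this is a usage sketch, not part of the dataset):

```python
def bio_to_spans(tags):
    """Decode BIO tags into (start, end, label) spans, end exclusive."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((start, i, label))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == label:
            continue
        else:
            if start is not None:
                spans.append((start, i, label))
            start, label = None, None
    if start is not None:
        spans.append((start, len(tags), label))
    return spans

tags = ["B-SKILL", "I-SKILL", "I-SKILL", "I-SKILL", "I-SKILL", "O", "B-SKILL", "O", "O"]
print(bio_to_spans(tags))  # [(0, 5, 'SKILL'), (6, 7, 'SKILL')]
```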
masakhane/afriqa-gold-passages | 2023-07-08T04:15:40.000Z | [
"task_categories:question-answering",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:bem",
"language:fon",
"language:ha",
"language:ig",
"language:kin",
"language:sw",
"language:wo",
"language:yo",
"language:zu",
"language:tw",
"license:cc-by-sa-4.0",
"cross-ling... | masakhane | AfriQA: Cross-lingual Open-Retrieval Question Answering for African Languages
AfriQA is the first cross-lingual question-answering (QA) dataset with a focus on African languages.
The dataset includes over 12,000 XOR QA examples across 10 African languages, making it an invaluable resource for developing more equitable QA technology. | \ | null | 1 | 13 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- bem
- fon
- ha
- ig
- kin
- sw
- wo
- yo
- zu
- tw
pretty_name: AfriQA
size_categories:
- 10K<n<100K
multilinguality:
- multilingual
tags:
- cross-lingual
- question-answering
- qa
---
# Dataset Card for AfriQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://github.com/masakhane-io/afriqa)
- **Repository:** [github](https://github.com/masakhane-io/afriqa)
- **Paper:** [paper]()
- **Point of Contact:** [Masakhane](https://www.masakhane.io/) or oogundep@uwaterloo.ca
### Dataset Summary
AfriQA is the first cross-lingual question answering (QA) dataset with a focus on African languages. The dataset includes over 12,000 XOR QA examples across 10 African languages, making it an invaluable resource for developing more equitable QA technology.
The train/validation/test sets are available for all the 10 languages.
### Supported Tasks and Leaderboards
- `question-answering`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better) and [Exact Match Accuracy](https://huggingface.co/spaces/evaluate-metric/exact_match).
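As an illustration, exact match can be computed with a SQuAD-style normalized comparison. This is a simplified sketch, not the official evaluation script:

```python
import re
import string

def normalize(text):
    # SQuAD-style normalization: lowercase, drop punctuation,
    # remove English articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, references):
    # A prediction counts as correct if it matches any reference answer.
    return any(normalize(prediction) == normalize(ref) for ref in references)

print(exact_match("Yes.", ["yes"]))  # True
```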
### Languages
There are 10 languages available:
- Bemba (bem)
- Fon (fon)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Swahili (swa)
- Twi (twi)
- Wolof (wol)
- Yorùbá (yor)
- Zulu (zul)
## Dataset Structure
### Data Instances
- Data Format:
- id : Question ID
- question : Question in African Language
- translated_question : Question translated into a pivot language (English/French)
- answers : Answer in African Language
- lang : Datapoint Language (African Language) e.g `bem`
- split : Dataset Split
- translated_answer : Answer in Pivot Language
- translation_type : Translation type of question and answers
```bash
{ "id": 0,
"question": "Bushe icaalo ca Egypt caali tekwapo ne caalo cimbi?",
"translated_question": "Has the country of Egypt been colonized before?",
"answers": "['Emukwai']",
"lang": "bem",
"split": "dev",
"translated_answer": "['yes']",
"translation_type": "human_translation"
}
```
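Note that the `answers` and `translated_answer` fields in the example above are serialized as stringified Python lists. A minimal sketch for decoding them (field names taken from the example above):

```python
import ast

def parse_answers(example):
    # "answers" is stored as a stringified Python list (e.g. "['Emukwai']");
    # ast.literal_eval safely converts it back to a real list.
    example = dict(example)
    example["answers"] = ast.literal_eval(example["answers"])
    return example

sample = {"id": 0, "answers": "['Emukwai']", "translated_answer": "['yes']"}
print(parse_answers(sample)["answers"])  # ['Emukwai']
```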
### Data Splits
For all languages, there are three splits.
The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes :
| Language | train | dev | test |
|-----------------|------:|-----------:|-----:|
| Bemba | 502 | 503 | 314 |
| Fon | 427 | 428 | 386 |
| Hausa | 435 | 436 | 300 |
| Igbo | 417 | 418 | 409 |
| Kinyarwanda | 407 | 409 | 347 |
| Swahili | 415 | 417 | 302 |
| Twi | 451 | 452 | 490 |
| Wolof | 503 | 504 | 334 |
| Yoruba | 360 | 361 | 332 |
| Zulu | 387 | 388 | 325 |
| <b>Total</b> | <b>4333</b> | <b>4346</b> |<b>3560</b> |
## Dataset Creation
### Curation Rationale
The dataset was created to provide question-answering resources for 10 African languages that are under-served in natural language processing.
[More Information Needed]
### Source Data
...
#### Initial Data Collection and Normalization
...
#### Who are the source language producers?
...
### Annotations
#### Annotation process
Details can be found here ...
#### Who are the annotators?
Annotators were recruited from [Masakhane](https://www.masakhane.io/)
### Personal and Sensitive Information
...
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
### Licensing Information
The licensing status of the data is CC 4.0 Non-Commercial
### Citation Information
The [BibTex](http://www.bibtex.org/)-formatted reference for the dataset:
```
@misc{ogundepo2023afriqa,
title={AfriQA: Cross-lingual Open-Retrieval Question Answering for African Languages},
author={Odunayo Ogundepo and Tajuddeen R. Gwadabe and Clara E. Rivera and Jonathan H. Clark and Sebastian Ruder and David Ifeoluwa Adelani and Bonaventure F. P. Dossou and Abdou Aziz DIOP and Claytone Sikasote and Gilles Hacheme and Happy Buzaaba and Ignatius Ezeani and Rooweither Mabuya and Salomey Osei and Chris Emezue and Albert Njoroge Kahira and Shamsuddeen H. Muhammad and Akintunde Oladipo and Abraham Toluwase Owodunni and Atnafu Lambebo Tonja and Iyanuoluwa Shode and Akari Asai and Tunde Oluwaseyi Ajayi and Clemencia Siro and Steven Arthur and Mofetoluwa Adeyemi and Orevaoghene Ahia and Aremu Anuoluwapo and Oyinkansola Awosan and Chiamaka Chukwuneke and Bernard Opoku and Awokoya Ayodele and Verrah Otiende and Christine Mwase and Boyd Sinkala and Andre Niyongabo Rubungo and Daniel A. Ajisafe and Emeka Felix Onwuegbuzia and Habib Mbow and Emile Niyomutabazi and Eunice Mukonde and Falalu Ibrahim Lawan and Ibrahim Said Ahmad and Jesujoba O. Alabi and Martin Namukombo and Mbonu Chinedu and Mofya Phiri and Neo Putini and Ndumiso Mngoma and Priscilla A. Amuok and Ruqayya Nasir Iro and Sonia Adhiambo},
year={2023},
eprint={2305.06897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@ToluClassics](https://github.com/ToluClassics) for adding this dataset. |
npvinHnivqn/VietEngDictionary | 2023-07-15T15:50:33.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:vi",
"language:en",
"license:afl-3.0",
"region:us"
] | npvinHnivqn | null | null | null | 0 | 13 | ---
license: afl-3.0
task_categories:
- translation
language:
- vi
- en
size_categories:
- 10K<n<100K
--- |
heliosbrahma/mental_health_conversational_dataset | 2023-07-22T11:30:56.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:mit",
"medical",
"region:us"
] | heliosbrahma | null | null | null | 2 | 13 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 102904
num_examples: 154
download_size: 60865
dataset_size: 102904
license: mit
task_categories:
- text-generation
- conversational
language:
- en
tags:
- medical
pretty_name: Mental Health Conversational Dataset
size_categories:
- n<1K
---
# Dataset Card for "mental_health_conversational_dataset"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Dataset Description
### Dataset Summary
This dataset contains conversational pairs of questions and answers in a single text field related to mental health. The dataset was curated from healthcare websites, popular blogs like WebMD and HealthLine, online FAQs, etc. All questions and answers have been anonymized to remove any PII data and pre-processed to remove any unwanted characters.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance includes a text column which is a conversational pair of questions and answers. Questions were asked by patients and answers were given by healthcare providers.
### Data Fields
- 'text': conversational pair of questions and answers between patient and healthcare provider.
## Dataset Creation
### Curation Rationale
Chatbots offer a readily available and accessible platform for individuals seeking support. They can be accessed anytime and anywhere, providing immediate assistance to those in need. Chatbots can offer empathetic and non-judgmental responses, providing emotional support to users. While they cannot replace human interaction entirely, they can be a helpful supplement, especially in moments of distress.
Hence, this dataset was curated to help fine-tune a conversational AI bot on this custom data, which can then be deployed as a chatbot for the end patient.
### Source Data
This dataset was curated from healthcare websites, popular blogs like WebMD and HealthLine, online FAQs, etc.
### Personal and Sensitive Information
The dataset may contain sensitive information related to mental health. All questions and answers have been anonymized to remove any PII data. |
emozilla/booksum-summary-analysis_llama-16384 | 2023-07-23T18:24:22.000Z | [
"region:us"
] | emozilla | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: chapter
dtype: string
- name: text
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 210534702.2666892
num_examples: 11808
- name: validation
num_bytes: 43846669.0
num_examples: 2234
- name: test
num_bytes: 27106410.273220748
num_examples: 1657
download_size: 134314056
dataset_size: 281487781.53990996
---
# Dataset Card for "booksum-summary-analysis_llama-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/airoboros-gpt4-1.4_alpaca | 2023-07-27T18:42:45.000Z | [
"region:us"
] | HydraLM | null | null | null | 1 | 13 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 58132330
num_examples: 34203
download_size: 0
dataset_size: 58132330
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "airoboros-gpt4-1.4_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/chemistry_dataset_standardized | 2023-07-27T17:15:19.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 13 | Entry not found |
iulusoy/test-data-2 | 2023-10-04T11:21:03.000Z | [
"region:us"
] | iulusoy | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: Sentences
sequence: string
- name: Labels
sequence: int64
- name: Span_begin
sequence: int64
- name: Span_end
sequence: int64
- name: Span_label
sequence: string
splits:
- name: train
num_bytes: 36481
num_examples: 103
download_size: 11243
dataset_size: 36481
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test-data-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
foduucom/table-detection-yolo | 2023-08-05T14:42:23.000Z | [
"task_categories:object-detection",
"size_categories:1K<n<10K",
"language:en",
"foduuai",
"table",
"Documents",
"bordered table",
"borderless table",
"unstructured document",
"region:us"
] | foduucom | null | null | null | 5 | 13 | ---
task_categories:
- object-detection
tags:
- foduuai
- table
- Documents
- bordered table
- borderless table
- unstructured document
language:
- en
pretty_name: TableBorderNet
size_categories:
- 1K<n<10K
---
<div align="center">
<img width="640" alt="foduucom/table-detection-yolo" src="https://huggingface.co/datasets/foduucom/table-detection-yolo/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['bordered', 'borderless']
```
### Number of Images
```json
{'test': 34, 'train': 238, 'valid': 70}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("foduucom/table-detection-yolo", name="full")
example = ds['train'][0]
```
### Dataset Summary
The **Table Detection Dataset** is a curated collection of images, each depicting tables that are classified as either 'bordered' or 'borderless'. The dataset is provided in YOLO format, featuring annotations for accurate object detection and classification. It serves as a valuable resource for researchers, developers, and practitioners working on table detection tasks, with a specific focus on distinguishing between tables with distinct visual characteristics.
**Key Features:**
- **Image Variety:** The dataset encompasses a diverse range of images, capturing tables from various real-world scenarios and environments.
- **Annotation Precision:** Each image is meticulously annotated with bounding box coordinates and class labels, indicating whether the table is 'bordered' or 'borderless'.
- **YOLO Format:** Annotations follow the YOLO format, making it suitable for training and evaluating object detection models.
- **Research and Development:** The dataset is designed to facilitate advancements in table detection algorithms and technologies, enabling the development of models capable of accurately identifying and classifying different types of tables.
Whether you are working on document analysis, data extraction, or image-based content recognition, the Table Detection Dataset provides an essential foundation for enhancing the capabilities of object detection models in identifying tables with varying visual attributes. By offering a comprehensive collection of border and borderless tables, this dataset empowers the AI community to tackle challenges in table detection across a wide range of applications.
For more details and access to the dataset, please contact info@foduu.com.
owkin/medical_knowledge_from_extracts | 2023-09-22T09:37:42.000Z | [
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"region:us"
] | owkin | null | null | null | 2 | 13 | ---
license: apache-2.0
task_categories:
- summarization
language:
- en
---
This dataset is used to train LLMs for medical knowledge extraction tasks.
duckaiml/mc4_310 | 2023-08-19T22:29:41.000Z | [
"license:other",
"region:us"
] | duckaiml | null | null | null | 0 | 13 | ---
license: other
dataset_info:
config_name: ko
features:
- name: source
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: added
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: metadata
struct:
- name: url
dtype: string
- name: lang
struct:
- name: ko.tfrecord
dtype: float64
splits:
- name: train
num_bytes: 151177516676
num_examples: 24035493
download_size: 16185376673
dataset_size: 151177516676
configs:
- config_name: ko
data_files:
- split: train
path: ko/train-*
---
mc4 but in an HPC-friendly parquet format (32GiB shards).
Attribution, license, copyright info: [Google](https://www.tensorflow.org/datasets/catalog/c4) and [AI^2](https://huggingface.co/datasets/allenai/c4) for producing and uploading them.
|
Ichsan2895/alpaca-gpt4-indonesian | 2023-08-19T13:08:53.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:id",
"license:cc-by-sa-4.0",
"region:us"
] | Ichsan2895 | null | null | null | 2 | 13 | ---
license: cc-by-sa-4.0
language:
- id
size_categories:
- 10K<n<100K
task_categories:
- question-answering
---
Base model : [FreedomIntelligence/alpaca-gpt4-indonesian](https://huggingface.co/datasets/FreedomIntelligence/alpaca-gpt4-indonesian)
We wrangled the original dataset format into an 'input' & 'output' format. For example:
BEFORE:
```
[ { "from": "human",
"value": "Saranlah slogan untuk kampanye daur ulang\n" },
{ "from": "gpt",
"value": "1. \"Kurangi, gunakan kembali, daur ulang: Bersama untuk masa depan yang lebih hijau.\"\n2. \
"Daur ulanglah hari ini, untuk masa depan yang lebih baik.\"\n3. \"Ubah sampahmu menjadi harta karun - Daur ulang!\"\n4. \
"Daur ulang untuk siklus kehidupan.\"\n5. \"Simpan sumber daya, daur ulang lebih banyak.\""
}
]
```
AFTER:
| input | output |
| ---- | ---- |
| Saranlah slogan untuk kampanye daur ulang\n | 1. "Kurangi, gunakan kembali, daur ulang: Bersama untuk masa depan yang lebih hijau.\"\n2. "Daur ulanglah hari ini, untuk masa depan yang lebih baik.\"\n3. \"Ubah sampahmu menjadi harta karun - Daur ulang!\"\n4. "Daur ulang untuk siklus kehidupan.\"\n5. \"Simpan sumber daya, daur ulang lebih banyak. |
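The conversion above can be sketched as follows. This is a minimal illustration, assuming the BEFORE format is a list of alternating human/gpt turns:

```python
def to_input_output(conversation):
    # conversation: list of {"from": "human"|"gpt", "value": ...} turns.
    # Pair each human turn with the gpt turn that immediately follows it.
    pairs = []
    for turn, reply in zip(conversation, conversation[1:]):
        if turn["from"] == "human" and reply["from"] == "gpt":
            pairs.append({"input": turn["value"], "output": reply["value"]})
    return pairs

before = [
    {"from": "human", "value": "Saranlah slogan untuk kampanye daur ulang\n"},
    {"from": "gpt", "value": '1. "Kurangi, gunakan kembali, daur ulang..."'},
]
print(to_input_output(before)[0]["input"])
```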
## CITATION
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
@software{Chen_MultilingualSIFT_Multilingual_Supervised_2023,
author = {Chen, Zhihong and Yan, Shuo and Liang, Juhao and Jiang, Feng and Wu, Xiangbo and Yu, Fei and Chen, Guiming Hardy and Chen, Junying and Zhang, Hongbo and Li Jianquan and Wan Xiang and Wang, Benyou},
month = jul,
title = {{MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning}},
url = {https://github.com/FreedomIntelligence/MultilingualSIFT.git},
version = {0.1},
year = {2023}
}
``` |
fridriik/mental-health-arg-post-quarantine-covid19-dataset | 2023-08-27T18:13:37.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:es",
"license:cc-by-nc-4.0",
"region:us"
] | fridriik | null | null | null | 0 | 13 | ---
license: cc-by-nc-4.0
task_categories:
- tabular-classification
language:
- es
pretty_name: Mental health of people in Argentina post quarantine COVID-19 Dataset
size_categories:
- 1K<n<10K
---
# Mental health of people in Argentina post quarantine COVID-19 Dataset
### Dataset Summary
Dataset modified for research from:
"Levels and predictors of depression, anxiety, and suicidal risk during COVID-19 pandemic in Argentina:
The impacts of quarantine extensions on mental health state", created by López Steinmetz, Lorena Cecilia, for Universidad Nacional de Córdoba.
Facultad de Psicología; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas.
Instituto de Investigaciones Psicológicas; Argentina.
http://hdl.handle.net/11086/20168
The dataset underwent modifications as follows:
SUB PERIODS and SEX columns were removed.
Rows with PROVINCE equal to 'Otro' or 'other' were removed.
Additionally, rows with EDUCATION equal to 'Otro' were removed.
The following columns were transformed from non-numeric values to numeric values:
```
'MENTAL DISORDER HISTORY': {'no': 0, 'yes': 50}
'EDUCATION': {
'Completed postgraduate': 30,
'Incomplete tertiary or university': 60,
'Completed high school': 70,
'Incomplete postgraduate': 40,
'Completed tertiary or university': 50,
'Incomplete high school': 80,
'Incomplete elementary school': 100,
'Completed elementary school': 90}
'SUIC ATTEMPT HISTORY': {'ideation': 50, 'no': 0, 'yes': 100}
'LIVING WITH SOMEBODY': {'no': 20, 'yes': 0}
'ECONOMIC INCOME': {'yes': 0, 'no': 50}
```
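A minimal sketch of applying these mappings to a row, using plain dictionaries (the column names follow the card above; the example row is hypothetical):

```python
mappings = {
    "MENTAL DISORDER HISTORY": {"no": 0, "yes": 50},
    "SUIC ATTEMPT HISTORY": {"ideation": 50, "no": 0, "yes": 100},
    "LIVING WITH SOMEBODY": {"no": 20, "yes": 0},
    "ECONOMIC INCOME": {"yes": 0, "no": 50},
}

def encode_row(row):
    # Replace non-numeric values with the numeric codes above;
    # columns without a mapping pass through unchanged.
    return {k: mappings.get(k, {}).get(v, v) for k, v in row.items()}

row = {"MENTAL DISORDER HISTORY": "yes", "ECONOMIC INCOME": "no", "AGE": 30}
print(encode_row(row))  # {'MENTAL DISORDER HISTORY': 50, 'ECONOMIC INCOME': 50, 'AGE': 30}
```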
Furthermore, a new column 'REGION' was added to provinces according to the following assignment function:
```
def assign_region(province):
if province in ['Corrientes', 'Chaco', 'Misiones', 'Formosa', 'Entre Ríos']:
return 'Nordeste-Litoral'
elif province in ['Tucumán', 'Jujuy', 'Salta', 'Catamarca', 'Santiago del Estero']:
return 'Noroeste'
elif province in ['San Luis', 'San Juan', 'Mendoza', 'La Rioja']:
return 'Cuyo'
elif province in ['Neuquén', 'Río Negro', 'La Pampa']:
return 'Patagonia Centro-Norte'
elif province in ['Tierra del Fuego', 'Santa Cruz', 'Chubut']:
return 'Patagonia Centro-Sur'
elif province == 'Santa Fe':
return 'Santa Fe'
elif province == 'Buenos Aires provincia':
return 'Buenos Aires'
elif province == 'Córdoba':
return 'Córdoba'
else:
return 'CABA'
```
### Supported Tasks and Leaderboards
`mental-health-arg-post-quarantine-covid19-model`:
The dataset can be used to train a model for Mental health of people in Argentina post quarantine COVID-19.
### Languages
The text in the dataset is in Spanish and English
## Dataset Structure
### Data Instances
```
{
'EDUCATION': '30',
'PROVINCE': 'CABA (Buenos Aires capital)',
'AGE': '30',
'MENTAL DISORDER HISTORY': '0',
'SUIC ATTEMPT HISTORY': '50',
'LIVING WITH SOMEBODY': '20'
'ECONOMIC INCOME': '0',
'DEPRESSION': '21',
'SUIC RISK': '37',
'ANXIETY STATE': '54',
'ANXIETY TRAIT': '40',
'REGION': 'CABA'
}
```
### Data Fields
- `EDUCATION`: Maximum level of education attained by the individual, modified:
'Completed postgraduate': 30,
'Incomplete tertiary or university': 60,
'Completed high school': 70,
'Incomplete postgraduate': 40,
'Completed tertiary or university': 50,
'Incomplete high school': 80,
'Incomplete elementary school': 100,
'Completed elementary school': 90
- `PROVINCE`: Name of the province where the individual resides.
- `AGE`: Age of the individual.
- `MENTAL DISORDER HISTORY`: If the individual has a history of mental disorder, modified: 'no': 0, 'yes': 50.
- `SUIC ATTEMPT HISTORY`: If the individual has a history of suicide attempt, modified: 'ideation': 50, 'no': 0, 'yes': 100.
- `LIVING WITH SOMEBODY`: If the individual lives alone or not, modified: 'no': 20, 'yes': 0.
- `ECONOMIC INCOME`: If the individual has an economic income, modified: 'yes': 0, 'no': 50.
- `DEPRESSION`: Level of depression of the individual.
- `SUIC RISK`: Level of suicide risk of the individual.
- `ANXIETY STATE`: Level of anxiety state at the moment of the individual.
- `ANXIETY TRAIT`: Level of anxiety predisposition of the individual.
- `REGION`: Name of the region where the individual resides.
## Dataset Creation
### Curation Rationale
This dataset was built for research.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained and created by López Steinmetz, Lorena Cecilia.
#### Who are the source language producers?
López Steinmetz, Lorena Cecilia.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is for research, it has data about serious topics related to individuals' mental health.
It should not be taken as practical advice for real-life situations, except for the possibility that in the future,
the dataset could be improved and discussions with its authors could facilitate extended usage.
## Additional Information
### Dataset Curators
The dataset was initially created by López Steinmetz, Lorena Cecilia, and modified by Farias Federico, Arroyo Guadalupe and Avalos Manuel.
### Licensing Information
Except where otherwise noted, this item's license is described as
Atribución-NoComercial 4.0 Internacional (http://creativecommons.org/licenses/by-nc/4.0/).
|
dim/horoscopes_ru_1k | 2023-08-31T22:51:58.000Z | [
"region:us"
] | dim | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prediction
dtype: string
splits:
- name: train
num_bytes: 952167
num_examples: 1000
download_size: 462523
dataset_size: 952167
---
# Dataset Card for "horoscopes_ru_1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
distil-whisper/peoples_speech-dirty | 2023-09-25T10:30:13.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | distil-whisper | The People's Speech is a free-to-download 30,000-hour and growing supervised
conversational English speech recognition dataset licensed for academic and
commercial usage under CC-BY-SA (with a CC-BY subset). | @article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Ceron and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: A Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 13 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: People's Speech Other
---
# Distil Whisper: People's Speech Other
This is a variant of the [People's Speech Other](https://huggingface.co/datasets/MLCommons/peoples_speech) dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/MLCommons/peoples_speech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-dirty", "dirty")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-dirty", "dirty", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
AlexWortega/habr_qa_sbs | 2023-09-04T09:49:31.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:apache-2.0",
"code",
"finance",
"region:us"
] | AlexWortega | null | null | null | 3 | 13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: best
dtype: string
- name: bad
dtype: string
splits:
- name: train
num_bytes: 119263751
num_examples: 102558
download_size: 66726288
dataset_size: 119263751
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- ru
tags:
- code
- finance
pretty_name: habr_qa_sbs
size_categories:
- 10K<n<100K
---
# Habr sbs qa
The dataset is based on the habr qa site; the best answer is the one that has likes, the worst is the one with the fewest likes.
The dataset was collected by [Love.Death.Transformers.](https://t.me/lovedeathtransformers) and [Дата-Утренник](https://t.me/data_morning)
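A minimal sketch for turning the `question`/`best`/`bad` columns (from the schema above) into preference-tuning triples; the output keys follow the common prompt/chosen/rejected convention used by DPO-style training code:

```python
def to_preference_pair(example):
    # Map a habr_qa_sbs row onto the (prompt, chosen, rejected) triple
    # commonly expected by preference-optimization libraries.
    return {
        "prompt": example["question"],
        "chosen": example["best"],
        "rejected": example["bad"],
    }

row = {"question": "q", "best": "good answer", "bad": "weak answer"}
print(to_preference_pair(row)["chosen"])  # good answer
```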
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dot-ammar/AR-dotless-small | 2023-09-11T15:35:39.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:ar",
"region:us"
] | dot-ammar | null | null | null | 0 | 13 | ---
language:
- ar
size_categories:
- 10K<n<100K
task_categories:
- translation
pretty_name: f
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: clean
dtype: string
- name: dotless
dtype: string
splits:
- name: train
num_bytes: 18718829.46787407
num_examples: 103403
download_size: 10451596
dataset_size: 18718829.46787407
---
# Dataset Card for "AR-dotless-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nzindoc/dataset-multiple-myeloma | 2023-09-11T14:43:31.000Z | [
"license:apache-2.0",
"region:us"
] | nzindoc | null | null | null | 0 | 13 | ---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 903374
num_examples: 1012
download_size: 75259
dataset_size: 903374
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JoseAntonioPer/data-for-llama2 | 2023-09-23T23:41:51.000Z | [
"region:us"
] | JoseAntonioPer | null | null | null | 0 | 13 | Entry not found |
jaejoo/llama-2-ko-law | 2023-09-08T05:13:09.000Z | [
"size_categories:1K<n<10K",
"language:ko",
"license:apache-2.0",
"legal",
"region:us"
] | jaejoo | null | null | null | 4 | 13 | ---
license: apache-2.0
language:
- ko
tags:
- legal
size_categories:
- 1K<n<10K
--- |
ChrisZhang312/law_stackexchange_cleaned | 2023-09-08T21:40:50.000Z | [
"region:us"
] | ChrisZhang312 | null | null | null | 2 | 13 | Entry not found |
dongyoung4091/shp_with_features_20k_flan_t5_large_external_rm1 | 2023-09-12T09:10:28.000Z | [
"region:us"
] | dongyoung4091 | null | null | null | 0 | 13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: post_id
dtype: string
- name: domain
dtype: string
- name: upvote_ratio
dtype: float64
- name: history
dtype: string
- name: c_root_id_A
dtype: string
- name: c_root_id_B
dtype: string
- name: created_at_utc_A
dtype: int64
- name: created_at_utc_B
dtype: int64
- name: score_A
dtype: int64
- name: score_B
dtype: int64
- name: human_ref_A
dtype: string
- name: human_ref_B
dtype: string
- name: labels
dtype: int64
- name: seconds_difference
dtype: float64
- name: score_ratio
dtype: float64
- name: helpfulness_A
dtype: float64
- name: helpfulness_B
dtype: float64
- name: specificity_A
dtype: float64
- name: specificity_B
dtype: float64
- name: intent_A
dtype: float64
- name: intent_B
dtype: float64
- name: factuality_A
dtype: float64
- name: factuality_B
dtype: float64
- name: easy-to-understand_A
dtype: float64
- name: easy-to-understand_B
dtype: float64
- name: relevance_A
dtype: float64
- name: relevance_B
dtype: float64
- name: readability_A
dtype: float64
- name: readability_B
dtype: float64
- name: enough-detail_A
dtype: float64
- name: enough-detail_B
dtype: float64
- name: biased:_A
dtype: float64
- name: biased:_B
dtype: float64
- name: fail-to-consider-individual-preferences_A
dtype: float64
- name: fail-to-consider-individual-preferences_B
dtype: float64
- name: repetetive_A
dtype: float64
- name: repetetive_B
dtype: float64
- name: fail-to-consider-context_A
dtype: float64
- name: fail-to-consider-context_B
dtype: float64
- name: too-long_A
dtype: float64
- name: too-long_B
dtype: float64
- name: __index_level_0__
dtype: int64
- name: log_score_A
dtype: float64
- name: log_score_B
dtype: float64
- name: external_rm1_A
dtype: float64
- name: external_rm1_B
dtype: float64
splits:
- name: train
num_bytes: 20858406
num_examples: 9459
- name: test
num_bytes: 20811284
num_examples: 9459
download_size: 24209228
dataset_size: 41669690
---
# Dataset Card for "shp_with_features_20k_flan_t5_large_external_rm1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
roa7n/maltaomics_dataset_small | 2023-09-11T09:42:01.000Z | [
"region:us"
] | roa7n | null | null | null | 0 | 13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: seq
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 466637
num_examples: 1600
- name: test
num_bytes: 118558
num_examples: 400
download_size: 570273
dataset_size: 585195
---
# Dataset Card for "maltaomics_dataset_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
judy93536/pharsebank_5k | 2023-09-13T00:12:28.000Z | [
"region:us"
] | judy93536 | null | null | null | 0 | 13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 542147
num_examples: 3999
- name: test
num_bytes: 137048
num_examples: 999
download_size: 379517
dataset_size: 679195
---
# Dataset Card for "pharsebank_5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
phanvancongthanh/ChEMBL | 2023-09-28T00:26:48.000Z | [
"region:us"
] | phanvancongthanh | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: standardized_smiles
dtype: string
splits:
- name: train
num_bytes: 144970534.8961496
num_examples: 2372527
download_size: 76075633
dataset_size: 144970534.8961496
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ChEMBL"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
miraeconan/network-data | 2023-09-10T14:17:10.000Z | [
"license:cc0-1.0",
"region:us"
] | miraeconan | null | null | null | 0 | 13 | ---
license: cc0-1.0
---
|
pietrolesci/imdb | 2023-09-11T16:19:05.000Z | [
"region:us"
] | pietrolesci | null | null | null | 0 | 13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
data_files:
- split: train
path: embedding_all-MiniLM-L12-v2/train-*
- split: test
path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
data_files:
- split: train
path: embedding_all-mpnet-base-v2/train-*
- split: test
path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
data_files:
- split: train
path: embedding_multi-qa-mpnet-base-dot-v1/train-*
- split: test
path: embedding_multi-qa-mpnet-base-dot-v1/test-*
dataset_info:
- config_name: default
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': neg
'1': pos
- name: uid
dtype: int64
splits:
- name: train
num_bytes: 33632823
num_examples: 25000
- name: test
num_bytes: 32850685
num_examples: 25000
download_size: 41729077
dataset_size: 66483508
- config_name: embedding_all-MiniLM-L12-v2
features:
- name: uid
dtype: int64
- name: embedding_all-MiniLM-L12-v2
sequence: float32
splits:
- name: train
num_bytes: 38700000
num_examples: 25000
- name: test
num_bytes: 38700000
num_examples: 25000
download_size: 108242075
dataset_size: 77400000
- config_name: embedding_all-mpnet-base-v2
features:
- name: uid
dtype: int64
- name: embedding_all-mpnet-base-v2
sequence: float32
splits:
- name: train
num_bytes: 77100000
num_examples: 25000
- name: test
num_bytes: 77100000
num_examples: 25000
download_size: 185073496
dataset_size: 154200000
- config_name: embedding_multi-qa-mpnet-base-dot-v1
features:
- name: uid
dtype: int64
- name: embedding_multi-qa-mpnet-base-dot-v1
sequence: float32
splits:
- name: train
num_bytes: 77100000
num_examples: 25000
- name: test
num_bytes: 77100000
num_examples: 25000
download_size: 185072395
dataset_size: 154200000
---
# Dataset Card for "imdb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
avisheknayak/testad1 | 2023-09-12T07:04:54.000Z | [
"task_categories:summarization",
"size_categories:n<1K",
"language:en",
"region:us"
] | avisheknayak | null | null | null | 0 | 13 | ---
task_categories:
- summarization
language:
- en
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
vinc0814/123 | 2023-09-12T05:52:07.000Z | [
"license:afl-3.0",
"region:us"
] | vinc0814 | null | null | null | 0 | 13 | ---
license: afl-3.0
---
|
kamaludeen/fututech-colorectal-cancer | 2023-09-13T01:17:03.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"microbiome",
"tabular",
"gut-microbiota",
"region:us"
] | kamaludeen | null | null | null | 0 | 13 | ---
task_categories:
- tabular-classification
tags:
- microbiome
- tabular
- gut-microbiota
pretty_name: Colorectal Carcinoma Feng Q 2015
size_categories:
- n<1K
---
## Publication Abstract
Colorectal cancer, a commonly diagnosed cancer in the elderly, often develops slowly from benign polyps called adenoma. The gut microbiota is believed to be directly involved in colorectal carcinogenesis. The identity and functional capacity of the adenoma- or carcinoma-related gut microbe(s), however, have not been surveyed in a comprehensive manner. Here we perform a metagenome-wide association study (MGWAS) on stools from advanced adenoma and carcinoma patients and from healthy subjects, revealing microbial genes, strains and functions enriched in each group. An analysis of potential risk factors indicates that high intake of red meat relative to fruits and vegetables appears to associate with outgrowth of bacteria that might contribute to a more hostile gut environment. These findings suggest that faecal microbiome-based strategies may be useful for early diagnosis and treatment of colorectal adenoma or carcinoma.
## Dataset
156 metagenomic shotgun-sequenced faecal samples from colorectal adenoma and carcinoma patients and healthy controls
### Configurations
- `presence-absence`
- `CLR`
## Usage
```python
from datasets import load_dataset
import numpy as np

dataset = load_dataset("wwydmanski/colorectal-carcinoma-microbiome-fengq", "presence-absence")
train_dataset, test_dataset = dataset['train'], dataset['test']

X_train = np.array(train_dataset['values'])
y_train = np.array(train_dataset['target'])

X_test = np.array(test_dataset['values'])
y_test = np.array(test_dataset['target'])
```
totally-not-an-llm/alpacamix | 2023-09-14T23:50:13.000Z | [
"license:other",
"region:us"
] | totally-not-an-llm | null | null | null | 1 | 13 | ---
license: other
---
|
nzindoc/dataset-multiple-myeloma-study-dictionary | 2023-09-14T04:00:10.000Z | [
"region:us"
] | nzindoc | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1285724160
num_examples: 1975680
download_size: 30419669
dataset_size: 1285724160
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset-multiple-myeloma-study-dictionary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChanHE/data_for_rag | 2023-09-14T07:22:46.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:zh",
"region:us"
] | ChanHE | null | null | null | 0 | 13 | ---
task_categories:
- text-generation
language:
- zh
size_categories:
- n<1K
--- |
HydraLM/clustered_1 | 2023-09-14T10:21:47.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_conversation_id
dtype: string
- name: embedding
sequence: float64
- name: text_processed
dtype: string
- name: __index_level_0__
dtype: int64
- name: cluster
sequence: int64
splits:
- name: train
num_bytes: 17476162280
num_examples: 1472917
download_size: 12523176003
dataset_size: 17476162280
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "clustered_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/b799dbcd | 2023-09-14T18:55:19.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 201
num_examples: 10
download_size: 1351
dataset_size: 201
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "b799dbcd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/6f7d81b5 | 2023-09-14T19:03:59.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 172
num_examples: 10
download_size: 1326
dataset_size: 172
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "6f7d81b5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/e2d73282 | 2023-09-14T19:16:09.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 159
num_examples: 10
download_size: 1312
dataset_size: 159
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e2d73282"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/909d509a | 2023-09-14T20:32:33.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 221
num_examples: 10
download_size: 1393
dataset_size: 221
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "909d509a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/c3f0d903 | 2023-09-14T20:36:26.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1372
dataset_size: 180
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c3f0d903"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/f5ba0a23 | 2023-09-14T21:06:54.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 204
num_examples: 10
download_size: 1375
dataset_size: 204
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "f5ba0a23"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/707d50d0 | 2023-09-16T12:04:34.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 150
num_examples: 10
download_size: 1285
dataset_size: 150
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "707d50d0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/6971f242 | 2023-09-16T20:24:51.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 227
num_examples: 10
download_size: 1445
dataset_size: 227
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "6971f242"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/31ba9674 | 2023-09-16T20:24:52.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 227
num_examples: 10
download_size: 1445
dataset_size: 227
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "31ba9674"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |