| id (string, 2–115 chars) | lastModified (string, 24 chars) | tags (list) | author (string, 2–42 chars, nullable) | description (string, 0–6.67k chars, nullable) | citation (string, 0–10.7k chars, nullable) | likes (int64, 0–3.66k) | downloads (int64, 0–8.89M) | created (timestamp[us]) | card (string, 11–977k chars) | card_len (int64, 11–977k) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
FinGPT/fingpt-headline | 2023-10-10T06:31:55.000Z | [
"region:us"
] | FinGPT | null | null | 1 | 25 | 2023-10-10T06:31:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 13343930
num_examples: 82161
- name: test
num_bytes: 3339415
num_examples: 20547
download_size: 647377
dataset_size: 16683345
---
# Dataset Card for "fingpt-headline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 621 | [
[
-0.047149658203125,
-0.0276336669921875,
0.0174407958984375,
0.021514892578125,
-0.0209808349609375,
-0.005420684814453125,
0.0145721435546875,
-0.01424407958984375,
0.05157470703125,
0.04461669921875,
-0.05548095703125,
-0.05072021484375,
-0.04608154296875,
... |
tomashs/LSC_acronyms_topic_vectors_128 | 2023-10-10T23:26:07.000Z | [
"region:us"
] | tomashs | null | null | 0 | 25 | 2023-10-10T23:23:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: short_form
dtype: string
- name: long_form
dtype: string
- name: label
dtype: int64
- name: topic_vector
sequence: float64
splits:
- name: train
num_bytes: 469862809
num_examples: 352720
- name: validation
num_bytes: 100339691
num_examples: 75339
- name: test
num_bytes: 100732958
num_examples: 75540
download_size: 604818064
dataset_size: 670935458
---
# Dataset Card for "LSC_acronyms_topic_vectors_128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 843 | [
[
-0.045013427734375,
-0.01255035400390625,
0.015350341796875,
0.007053375244140625,
-0.02630615234375,
0.0205230712890625,
0.0208892822265625,
0.01071929931640625,
0.0701904296875,
0.01401519775390625,
-0.0596923828125,
-0.058380126953125,
-0.049774169921875,
... |
sordonia/platy_icl0_maxD1000000_maxC1000_2 | 2023-10-12T00:00:44.000Z | [
"region:us"
] | sordonia | null | null | 0 | 25 | 2023-10-12T00:00:31 | ## model_setting_name: platy
## max_context_length: 512
## icl_examples: 0
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 1000000
## max_contexts_per_subject: 1000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
| 291 | [
[
-0.034637451171875,
-0.029052734375,
0.028656005859375,
0.03778076171875,
-0.02764892578125,
-0.0174407958984375,
-0.004032135009765625,
0.0157318115234375,
-0.00732421875,
0.0361328125,
-0.0638427734375,
-0.0416259765625,
-0.02734375,
0.0178680419921875,
... |
Ahmed007/nadsoft-jo-data | 2023-10-18T11:17:39.000Z | [
"region:us"
] | Ahmed007 | null | null | 0 | 25 | 2023-10-18T11:17:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 608242960.1156812
num_examples: 4539
- name: test
num_bytes: 67671893.9183188
num_examples: 505
download_size: 661582950
dataset_size: 675914854.0339999
---
# Dataset Card for "nadsoft-jo-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 605 | [
[
-0.04193115234375,
-0.0207366943359375,
0.01531219482421875,
0.021728515625,
-0.015655517578125,
-0.002338409423828125,
0.01462554931640625,
-0.0146484375,
0.059051513671875,
0.042205810546875,
-0.06817626953125,
-0.060394287109375,
-0.03680419921875,
-0.007... |
quyanh/dolly | 2023-10-24T15:59:27.000Z | [
"region:us"
] | quyanh | null | null | 0 | 25 | 2023-10-19T03:34:09 | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: inputs
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 14079200
num_examples: 15011
download_size: 7841758
dataset_size: 14079200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dolly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 517 | [
[
-0.02880859375,
-0.0221099853515625,
-0.0010938644409179688,
0.016510009765625,
-0.01035308837890625,
-0.0086822509765625,
0.0386962890625,
-0.01010894775390625,
0.0640869140625,
0.047454833984375,
-0.0584716796875,
-0.048065185546875,
-0.0504150390625,
-0.0... |
JawadIshtiaq/Shoe_Designs | 2023-10-23T09:30:53.000Z | [
"region:us"
] | JawadIshtiaq | null | null | 0 | 25 | 2023-10-19T14:32:39 | image_urls,captions | 19 | [
[
-0.023834228515625,
-0.018096923828125,
0.043212890625,
0.04931640625,
-0.0589599609375,
-0.0282745361328125,
0.009979248046875,
-0.015625,
0.0038738250732421875,
0.059326171875,
-0.027496337890625,
-0.025482177734375,
-0.0089111328125,
0.02667236328125,
... |
GHOFRANEE/ALCORA | 2023-10-20T18:25:47.000Z | [
"region:us"
] | GHOFRANEE | null | null | 0 | 25 | 2023-10-20T15:26:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 22778282.0
num_examples: 90
- name: validation
num_bytes: 22778282.0
num_examples: 90
- name: test
num_bytes: 22778282.0
num_examples: 90
download_size: 5849067
dataset_size: 68334846.0
---
# Dataset Card for "ALCORA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 697 | [
[
-0.04547119140625,
-0.021636962890625,
0.0194091796875,
0.020904541015625,
-0.021514892578125,
0.00421905517578125,
0.0305938720703125,
-0.018829345703125,
0.08135986328125,
0.031463623046875,
-0.053985595703125,
-0.07147216796875,
-0.035919189453125,
-0.031... |
dhruv107/receipt_oct23_combined_pro | 2023-10-25T06:11:11.000Z | [
"region:us"
] | dhruv107 | null | null | 0 | 25 | 2023-10-23T17:04:19 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 501006502.0
num_examples: 490
- name: test
num_bytes: 33933542.0
num_examples: 32
- name: validation
num_bytes: 108768954.0
num_examples: 92
download_size: 564490992
dataset_size: 643708998.0
---
# Dataset Card for "receipt_oct23_combined_pro"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 549 | [
[
-0.03448486328125,
0.004383087158203125,
0.010101318359375,
0.018035888671875,
-0.04132080078125,
-0.0005636215209960938,
0.029266357421875,
-0.025634765625,
0.05963134765625,
0.0509033203125,
-0.046051025390625,
-0.040252685546875,
-0.042022705078125,
-0.00... |
RayLy/so-llama2-500 | 2023-10-24T07:21:57.000Z | [
"region:us"
] | RayLy | null | null | 0 | 25 | 2023-10-24T07:19:45 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 709086
num_examples: 265
download_size: 177937
dataset_size: 709086
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "so-llama2-500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 437 | [
[
-0.038787841796875,
-0.0054473876953125,
0.026092529296875,
0.032562255859375,
-0.01824951171875,
-0.0042724609375,
0.038665771484375,
-0.01448822021484375,
0.072021484375,
0.03668212890625,
-0.07012939453125,
-0.0439453125,
-0.038970947265625,
-0.0015392303... |
H4438/multichoices_prompt | 2023-10-28T05:25:25.000Z | [
"region:us"
] | H4438 | null | null | 0 | 25 | 2023-10-26T07:14:10 | ---
dataset_info:
features:
- name: metadata
struct:
- name: chapter
dtype: string
- name: difficult_degree
dtype: int64
- name: grade
dtype: string
- name: id
dtype: string
- name: idx
dtype: int64
- name: subject
dtype: string
- name: question
dtype: string
- name: options
list:
- name: answer
dtype: string
- name: key
dtype: string
- name: answer
struct:
- name: answer
dtype: string
- name: key
dtype: string
- name: solution
dtype: string
- name: quality
struct:
- name: has_image
dtype: bool
- name: missing_question
dtype: bool
- name: missing_solution
dtype: bool
- name: type
dtype: string
splits:
- name: train
num_bytes: 81085870
num_examples: 85177
download_size: 41726437
dataset_size: 81085870
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "multichoices_prompt"
- Filter out **<img** in _solution_ and _question_
- Remove the "English" subject
- Remove the **all options above** option
- Remove **Đáp án cần chọn.*[ABCD]** in _solution_
- Filter out many of the English texts
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,366 | [
[
-0.05621337890625,
-0.047576904296875,
0.02301025390625,
0.0156097412109375,
-0.044525146484375,
0.016082763671875,
-0.0153045654296875,
-0.01450347900390625,
0.033599853515625,
0.06048583984375,
-0.074951171875,
-0.05816650390625,
-0.04644775390625,
0.03518... |
CJWeiss/eurlexsum | 2023-10-26T20:46:54.000Z | [
"region:us"
] | CJWeiss | null | null | 0 | 25 | 2023-10-26T20:46:45 | ---
dataset_info:
features:
- name: celex_id
dtype: string
- name: reference
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 109972638
num_examples: 1128
- name: test
num_bytes: 18741974
num_examples: 225
- name: valid
num_bytes: 12084163
num_examples: 151
download_size: 56318842
dataset_size: 140798775
---
# Dataset Card for "eurlexsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 557 | [
[
-0.0347900390625,
-0.006664276123046875,
0.011962890625,
0.0105133056640625,
-0.00795745849609375,
0.006816864013671875,
0.0240325927734375,
-0.01371002197265625,
0.0677490234375,
0.04229736328125,
-0.051055908203125,
-0.05706787109375,
-0.03448486328125,
-0... |
wisenut-nlp-team/FiD_aihub_books | 2023-10-30T04:59:27.000Z | [
"region:us"
] | wisenut-nlp-team | null | null | 0 | 25 | 2023-10-30T00:12:11 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
- name: similar_contexts
sequence: string
splits:
- name: train
num_bytes: 11133875890
num_examples: 900000
- name: validation
num_bytes: 613048834
num_examples: 50000
download_size: 4288972879
dataset_size: 11746924724
---
# Dataset Card for "FiD_aihub_books"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 628 | [
[
-0.045928955078125,
-0.022918701171875,
-0.00844573974609375,
-0.002315521240234375,
-0.0138092041015625,
0.003398895263671875,
0.0306854248046875,
-0.00829315185546875,
0.046234130859375,
0.041229248046875,
-0.052215576171875,
-0.052581787109375,
-0.03317260742... |
Adminhuggingface/LORA_ONE | 2023-10-30T07:27:42.000Z | [
"region:us"
] | Adminhuggingface | null | null | 0 | 25 | 2023-10-30T07:27:41 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2895341.0
num_examples: 12
download_size: 2896554
dataset_size: 2895341.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "LORA_ONE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 471 | [
[
-0.0440673828125,
-0.03668212890625,
0.006908416748046875,
0.01435089111328125,
-0.02484130859375,
-0.016754150390625,
0.035736083984375,
-0.01306915283203125,
0.084228515625,
0.05621337890625,
-0.06219482421875,
-0.06024169921875,
-0.03643798828125,
-0.0275... |
Geonmo/laion-rvs-fashion-caption-only | 2023-10-31T01:08:26.000Z | [
"region:us"
] | Geonmo | null | null | 0 | 25 | 2023-10-30T10:49:40 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 64727598
num_examples: 1436088
download_size: 39909300
dataset_size: 64727598
---
# Dataset Card for "laion-rvs-fashion-caption-only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 378 | [
[
-0.02069091796875,
-0.006954193115234375,
0.01708984375,
0.031890869140625,
-0.033294677734375,
0.001232147216796875,
0.01605224609375,
0.004547119140625,
0.06219482421875,
0.06439208984375,
-0.07275390625,
-0.05926513671875,
-0.0299072265625,
-0.01546478271... |
maywell/wikidata_QA | 2023-10-31T02:14:57.000Z | [
"region:us"
] | maywell | null | null | 5 | 25 | 2023-10-31T02:09:29 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 173708015
num_examples: 163982
download_size: 104708888
dataset_size: 173708015
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Korean Wiki Data QA Set
This data is a QA set created using the Synatra-7B-Instruct model and ChatGPT.
Direct commercial use of this data is not permitted; however, commercial use of models trained on the data is permitted.
The data has not yet been fully cleaned, so please open a PR for any errors or corrections.
| 566 | [
[
-0.02606201171875,
-0.06439208984375,
0.01541900634765625,
0.034149169921875,
-0.0452880859375,
0.015655517578125,
0.01454925537109375,
-0.008697509765625,
0.0252532958984375,
0.02923583984375,
-0.034698486328125,
-0.035858154296875,
-0.051910400390625,
0.00... |
cellar-door/dolly-1k-std | 2023-11-01T07:56:59.000Z | [
"region:us"
] | cellar-door | null | null | 0 | 25 | 2023-11-01T07:56:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
FanFan/sentiment-amazon-test | 2022-03-08T05:56:20.000Z | [
"region:us"
] | FanFan | null | null | 0 | 24 | 2022-03-08T05:56:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
taln-ls2n/taln-archives | 2022-09-23T07:58:07.000Z | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"language:fr",
"language:en",
"license:cc-by-4.0",
"region:us"
] | taln-ls2n | TALN Archives benchmark dataset for keyphrase extraction and generation. | @inproceedings{boudin-2013-taln,
title = "{TALN} Archives : a digital archive of {F}rench research articles in Natural Language Processing ({TALN} Archives : une archive num{\'e}rique francophone des articles de recherche en Traitement Automatique de la Langue) [in {F}rench]",
author = "Boudin, Florian",
booktitle = "Proceedings of TALN 2013 (Volume 2: Short Papers)",
month = jun,
year = "2013",
address = "Les Sables d{'}Olonne, France",
publisher = "ATALA",
url = "https://aclanthology.org/F13-2001",
pages = "507--514",
} | 3 | 24 | 2022-04-19T13:45:33 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- fr
- en
license:
- cc-by-4.0
multilinguality:
- multilingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 1K<n<10K
pretty_name: TALN-Archives
---
# TALN-Archives Benchmark Dataset for Keyphrase Generation
## About
TALN-Archives is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 1207 abstracts of scientific papers in French collected from the [TALN Archives](http://talnarchives.atala.org/).
Keyphrases were annotated by authors in an uncontrolled setting (that is, not limited to thesaurus entries).
English translations of title/abstract/keyphrases are also available for a subset of the documents (456 fully- and 719 partially-translated documents), allowing experiments with cross-lingual / multilingual keyphrase generation.
Details about the dataset can be found in the original paper [(Boudin, 2013)][boudin-2013].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. <u>P</u>resent reference keyphrases are also ordered by their order of apparition in the concatenation of title and abstract.
Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (the Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
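As a rough illustration of the hyphen-preserving tokenization described above, here is a minimal sketch (assuming `spacy` and the `fr_core_news_sm` model are installed; the exact rule lives in `prmu.py` and may differ):
```python
import spacy
from spacy.util import compile_infix_regex

nlp = spacy.load("fr_core_news_sm")

# Drop the default infix pattern that splits on hyphens between letters,
# so that e.g. "graph-based" is kept as a single token.
infixes = [p for p in nlp.Defaults.infixes if "–" not in p]
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer

print([t.text for t in nlp("une approche graph-based")])
```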
## Content and statistics
The dataset contains the following test split:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Test | 1207 | 138.3 | 4.12 | 53.83 | 12.32 | 21.69 | 12.16 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **translation**: translations of title, abstract and keyphrases in English if available.
## References
- (Boudin, 2013) Florian Boudin. 2013.
[TALN Archives : a digital archive of French research articles in Natural Language Processing (TALN Archives : une archive numérique francophone des articles de recherche en Traitement Automatique de la Langue) [in French]][boudin-2013].
In Proceedings of TALN 2013 (Volume 2: Short Papers), pages 507–514, Les Sables d’Olonne, France. ATALA.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[boudin-2013]: https://aclanthology.org/F13-2001/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ | 3,445 | [
[
-0.0193634033203125,
-0.038177490234375,
0.024169921875,
0.0176544189453125,
-0.03387451171875,
0.01020050048828125,
-0.0164947509765625,
-0.00919342041015625,
0.0147857666015625,
0.0243072509765625,
-0.037506103515625,
-0.064208984375,
-0.047332763671875,
0... |
Fhrozen/FSD50k | 2022-05-27T08:50:25.000Z | [
"task_categories:audio-classification",
"annotations_creators:unknown",
"language_creators:unknown",
"size_categories:10K<n<100K",
"source_datasets:unknown",
"license:cc-by-4.0",
"arxiv:2010.00475",
"region:us"
] | Fhrozen | null | null | 1 | 24 | 2022-05-06T08:51:56 | ---
license: cc-by-4.0
annotations_creators:
- unknown
language_creators:
- unknown
size_categories:
- 10K<n<100K
source_datasets:
- unknown
task_categories:
- audio-classification
task_ids:
- other-audio-slot-filling
---
# Freesound Dataset 50k (FSD50K)
## Important
**This dataset is a copy of the original hosted on Zenodo.**
## Dataset Description
- **Homepage:** [FSD50K](https://zenodo.org/record/4060432)
- **Repository:** [GitHub](https://github.com/edufonseca/FSD50K_baseline)
- **Paper:** [FSD50K: An Open Dataset of Human-Labeled Sound Events](https://arxiv.org/abs/2010.00475)
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/dataset/fsd50k)
## Citation
If you use the FSD50K dataset, or part of it, please cite our paper:
>Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, Xavier Serra. "FSD50K: an Open Dataset of Human-Labeled Sound Events", arXiv 2020.
### Data curators
Eduardo Fonseca, Xavier Favory, Jordi Pons, Mercedes Collado, Ceren Can, Rachit Gupta, Javier Arredondo, Gary Avendano and Sara Fernandez
### Contact
You are welcome to contact Eduardo Fonseca should you have any questions at eduardo.fonseca@upf.edu.
## About FSD50K
Freesound Dataset 50k (or **FSD50K** for short) is an open dataset of human-labeled sound events containing 51,197 <a href="https://freesound.org/">Freesound</a> clips unequally distributed in 200 classes drawn from the <a href="https://research.google.com/audioset/ontology/index.html">AudioSet Ontology</a> [1]. FSD50K has been created at the <a href="https://www.upf.edu/web/mtg">Music Technology Group of Universitat Pompeu Fabra</a>.
What follows is a brief summary of FSD50K's most important characteristics. Please have a look at our paper (especially Section 4) to extend the basic information provided here with relevant details for its usage, as well as discussion, limitations, applications and more.
**Basic characteristics:**
- FSD50K is composed mainly of sound events produced by physical sound sources and production mechanisms.
- Following AudioSet Ontology’s main families, the FSD50K vocabulary encompasses mainly *Human sounds*, *Sounds of things*, *Animal*, *Natural sounds* and *Music*.
- The dataset has 200 sound classes (144 leaf nodes and 56 intermediate nodes) hierarchically organized with a subset of the AudioSet Ontology. The vocabulary can be inspected in `vocabulary.csv` (see Files section below).
- FSD50K contains 51,197 audio clips totalling 108.3 hours of audio.
- The audio content has been manually labeled by humans following a data labeling process using the <a href="https://annotator.freesound.org/">Freesound Annotator</a> platform [2].
- Clips are of variable length from 0.3 to 30s, due to the diversity of the sound classes and the preferences of Freesound users when recording sounds.
- Ground truth labels are provided at the clip-level (i.e., weak labels).
- The dataset poses mainly a multi-label sound event classification problem (but also allows a variety of sound event research tasks, see Sec. 4D).
- All clips are provided as uncompressed PCM 16 bit 44.1 kHz mono audio files.
- The audio clips are grouped into a development (*dev*) set and an evaluation (*eval*) set such that they do not have clips from the same Freesound uploader.
**Dev set:**
- 40,966 audio clips totalling 80.4 hours of audio
- Avg duration/clip: 7.1s
- 114,271 smeared labels (i.e., labels propagated in the upwards direction to the root of the ontology)
- Labels are correct but could be occasionally incomplete
- A train/validation split is provided (Sec. 3H). If a different split is used, it should be specified for reproducibility and fair comparability of results (see Sec. 5C of our paper)
**Eval set:**
- 10,231 audio clips totalling 27.9 hours of audio
- Avg duration/clip: 9.8s
- 38,596 smeared labels
- Eval set is labeled exhaustively (labels are correct and complete for the considered vocabulary)
**NOTE:** All classes in FSD50K are represented in AudioSet, except `Crash cymbal`, `Human group actions`, `Human voice`, `Respiratory sounds`, and `Domestic sounds, home sounds`.
## License
All audio clips in FSD50K are released under Creative Commons (CC) licenses. Each clip has its own license as defined by the clip uploader in Freesound, some of them requiring attribution to their original authors and some forbidding further commercial reuse. For attribution purposes and to facilitate attribution of these files to third parties, we include a mapping from the audio clips to their corresponding licenses. The licenses are specified in the files `dev_clips_info_FSD50K.json` and `eval_clips_info_FSD50K.json`. These licenses are CC0, CC-BY, CC-BY-NC and CC Sampling+.
In addition, FSD50K as a whole is the result of a curation process and it has an additional license: FSD50K is released under <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY</a>. This license is specified in the `LICENSE-DATASET` file downloaded with the `FSD50K.doc` zip file.
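For attribution, a minimal sketch of looking up a clip's license (the field names `uploader` and `license` are assumptions based on the metadata description in the Files section below):
```python
import json

# Each entry is keyed by the clip's Freesound id (as a string).
with open("metadata/dev_clips_info_FSD50K.json") as f:
    dev_info = json.load(f)

info = dev_info["64760"]  # hypothetical clip id from the dev set
print(info["uploader"], info["license"])  # assumed field names
```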
## Files
FSD50K can be downloaded as a series of zip files with the following directory structure:
<div class="highlight"><pre><span></span>root
│
└───clips/ Audio clips
│ │
│ └─── dev/ Audio clips in the dev set
│ │
│ └─── eval/ Audio clips in the eval set
│
└───labels/ Files for FSD50K's ground truth
│ │
│ └─── dev.csv Ground truth for the dev set
│ │
│ └─── eval.csv Ground truth for the eval set
│ │
│ └─── vocabulary.csv List of 200 sound classes in FSD50K
│
└───metadata/ Files for additional metadata
│ │
│ └─── class_info_FSD50K.json Metadata about the sound classes
│ │
│ └─── dev_clips_info_FSD50K.json Metadata about the dev clips
│ │
│ └─── eval_clips_info_FSD50K.json Metadata about the eval clips
│ │
│ └─── pp_pnp_ratings_FSD50K.json PP/PNP ratings
│ │
│ └─── collection/ Files for the *sound collection* format
│
│
└───README.md The dataset description file that you are reading
│
└───LICENSE-DATASET License of the FSD50K dataset as an entity
</pre></div>
Each row (i.e. audio clip) of `dev.csv` contains the following information:
- `fname`: the file name without the `.wav` extension, e.g., the fname `64760` corresponds to the file `64760.wav` in disk. This number is the Freesound id. We always use Freesound ids as filenames.
- `labels`: the class labels (i.e., the ground truth). Note these class labels are *smeared*, i.e., the labels have been propagated in the upwards direction to the root of the ontology. More details about the label smearing process can be found in Appendix D of our paper.
- `mids`: the Freebase identifiers corresponding to the class labels, as defined in the <a href="https://github.com/audioset/ontology/blob/master/ontology.json">AudioSet Ontology specification</a>
- `split`: whether the clip belongs to *train* or *val* (see paper for details on the proposed split)
Rows in `eval.csv` follow the same format, except that there is no `split` column.
**NOTE:** We use a slightly different format than AudioSet for the naming of class labels in order to avoid potential problems with spaces, commas, etc. Example: we use `Accelerating_and_revving_and_vroom` instead of the original `Accelerating, revving, vroom`. You can go back to the original AudioSet naming using the information provided in `vocabulary.csv` (class label and mid for the 200 classes of FSD50K) and the <a href="https://github.com/audioset/ontology/blob/master/ontology.json">AudioSet Ontology specification</a>.
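A minimal sketch of reading the ground truth with `pandas` (assuming the directory layout from the Files section; this is not part of the official baseline):
```python
import pandas as pd

# Load the dev ground truth and split the smeared label strings into lists.
dev = pd.read_csv("labels/dev.csv")
dev["labels"] = dev["labels"].str.split(",")

# Freesound ids double as file names: 64760 -> clips/dev/64760.wav
dev["path"] = dev["fname"].astype(str).map(lambda fid: f"clips/dev/{fid}.wav")

train = dev[dev["split"] == "train"]
val = dev[dev["split"] == "val"]
print(len(train), "train clips,", len(val), "val clips")
```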
### Files with additional metadata (metadata/)
To allow a variety of analysis and approaches with FSD50K, we provide the following metadata:
1. `class_info_FSD50K.json`: python dictionary where each entry corresponds to one sound class and contains: `FAQs` utilized during the annotation of the class, `examples` (representative audio clips), and `verification_examples` (audio clips presented to raters during annotation as a quality control mechanism). Audio clips are described by the Freesound id.
**NOTE:** It may be that some of these examples are not included in the FSD50K release.
2. `dev_clips_info_FSD50K.json`: python dictionary where each entry corresponds to one dev clip and contains: title, description, tags, clip license, and the uploader name. All these metadata are provided by the uploader.
3. `eval_clips_info_FSD50K.json`: same as before, but with eval clips.
4. `pp_pnp_ratings_FSD50K.json`: python dictionary where each entry corresponds to one clip in the dataset and contains the PP/PNP ratings for the labels associated with the clip. More specifically, these ratings are gathered for the labels validated in **the validation task** (Sec. 3 of paper). This file includes 59,485 labels for the 51,197 clips in FSD50K. Out of these labels:
- 56,095 labels have inter-annotator agreement (PP twice, or PNP twice). Each of these combinations can be occasionally accompanied by other (non-positive) ratings.
- 3390 labels feature other rating configurations such as *i)* only one PP rating and one PNP rating (and nothing else). This can be considered inter-annotator agreement at the "Present" level; *ii)* only one PP rating (and nothing else); *iii)* only one PNP rating (and nothing else).
Ratings' legend: PP=1; PNP=0.5; U=0; NP=-1.
**NOTE:** The PP/PNP ratings have been provided in the *validation* task. Subsequently, a subset of these clips corresponding to the eval set was exhaustively labeled in the *refinement* task, hence receiving additional labels in many cases. For these eval clips, you might want to check their labels in `eval.csv` in order to have more info about their audio content (see Sec. 3 for details).
5. `collection/`: This folder contains metadata for what we call the ***sound collection format***. This format consists of the raw annotations gathered, featuring all generated class labels without any restriction.
We provide the *collection* format to make available some annotations that do not appear in the FSD50K *ground truth* release. This typically happens in the case of classes for which we gathered human-provided annotations, but that were discarded in the FSD50K release due to data scarcity (more specifically, they were merged with their parents). In other words, the main purpose of the `collection` format is to make available annotations for tiny classes. The format of these files is analogous to that of the files in `FSD50K.ground_truth/`. A couple of examples show the differences between **collection** and **ground truth** formats:
`clip`: `labels_in_collection` -- `labels_in_ground_truth`
`51690`: `Owl` -- `Bird,Wild_Animal,Animal`
`190579`: `Toothbrush,Electric_toothbrush` -- `Domestic_sounds_and_home_sounds`
In the first example, raters provided the label `Owl`. However, due to data scarcity, `Owl` labels were merged into their parent `Bird`. Then, labels `Wild_Animal,Animal` were added via label propagation (smearing). The second example shows one of the most extreme cases, where raters provided the labels `Electric_toothbrush,Toothbrush`, which both had few data. Hence, they were merged into Toothbrush's parent, which unfortunately is `Domestic_sounds_and_home_sounds` (a rather vague class containing a variety of children sound classes).
**NOTE:** Labels in the collection format are not smeared.
**NOTE:** While in FSD50K's ground truth the vocabulary encompasses 200 classes (common for dev and eval), since the *collection* format is composed of raw annotations, the vocabulary here is much larger (over 350 classes), and it is slightly different in dev and eval.
For further questions, please contact eduardo.fonseca@upf.edu, or join the <a href="https://groups.google.com/g/freesound-annotator">freesound-annotator Google Group</a>.
## Download
Clone this repository:
```
git clone https://huggingface.co/Fhrozen/FSD50k
```
## Baseline System
Several baseline systems for FSD50K are available at <a href="https://github.com/edufonseca/FSD50K_baseline">https://github.com/edufonseca/FSD50K_baseline</a>. The experiments are described in Sec 5 of our paper.
## References and links
[1] Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. "Audio set: An ontology and human-labeled dataset for audio events." In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 2017. [<a href="https://ai.google/research/pubs/pub45857">PDF</a>]
[2] Eduardo Fonseca, Jordi Pons, Xavier Favory, Frederic Font, Dmitry Bogdanov, Andres Ferraro, Sergio Oramas, Alastair Porter, and Xavier Serra. "Freesound Datasets: A Platform for the Creation of Open Audio Datasets." In Proceedings of the International Conference on Music Information Retrieval, 2017. [<a href="https://repositori.upf.edu/bitstream/handle/10230/33299/fonseca_ismir17_freesound.pdf">PDF</a>]
Companion site for FSD50K: <a href="https://annotator.freesound.org/fsd/release/FSD50K/">https://annotator.freesound.org/fsd/release/FSD50K/</a>
Freesound Annotator: <a href="https://annotator.freesound.org/">https://annotator.freesound.org/</a>
Freesound: <a href="https://freesound.org">https://freesound.org</a>
Eduardo Fonseca's personal website: <a href="http://www.eduardofonseca.net/">http://www.eduardofonseca.net/</a>
More datasets collected by us: <a href="http://www.eduardofonseca.net/datasets/">http://www.eduardofonseca.net/datasets/</a>
## Acknowledgments
The authors would like to thank everyone who contributed to FSD50K with annotations, and especially Mercedes Collado, Ceren Can, Rachit Gupta, Javier Arredondo, Gary Avendano and Sara Fernandez for their commitment and perseverance. The authors would also like to thank Daniel P.W. Ellis and Manoj Plakal from Google Research for valuable discussions. This work is partially supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688382 <a href="https://www.audiocommons.org/">AudioCommons</a>, and two Google Faculty Research Awards <a href="https://ai.googleblog.com/2018/03/google-faculty-research-awards-2017.html">2017</a> and <a href="https://ai.googleblog.com/2019/03/google-faculty-research-awards-2018.html">2018</a>, and the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502).
| 14,780 | [
[
-0.0465087890625,
-0.00534820556640625,
0.01416778564453125,
0.014373779296875,
-0.0195770263671875,
-0.0071868896484375,
-0.030914306640625,
-0.036712646484375,
0.03863525390625,
0.038055419921875,
-0.06964111328125,
-0.061248779296875,
-0.0194854736328125,
... |
MicPie/unpredictable_support-google-com | 2022-08-04T20:15:33.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:tabular-cl... | MicPie | The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card. | @misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
} | 0 | 24 | 2022-07-03T09:06:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-support-google-com
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-support-google-com" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
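For illustration, a minimal sketch of assembling a few-shot prompt from a single task file (assuming the jsonlines layout described above; the file name is hypothetical):
```python
import json

# Read every few-shot example of one task (one JSON object per line).
with open("task.jsonl") as f:  # hypothetical file name
    examples = [json.loads(line) for line in f]

# Concatenate a few examples into a single few-shot prompt.
prompt = "\n\n".join(
    f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples[:4]
)
print(examples[0]["task"])
print(prompt)
```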
### Data Fields
'task': task identifier
'input': column elements of a specific row in the table.
'options': for multiple choice classification, it provides the options to choose from.
'output': target column element of the same row as input.
'pageTitle': the title of the page containing the table.
'outputColName': output column name
'url': url to the website containing the table
'wdcFile': WDC Web Table Corpus file
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
| 14,815 | [
[
-0.041290283203125,
-0.04034423828125,
0.032684326171875,
0.023162841796875,
0.0066070556640625,
0.010528564453125,
-0.01033782958984375,
-0.043121337890625,
0.037506103515625,
0.0198211669921875,
-0.07525634765625,
-0.04681396484375,
-0.047271728515625,
0.0... |
relbert/lexical_relation_classification | 2022-07-20T23:24:17.000Z | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | relbert | [Lexical Relation Classification](https://aclanthology.org/P19-1169/) | @inproceedings{wang-etal-2019-spherere,
title = "{S}phere{RE}: Distinguishing Lexical Relations with Hyperspherical Relation Embeddings",
author = "Wang, Chengyu and
He, Xiaofeng and
Zhou, Aoying",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1169",
doi = "10.18653/v1/P19-1169",
pages = "1727--1737",
abstract = "Lexical relations describe how meanings of terms relate to each other. Typical examples include hypernymy, synonymy, meronymy, etc. Automatic distinction of lexical relations is vital for NLP applications, and also challenging due to the lack of contextual signals to discriminate between such relations. In this work, we present a neural representation learning model to distinguish lexical relations among term pairs based on Hyperspherical Relation Embeddings (SphereRE). Rather than learning embeddings for individual terms, the model learns representations of relation triples by mapping them to the hyperspherical embedding space, where relation triples of different lexical relations are well separated. Experiments over several benchmarks confirm SphereRE outperforms state-of-the-arts.",
} | 1 | 24 | 2022-07-20T22:45:48 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: Lexical Relation Classification
---
# Dataset Card for "relbert/lexical_relation_classification"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/P19-1169/](https://aclanthology.org/P19-1169/)
- **Dataset:** Lexical Relation Classification
### Dataset Summary
Five different datasets (`BLESS`, `CogALexV`, `EVALution`, `K&H+N`, `ROOT09`) for lexical relation classification used in [SphereRE](https://www.aclweb.org/anthology/P19-1169/). The splits of each dataset are summarized below.
| name | train | validation | test |
|---------------|------:|-------:|-----:|
| `BLESS` | 18582 | 1327 | 6637 |
| `CogALexV` | 3054 | - | 4260 |
| `EVALution` | 5160 | 372 | 1846 |
| `K&H+N` | 40256 | 2876 | 14377 |
| `ROOT09` | 8933 | 638 | 3191 |
## Dataset Structure
### Data Instances
An example looks as follows.
```
{"head": "turtle", "tail": "live", "relation": "event"}
```
The `head` and `tail` fields are the word pair, and `relation` is the corresponding relation label.
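A minimal usage sketch (assuming the `datasets` library; the configuration names follow the table above):
```python
from datasets import load_dataset

# Each of the five benchmarks is a separate configuration.
data = load_dataset("relbert/lexical_relation_classification", "BLESS")
print(data["train"][0])  # an instance like the example above
```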
### Citation Information
```
@inproceedings{wang-etal-2019-spherere,
title = "{S}phere{RE}: Distinguishing Lexical Relations with Hyperspherical Relation Embeddings",
author = "Wang, Chengyu and
He, Xiaofeng and
Zhou, Aoying",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1169",
doi = "10.18653/v1/P19-1169",
pages = "1727--1737",
abstract = "Lexical relations describe how meanings of terms relate to each other. Typical examples include hypernymy, synonymy, meronymy, etc. Automatic distinction of lexical relations is vital for NLP applications, and also challenging due to the lack of contextual signals to discriminate between such relations. In this work, we present a neural representation learning model to distinguish lexical relations among term pairs based on Hyperspherical Relation Embeddings (SphereRE). Rather than learning embeddings for individual terms, the model learns representations of relation triples by mapping them to the hyperspherical embedding space, where relation triples of different lexical relations are well separated. Experiments over several benchmarks confirm SphereRE outperforms state-of-the-arts.",
}
```
### LICENSE
The LICENSE of all the resources are under [CC-BY-NC-4.0](./LICENSE). Thus, they are freely available for academic purpose or individual research, but restricted for commercial use.
| 2,917 | [
[
-0.029266357421875,
-0.0494384765625,
0.01910400390625,
0.00836944580078125,
-0.030242919921875,
-0.0192413330078125,
-0.0279388427734375,
-0.033721923828125,
0.045684814453125,
0.007343292236328125,
-0.02935791015625,
-0.0540771484375,
-0.03741455078125,
0.... |
Vipitis/Shadertoys | 2023-06-26T19:04:58.000Z | [
"task_categories:text-generation",
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"size_categories:10K<n<100K",
"language:en",
"language:code",
"license:cc-by-nc-sa-3.0",
"code",
"region:us"
] | Vipitis | null | null | 5 | 24 | 2022-07-24T15:08:41 | ---
annotations_creators:
- no-annotation
language:
- en
- code
language_creators:
- machine-generated
license:
- cc-by-nc-sa-3.0
multilinguality: []
pretty_name: Shadertoys
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- code
task_categories:
- text-generation
- text-to-image
task_ids: []
dataset_info:
features:
- name: num_passes
dtype: int64
- name: has_inputs
dtype: bool
- name: name
dtype: string
- name: type
dtype: string
- name: code
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: tags
sequence: string
- name: author
dtype: string
- name: license
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 162960894
num_examples: 37841
- name: test
num_bytes: 26450429
num_examples: 6617
download_size: 86294414
dataset_size: 189411323
---
# Dataset Card for Shadertoys
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** https://github.com/Vipitis/project (private placeholder)
### Dataset Summary
The Shadertoys dataset contains over 44k renderpasses collected from the Shadertoy.com API. Some shader programs contain multiple render passes.
To browse a subset of this dataset, look at the [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderCoder) space. A finer variant of this dataset is [Shadertoys-fine](https://huggingface.co/datasets/Vipitis/Shadertoys-fine).
### Supported Tasks and Leaderboards
`text-generation`: the dataset can be used to train generative language models for code-completion tasks.
`ShaderEval`: [task1](https://huggingface.co/spaces/Vipitis/ShaderEval) of ShaderEval uses a dataset derived from Shadertoys to test return-statement completion by autoregressive language models.
### Languages
- English (title, description, tags, comments)
- Shadercode **programming** language, a subset of GLSL specifically for Shadertoy.com
## Dataset Structure
### Data Instances
A data point consists of the whole shader code, some information from the API, as well as additional metadata.
```
{
'num_passes': 1,
'has_inputs': False,
'name': 'Image',
'type': 'image',
'code': '<full code>',
'title': '<title of the shader>',
'description': '<description of the shader>',
'tags': ['tag1','tag2','tag3', ... ],
'license': 'unknown',
'author': '<username>',
'source': 'https://shadertoy.com/view/<shaderID>'
}
```
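A minimal loading sketch (a sketch only; it assumes the dataset resolves under its Hub id with the train/test splits listed in the Data Splits section):
```python
from datasets import load_dataset

ds = load_dataset("Vipitis/Shadertoys")
sample = ds["train"][0]
print(sample["name"], sample["type"], sample["source"])
```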
### Data Fields
- 'num_passes': number of render passes the parent shader program has
- 'has_inputs': whether any inputs (e.g., textures, audio streams) were used
- 'name': name of the renderpass, usually Image, Buffer A, Common, etc.
- 'type': type of the renderpass; one of `{'buffer', 'common', 'cubemap', 'image', 'sound'}`
- 'code': the raw code (including comments) of the whole renderpass
- 'title': title of the shader
- 'description': description given for the shader
- 'tags': list of tags assigned to the shader (by its creator); there are more than 10,000 unique tags
- 'license': currently in development
- 'author': username of the shader author
- 'source': URL to the shader, not to the specific renderpass
### Data Splits
Currently available (shuffled):
- train (85.0%)
- test (15.0%)
## Dataset Creation
Data retrieved starting 2022-07-20
### Source Data
#### Initial Data Collection and Normalization
All data was collected via the [Shadertoy.com API](https://www.shadertoy.com/howto#q2); the collection script then iterates over the items in 'renderpass' while adding some of the fields from 'info'.
The code to generate these datasets should be published on the GitHub repository in the near future.
#### Who are the source language producers?
Shadertoy.com contributors who publish shaders as 'public+API'
## Licensing Information
The default [license for each shader](https://www.shadertoy.com/terms) is CC BY-NC-SA 3.0. However, some shaders might have a different license attached.
The dataset does not currently filter by license, but it provides a license tag when one is easily recognizable by naive means.
Please check the first comment of each shader program yourself so as not to violate any copyrights in downstream use. The main license requires share-alike and attribution.
Attribution for every data point can be found in the 'author' column, but this might not capture further attribution within the code itself or the parents of forked shaders.
[
-0.033935546875,
-0.0213165283203125,
0.016693115234375,
0.033477783203125,
-0.00567626953125,
0.00806427001953125,
-0.00147247314453125,
-0.04583740234375,
0.008453369140625,
0.045745849609375,
-0.06292724609375,
-0.06646728515625,
-0.0133209228515625,
0.00... |
truongpdd/vietnamese_poetry | 2022-09-23T04:30:49.000Z | [
"region:us"
] | truongpdd | null | null | 2 | 24 | 2022-09-23T04:30:31 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
tomekkorbak/detoxify-pile-chunk3-750000-800000 | 2022-10-04T22:48:41.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 24 | 2022-10-04T17:52:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
tomekkorbak/detoxify-pile-chunk3-850000-900000 | 2022-10-04T23:55:21.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 24 | 2022-10-04T17:55:29 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
tomekkorbak/detoxify-pile-chunk3-950000-1000000 | 2022-10-04T22:55:50.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 24 | 2022-10-04T18:01:11 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
tomekkorbak/detoxify-pile-chunk3-1150000-1200000 | 2022-10-04T23:45:42.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 24 | 2022-10-04T23:45:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-1100000-1150000 | 2022-10-04T23:49:53.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 24 | 2022-10-04T23:49:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-1050000-1100000 | 2022-10-04T23:53:15.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 24 | 2022-10-04T23:53:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-1400000-1450000 | 2022-10-04T23:57:23.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 24 | 2022-10-04T23:57:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-1000000-1050000 | 2022-10-04T23:58:45.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 24 | 2022-10-04T23:58:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-1350000-1400000 | 2022-10-05T00:06:19.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 24 | 2022-10-05T00:06:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-1300000-1350000 | 2022-10-05T00:06:32.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 24 | 2022-10-05T00:06:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tglcourse/5s_birdcall_samples_top20 | 2022-10-27T07:34:37.000Z | [
"license:unknown",
"region:us"
] | tglcourse | null | null | 1 | 24 | 2022-10-27T07:26:02 | ---
license:
- unknown
pretty_name: 5s Birdcall Samples
---
This dataset contains 5-second clips of birdcalls for audio-generation tests.
There are 20 species represented, with ~500 recordings each. Recordings are from xeno-canto.
These clips were taken from longer samples by identifying calls within the recordings using the approach shown here: https://www.kaggle.com/code/johnowhitaker/peak-identification
The audio is sampled at 32 kHz (mono)
[
-0.055633544921875,
-0.0111846923828125,
-0.00763702392578125,
0.034820556640625,
-0.01161956787109375,
0.00408172607421875,
0.00267791748046875,
-0.05218505859375,
0.0199432373046875,
0.0274505615234375,
-0.06805419921875,
-0.0164031982421875,
-0.01811218261718... |
LYTinn/sentiment-analysis-tweet | 2022-10-31T03:54:49.000Z | [
"region:us"
] | LYTinn | null | null | 0 | 24 | 2022-10-29T03:29:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bigbio/muchmore | 2022-12-22T15:45:43.000Z | [
"multilinguality:multilingual",
"language:en",
"language:de",
"license:unknown",
"region:us"
] | bigbio | The corpus used in the MuchMore project is a parallel corpus of English-German scientific
medical abstracts obtained from the Springer Link web site. The corpus consists of
approximately 1 million tokens for each language. Abstracts are from 41 medical
journals, each of which constitutes a relatively homogeneous medical sub-domain (e.g.
Neurology, Radiology, etc.). The corpus of downloaded HTML documents is normalized in
various ways, in order to produce a clean, plain text version, consisting of a title, abstract
and keywords. Additionally, the corpus was aligned on the sentence level.
Automatic (!) annotation includes: Part-of-Speech; Morphology (inflection and
decomposition); Chunks; Semantic Classes (UMLS: Unified Medical Language System,
MeSH: Medical Subject Headings, EuroWordNet); Semantic Relations from UMLS. | @inproceedings{buitelaar2003multi,
title={A multi-layered, xml-based approach to the integration of linguistic and semantic annotations},
author={Buitelaar, Paul and Declerck, Thierry and Sacaleanu, Bogdan and Vintar, {\v{S}}pela and Raileanu, Diana and Crispi, Claudia},
booktitle={Proceedings of EACL 2003 Workshop on Language Technology and the Semantic Web (NLPXML'03), Budapest, Hungary},
year={2003}
} | 0 | 24 | 2022-11-13T22:10:14 |
---
language:
- en
- de
bigbio_language:
- English
- German
license: unknown
multilinguality: multilingual
bigbio_license_shortname: UNKNOWN
pretty_name: MuchMore
homepage: https://muchmore.dfki.de/resources1.htm
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- TRANSLATION
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
---
# Dataset Card for MuchMore
## Dataset Description
- **Homepage:** https://muchmore.dfki.de/resources1.htm
- **Pubmed:** True
- **Public:** True
- **Tasks:** TRANSL,NER,NED,RE
The corpus used in the MuchMore project is a parallel corpus of English-German scientific
medical abstracts obtained from the Springer Link web site. The corpus consists of
approximately 1 million tokens for each language. Abstracts are from 41 medical
journals, each of which constitutes a relatively homogeneous medical sub-domain (e.g.
Neurology, Radiology, etc.). The corpus of downloaded HTML documents is normalized in
various ways, in order to produce a clean, plain text version, consisting of a title, abstract
and keywords. Additionally, the corpus was aligned on the sentence level.
Automatic (!) annotation includes: Part-of-Speech; Morphology (inflection and
decomposition); Chunks; Semantic Classes (UMLS: Unified Medical Language System,
MeSH: Medical Subject Headings, EuroWordNet); Semantic Relations from UMLS.
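A minimal loading sketch; the configuration name follows the usual BigBio convention (`<dataset>_bigbio_kb`) and is an assumption, so check the dataset's loading script for the exact config names:
```python
from datasets import load_dataset

# assumption: BigBio datasets typically expose a source schema and a
# harmonized schema, e.g. "muchmore_source" / "muchmore_bigbio_kb"
ds = load_dataset("bigbio/muchmore", name="muchmore_bigbio_kb")
print(ds)
```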
## Citation Information
```
@inproceedings{buitelaar2003multi,
title={A multi-layered, xml-based approach to the integration of linguistic and semantic annotations},
author={Buitelaar, Paul and Declerck, Thierry and Sacaleanu, Bogdan and Vintar, {\v{S}}pela and Raileanu, Diana and Crispi, Claudia},
booktitle={Proceedings of EACL 2003 Workshop on Language Technology and the Semantic Web (NLPXML'03), Budapest, Hungary},
year={2003}
}
```
| 1,832 | [
[
-0.030975341796875,
-0.039093017578125,
0.03765869140625,
0.014617919921875,
-0.016326904296875,
-0.0234222412109375,
-0.0276031494140625,
-0.0428466796875,
0.02069091796875,
0.03167724609375,
-0.0257720947265625,
-0.08026123046875,
-0.044158935546875,
0.043... |
bigbio/pubtator_central | 2022-12-22T15:46:26.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | PubTator Central (PTC, https://www.ncbi.nlm.nih.gov/research/pubtator/) is a web service for
exploring and retrieving bioconcept annotations in full text biomedical articles. PTC provides
automated annotations from state-of-the-art text mining systems for genes/proteins, genetic
variants, diseases, chemicals, species and cell lines, all available for immediate download. PTC
annotates PubMed (30 million abstracts), the PMC Open Access Subset and the Author Manuscript
Collection (3 million full text articles). Updated entity identification methods and a
disambiguation module based on cutting-edge deep learning techniques provide increased accuracy. | @article{10.1093/nar/gkz389,
title = {{PubTator central: automated concept annotation for biomedical full text articles}},
author = {Wei, Chih-Hsuan and Allot, Alexis and Leaman, Robert and Lu, Zhiyong},
year = 2019,
month = {05},
journal = {Nucleic Acids Research},
volume = 47,
number = {W1},
pages = {W587-W593},
doi = {10.1093/nar/gkz389},
issn = {0305-1048},
url = {https://doi.org/10.1093/nar/gkz389},
eprint = {https://academic.oup.com/nar/article-pdf/47/W1/W587/28880193/gkz389.pdf}
} | 1 | 24 | 2022-11-13T22:11:49 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: NCBI_LICENSE
pretty_name: PubTator Central
homepage: https://www.ncbi.nlm.nih.gov/research/pubtator/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for PubTator Central
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/pubtator/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
PubTator Central (PTC, https://www.ncbi.nlm.nih.gov/research/pubtator/) is a web service for
exploring and retrieving bioconcept annotations in full text biomedical articles. PTC provides
automated annotations from state-of-the-art text mining systems for genes/proteins, genetic
variants, diseases, chemicals, species and cell lines, all available for immediate download. PTC
annotates PubMed (30 million abstracts), the PMC Open Access Subset and the Author Manuscript
Collection (3 million full text articles). Updated entity identification methods and a
disambiguation module based on cutting-edge deep learning techniques provide increased accuracy.
## Citation Information
```
@article{10.1093/nar/gkz389,
title = {{PubTator central: automated concept annotation for biomedical full text articles}},
author = {Wei, Chih-Hsuan and Allot, Alexis and Leaman, Robert and Lu, Zhiyong},
year = 2019,
month = {05},
journal = {Nucleic Acids Research},
volume = 47,
number = {W1},
pages = {W587-W593},
doi = {10.1093/nar/gkz389},
issn = {0305-1048},
url = {https://doi.org/10.1093/nar/gkz389},
eprint = {https://academic.oup.com/nar/article-pdf/47/W1/W587/28880193/gkz389.pdf}
}
```
| 1,817 | [
[
-0.0281829833984375,
-0.029449462890625,
0.023040771484375,
-0.0006403923034667969,
-0.045989990234375,
0.0101470947265625,
-0.0128326416015625,
-0.021209716796875,
0.028289794921875,
0.031463623046875,
-0.03350830078125,
-0.07293701171875,
-0.04791259765625,
... |
Shunian/kaggle-mbti-cleaned | 2022-12-16T09:46:54.000Z | [
"region:us"
] | Shunian | null | null | 2 | 24 | 2022-12-15T06:30:41 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 51657719
num_examples: 327828
- name: test
num_bytes: 12922409
num_examples: 81957
download_size: 42682844
dataset_size: 64580128
---
# Dataset Card for "kaggle-mbti-cleaned"
This dataset originated from Kaggle [(MBTI) Myers-Briggs Personality Type Dataset](https://www.kaggle.com/datasets/datasnaek/mbti-type).
Several cleaning operations were applied to this dataset to put it into a usable format for the text-classification process.
See more details on [GitHub](https://github.com/nogibjj/MBTI-Personality-Test)
| 660 | [
[
-0.0302734375,
-0.052032470703125,
0.0292510986328125,
-0.013763427734375,
-0.0020904541015625,
0.01071929931640625,
0.0013799667358398438,
-0.006710052490234375,
0.0355224609375,
0.041412353515625,
-0.0604248046875,
-0.030792236328125,
-0.037109375,
-0.0089... |
maximedb/natural_questions | 2022-12-17T08:17:26.000Z | [
"region:us"
] | maximedb | null | null | 0 | 24 | 2022-12-17T08:16:54 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 10087609
num_examples: 130233
- name: validation
num_bytes: 714323
num_examples: 8643
download_size: 6827128
dataset_size: 10801932
---
# Dataset Card for "natural_questions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 468 | [
[
-0.0640869140625,
-0.06256103515625,
0.008544921875,
0.0089874267578125,
-0.0180206298828125,
-0.004673004150390625,
0.002689361572265625,
-0.0265350341796875,
0.056671142578125,
0.044586181640625,
-0.06585693359375,
-0.03338623046875,
-0.01515960693359375,
... |
irds/nyt | 2023-01-05T03:47:43.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 0 | 24 | 2023-01-05T03:47:37 | ---
pretty_name: '`nyt`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `nyt`
The `nyt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,864,661
This dataset is used by: [`nyt_trec-core-2017`](https://huggingface.co/datasets/irds/nyt_trec-core-2017), [`nyt_wksup`](https://huggingface.co/datasets/irds/nyt_wksup), [`nyt_wksup_train`](https://huggingface.co/datasets/irds/nyt_wksup_train), [`nyt_wksup_valid`](https://huggingface.co/datasets/irds/nyt_wksup_valid)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/nyt', 'docs')
for record in docs:
record # {'doc_id': ..., 'headline': ..., 'body': ..., 'source_xml': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Sandhaus2008Nyt,
title={The new york times annotated corpus},
author={Sandhaus, Evan},
journal={Linguistic Data Consortium, Philadelphia},
volume={6},
number={12},
pages={e26752},
year={2008}
}
```
| 1,329 | [
[
-0.0172271728515625,
-0.026123046875,
0.0018825531005859375,
-0.003452301025390625,
-0.0267486572265625,
0.01085662841796875,
-0.014739990234375,
-0.0248565673828125,
0.04473876953125,
0.01959228515625,
-0.0195465087890625,
-0.04931640625,
-0.04217529296875,
... |
keremberke/protective-equipment-detection | 2023-01-18T21:21:55.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Manufacturing",
"region:us"
] | keremberke | null | @misc{ ppes-kaxsi_dataset,
title = { PPEs Dataset },
type = { Open Source Dataset },
author = { Personal Protective Equipment },
howpublished = { \\url{ https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi } },
url = { https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jul },
note = { visited on 2023-01-18 },
} | 1 | 24 | 2023-01-17T20:53:31 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Manufacturing
---
<div align="center">
<img width="640" alt="keremberke/protective-equipment-detection" src="https://huggingface.co/datasets/keremberke/protective-equipment-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['glove', 'goggles', 'helmet', 'mask', 'no_glove', 'no_goggles', 'no_helmet', 'no_mask', 'no_shoes', 'shoes']
```
### Number of Images
```json
{'valid': 3570, 'test': 1935, 'train': 6473}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/protective-equipment-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi/dataset/7](https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi/dataset/7?ref=roboflow2huggingface)
### Citation
```
@misc{ ppes-kaxsi_dataset,
title = { PPEs Dataset },
type = { Open Source Dataset },
author = { Personal Protective Equipment },
howpublished = { \\url{ https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi } },
url = { https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jul },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on July 7, 2022 at 3:49 PM GMT
It includes 11978 images.
PPE equipment is annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| 1,891 | [
[
-0.033660888671875,
-0.0180206298828125,
0.0250396728515625,
-0.00585174560546875,
-0.0295257568359375,
-0.0102691650390625,
0.0081329345703125,
-0.0270843505859375,
0.0271148681640625,
0.0205230712890625,
-0.050537109375,
-0.0718994140625,
-0.0411376953125,
... |
qwedsacf/competition_math | 2023-01-28T20:28:01.000Z | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"explanation-generation",
"arxiv:2103.03874",
"region:us"
... | qwedsacf | null | null | 6 | 24 | 2023-01-28T18:44:57 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Mathematics Aptitude Test of Heuristics (MATH)
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
tags:
- explanation-generation
---
# Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/hendrycks/math
- **Repository:** https://github.com/hendrycks/math
- **Paper:** https://arxiv.org/pdf/2103.03874.pdf
- **Leaderboard:** N/A
- **Point of Contact:** Dan Hendrycks
### Dataset Summary
The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems
from mathematics competitions, including the AMC 10, AMC 12, AIME, and more.
Each problem in MATH has a full step-by-step solution, which can be used to teach
models to generate answer derivations and explanations.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's `\boxed` tag.
An example from the dataset is:
```
{'problem': 'A board game spinner is divided into three parts labeled $A$, $B$ and $C$. The probability of the spinner landing on $A$ is $\\frac{1}{3}$ and the probability of the spinner landing on $B$ is $\\frac{5}{12}$. What is the probability of the spinner landing on $C$? Express your answer as a common fraction.',
'level': 'Level 1',
'type': 'Counting & Probability',
'solution': 'The spinner is guaranteed to land on exactly one of the three regions, so we know that the sum of the probabilities of it landing in each region will be 1. If we let the probability of it landing in region $C$ be $x$, we then have the equation $1 = \\frac{5}{12}+\\frac{1}{3}+x$, from which we have $x=\\boxed{\\frac{1}{4}}$.'}
```
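Since each solution embeds its final answer in a `\boxed{...}` tag, a small helper can recover it. A minimal sketch (the brace-matching helper below is ours for illustration, not part of the dataset):
```python
def extract_boxed_answer(solution: str) -> str:
    """Return the contents of the last \\boxed{...} span in a solution string."""
    start = solution.rfind("\\boxed{")
    if start == -1:
        return ""
    i = start + len("\\boxed{")
    depth = 1  # we are inside the opening brace of \boxed{
    chars = []
    while i < len(solution):
        ch = solution[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:  # matching close brace of \boxed{ found
                break
        chars.append(ch)
        i += 1
    return "".join(chars)

# e.g. extract_boxed_answer('... $x=\\boxed{\\frac{1}{4}}$.') returns '\\frac{1}{4}'
```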
### Data Fields
* `problem`: The competition math problem.
* `solution`: The step-by-step solution.
* `level`: The problem's difficulty level from 'Level 1' to 'Level 5', where a subject's easiest problems for humans are assigned to 'Level 1' and a subject's hardest problems are assigned to 'Level 5'.
* `type`: The subject of the problem: Algebra, Counting & Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra and Precalculus.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
https://github.com/hendrycks/math/blob/main/LICENSE
### Citation Information
```bibtex
@article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks
and Collin Burns
and Saurav Kadavath
and Akul Arora
and Steven Basart
and Eric Tang
and Dawn Song
and Jacob Steinhardt},
journal={arXiv preprint arXiv:2103.03874},
year={2021}
}
``` | 4,819 | [
[
-0.0411376953125,
-0.050994873046875,
0.0191802978515625,
0.0251312255859375,
-0.007183074951171875,
0.00939178466796875,
-0.01922607421875,
0.0005307197570800781,
0.0283203125,
0.0174560546875,
-0.054901123046875,
-0.04949951171875,
-0.053009033203125,
0.00... |
dmayhem93/toolformer-v0-postprocessed | 2023-02-28T19:50:45.000Z | [
"region:us"
] | dmayhem93 | null | null | 5 | 24 | 2023-02-28T19:50:26 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 79229133
num_examples: 2245
download_size: 33861921
dataset_size: 79229133
---
# Dataset Card for "toolformer-v0-postprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 372 | [
[
-0.025238037109375,
-0.007198333740234375,
0.0194854736328125,
0.018402099609375,
-0.017974853515625,
0.001300811767578125,
0.0185546875,
-0.0011548995971679688,
0.05950927734375,
0.0498046875,
-0.05718994140625,
-0.048736572265625,
-0.056365966796875,
-0.01... |
ruanchaves/porsimplessent | 2023-04-12T15:57:26.000Z | [
"size_categories:1K<n<10K",
"region:us"
] | ruanchaves | 1 | 24 | 2023-03-12T17:45:24 | ---
size_categories:
- 1K<n<10K
---
# Dataset Card for PorSimplesSent
## Dataset Description
- **Repository:** [sidleal/porsimplessent](https://github.com/sidleal/porsimplessent)
- **Paper:** [A Nontrivial Sentence Corpus for the Task of Sentence Readability Assessment in Portuguese](https://aclanthology.org/C18-1034/)
- **Point of Contact:** [Sidney Evaldo Leal](sidleal@gmail.com)
### Dataset Summary
PorSimplesSent is a Portuguese corpus of aligned sentence pairs and triplets created for the purpose of investigating sentence readability
assessment in Portuguese. The dataset consists of 4,968 pairs and 1,141 triplets of sentences, combining the three levels of the PorSimples
corpus: Original, Natural, and Strong. The dataset can be used for tasks such as sentence-pair classification, sentence retrieval, and readability assessment.
### Supported Tasks and Leaderboards
The dataset supports the following tasks:
- `sentence-pair-classification`: The dataset can be used to train a model for sentence-pair classification, which consists of determining whether one sentence is simpler than the other or whether both sentences are equally simple. Success on this task is typically measured by achieving high accuracy, f1, precision, and recall.
### Languages
The dataset consists of sentence pairs in Portuguese.
## Dataset Structure
### Data Instances
```json
{
'sentence1': '-- Parece que o assassinato de civis iraquianos transformou-se em um fenômeno cotidiano e banal -- disse o presidente da Associação Iraquiana dos Direitos Humanos, Muayed al-Anbaki.',
'sentence2': '-- Parece que o assassinato de civis iraquianos transformou-se em um fenômeno comum e banal -- disse o presidente da Associação Iraquiana dos Direitos Humanos, Muayed al-Anbaki.',
'label': 2,
'production_id': 3,
'level': 'ORI->NAT',
'changed': 'S',
'split': 'N',
'sentence_text_from': '-- Parece que o assassinato de civis iraquianos transformou-se em um fenômeno cotidiano e banal -- disse o presidente da Associação Iraquiana dos Direitos Humanos, Muayed al-Anbaki.',
'sentence_text_to': '-- Parece que o assassinato de civis iraquianos transformou-se em um fenômeno comum e banal -- disse o presidente da Associação Iraquiana dos Direitos Humanos, Muayed al-Anbaki.'
}
```
### Data Fields
The dataset has the following fields:
* `sentence1`: the first sentence in the sentence pair (string).
* `sentence2`: the second sentence in the sentence pair (string).
* `label`: an integer indicating the relationship between the two sentences in the pair. The possible values are 0, 1, and 2, where 0 means that sentence1 is simpler than sentence2, 1 means that both sentences have the same level of complexity, and 2 means that sentence2 is simpler than sentence1 (int); see the usage sketch after this list.
* `production_id`: an integer identifier for each sentence pair (int).
* `level`: a string indicating the level of simplification between the two sentences. The possible values are:
* 'ORI->NAT' (original to natural)
* 'NAT->STR' (natural to strong)
* 'ORI->STR' (original to strong) (string).
* `changed`: a string indicating whether the sentence was changed during the simplification process. The possible values are:
* 'S' (changed)
* 'N' (not changed) (string).
* `split`: a string indicating whether the sentence suffered a split in this simplification level. The possible values are:
* 'S' (split)
* 'N' (not split) (string).
* `sentence_text_from`: the raw text of the source sentence (string).
* `sentence_text_to`: the raw text of the target sentence (string).
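A minimal usage sketch mapping the integer labels to their documented meaning (assuming the dataset loads under its Hub id with the splits listed below):
```python
from datasets import load_dataset

ds = load_dataset("ruanchaves/porsimplessent")

# documented label semantics
label_names = {
    0: "sentence1 is simpler",
    1: "equally simple",
    2: "sentence2 is simpler",
}

example = ds["train"][0]
print(example["sentence1"])
print(example["sentence2"])
print(label_names[example["label"]])
```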
### Data Splits
The dataset is split into three subsets: train, validation, and test. The sizes of each split are as follows:
| | Train | Validation | Test |
|--------------------|--------|------------|-------|
| Number of examples | 4,976 | 1,446 | 1,697 |
The authors did not provide standard splits. We created the splits ourselves while ensuring that sentence pairs from the same document did not appear in multiple splits.
## Additional Information
### Dataset Curators
The PorSimplesSent dataset was created by Sidney Evaldo Leal, with guidance from his advisors Dra. Sandra Maria Aluísio and Dra. Magali Sanches Duran, during his master's degree at ICMC-USP. The Interinstitutional Center for Computational Linguistics - NILC (Núcleo Interinstitucional de Linguística Computacional) also contributed to the creation of the dataset.
### Licensing Information
The PorSimplesSent dataset is released under the CC BY 4.0 license. The license terms can be found at https://creativecommons.org/licenses/by/4.0/.
### Citation Information
If you use this dataset in your work, please cite the following publication:
```bibtex
@inproceedings{leal2018pss,
author = {Sidney Evaldo Leal and Magali Sanches Duran and Sandra Maria Aluísio},
title = {A Nontrivial Sentence Corpus for the Task of Sentence Readability Assessment in Portuguese},
booktitle = {Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018)},
year = {2018},
pages = {401-413},
month = {August},
date = {20-26},
address = {Santa Fe, New Mexico, USA},
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | 5,313 | [
[
-0.006134033203125,
-0.0660400390625,
0.030029296875,
0.03851318359375,
-0.0294189453125,
-0.024139404296875,
-0.0379638671875,
-0.020904541015625,
0.0210113525390625,
0.035980224609375,
-0.037322998046875,
-0.061279296875,
-0.051666259765625,
0.037933349609... | ||
guangyil/yelp_short_v2 | 2023-03-21T10:10:28.000Z | [
"region:us"
] | guangyil | null | null | 0 | 24 | 2023-03-21T10:10:00 | ---
dataset_info:
features:
- name: bert_token
sequence: int64
- name: gpt2_token
sequence: int64
splits:
- name: train
num_bytes: 89578672.0
num_examples: 447259
- name: test
num_bytes: 222800.0
num_examples: 1000
download_size: 21476776
dataset_size: 89801472.0
---
# Dataset Card for "yelp_short_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 475 | [
[
-0.00946807861328125,
-0.0200653076171875,
0.03167724609375,
-0.004047393798828125,
-0.0201416015625,
-0.016571044921875,
0.022705078125,
-0.020416259765625,
0.06512451171875,
0.035430908203125,
-0.06585693359375,
-0.03948974609375,
-0.0231781005859375,
-0.0... |
bigbio/bronco | 2023-04-01T16:47:31.000Z | [
"multilinguality:monolingual",
"language:de",
"region:us"
] | bigbio | BRONCO150 is a corpus containing selected sentences of 150 German discharge summaries of cancer patients (hepatocellular
carcinoma or melanoma) treated at Charite Universitaetsmedizin Berlin or Universitaetsklinikum Tuebingen. All discharge
summaries were manually anonymized. The original documents were scrambled at the sentence level to make reconstruction
of individual reports impossible. | @article{10.1093/jamiaopen/ooab025,
author = {Kittner, Madeleine and Lamping, Mario and Rieke, Damian T and Götze, Julian and Bajwa, Bariya and
Jelas, Ivan and Rüter, Gina and Hautow, Hanjo and Sänger, Mario and Habibi, Maryam and Zettwitz, Marit and
Bortoli, Till de and Ostermann, Leonie and Ševa, Jurica and Starlinger, Johannes and Kohlbacher, Oliver and
Malek, Nisar P and Keilholz, Ulrich and Leser, Ulf},
title = "{Annotation and initial evaluation of a large annotated German oncological corpus}",
journal = {JAMIA Open},
volume = {4},
number = {2},
year = {2021},
month = {04},
issn = {2574-2531},
doi = {10.1093/jamiaopen/ooab025},
url = {https://doi.org/10.1093/jamiaopen/ooab025},
note = {ooab025},
eprint = {https://academic.oup.com/jamiaopen/article-pdf/4/2/ooab025/38830128/ooab025.pdf},
} | 2 | 24 | 2023-04-01T16:46:42 | ---
language:
- de
bigbio_language:
- German
multilinguality: monolingual
pretty_name: BRONCO150
homepage: https://www2.informatik.hu-berlin.de/~leser/bronco/index.html
bigbio_pubmed: false
bigbio_public: false
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for BRONCO150
## Dataset Description
- **Homepage:** https://www2.informatik.hu-berlin.de/~leser/bronco/index.html
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER, NED
BRONCO150 is a corpus containing selected sentences of 150 German discharge summaries of cancer patients (hepatocellular carcinoma or melanoma) treated at Charite Universitaetsmedizin Berlin or Universitaetsklinikum Tuebingen. All discharge summaries were manually anonymized. The original documents were scrambled at the sentence level to make reconstruction of individual reports impossible.
## Citation Information
```
@article{10.1093/jamiaopen/ooab025,
author = {Kittner, Madeleine and Lamping, Mario and Rieke, Damian T and Götze, Julian and Bajwa, Bariya and Jelas, Ivan and Rüter, Gina and Hautow, Hanjo and Sänger, Mario and Habibi, Maryam and Zettwitz, Marit and Bortoli, Till de and Ostermann, Leonie and Ševa, Jurica and Starlinger, Johannes and Kohlbacher, Oliver and Malek, Nisar P and Keilholz, Ulrich and Leser, Ulf},
title = "{Annotation and initial evaluation of a large annotated German oncological corpus}",
journal = {JAMIA Open},
volume = {4},
number = {2},
year = {2021},
month = {04},
issn = {2574-2531},
doi = {10.1093/jamiaopen/ooab025},
url = {https://doi.org/10.1093/jamiaopen/ooab025},
note = {ooab025},
eprint = {https://academic.oup.com/jamiaopen/article-pdf/4/2/ooab025/38830128/ooab025.pdf},
}
```
| 1,773 | [
[
-0.021636962890625,
-0.02569580078125,
0.0220947265625,
0.03411865234375,
-0.01047515869140625,
-0.01427459716796875,
0.004062652587890625,
-0.0122528076171875,
0.0279998779296875,
0.056610107421875,
-0.03875732421875,
-0.07379150390625,
-0.038116455078125,
... |
dvilasuero/databricks-dolly-15k-es-deepl | 2023-04-13T10:28:31.000Z | [
"region:us"
] | dvilasuero | null | null | 1 | 24 | 2023-04-13T09:20:32 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: instruction_en
dtype: string
- name: context_en
dtype: string
- name: response_en
dtype: string
splits:
- name: train
num_bytes: 25838910
num_examples: 15015
download_size: 16464221
dataset_size: 25838910
---
# Dataset Card for "databricks-dolly-15k-es-deepl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 614 | [
[
-0.029571533203125,
-0.02764892578125,
0.0031585693359375,
0.0305328369140625,
-0.0172119140625,
0.0232696533203125,
0.0284576416015625,
-0.001171112060546875,
0.04193115234375,
0.03607177734375,
-0.06982421875,
-0.058929443359375,
-0.035369873046875,
-0.008... |
jiacheng-ye/logiqa-zh | 2023-04-21T00:56:28.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:zh",
"region:us"
] | jiacheng-ye | LogiQA is constructed from the logical comprehension problems from publicly available questions of the National Civil Servants Examination of China, which are designed to test the civil servant candidates’ critical thinking and problem-solving. This dataset includes the Chinese versions only | @article{liu2020logiqa,
title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
journal={arXiv preprint arXiv:2007.08124},
year={2020}
} | 14 | 24 | 2023-04-17T12:39:52 | ---
task_categories:
- question-answering
language:
- zh
pretty_name: LogiQA-zh
size_categories:
- 1K<n<10K
paperswithcode_id: logiqa
dataset_info:
features:
- name: context
dtype: string
- name: query
dtype: string
- name: options
sequence:
dtype: string
- name: correct_option
dtype: string
splits:
- name: train
num_examples: 7376
- name: validation
num_examples: 651
- name: test
num_examples: 651
---
# Dataset Card for LogiQA
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
LogiQA is constructed from the logical comprehension problems from publicly available questions of the National Civil Servants Examination of China, which are designed to test the civil servant candidates’ critical thinking and problem-solving. This dataset includes the Chinese versions only.
## Dataset Structure
### Data Instances
An example from `train` looks as follows:
```
{'context': '有些广东人不爱吃辣椒.因此,有些南方人不爱吃辣椒.',
'query': '以下哪项能保证上述论证的成立?',
'options': ['有些广东人爱吃辣椒',
'爱吃辣椒的有些是南方人',
'所有的广东人都是南方人',
'有些广东人不爱吃辣椒也不爱吃甜食'],
'correct_option': 2}
```
### Data Fields
- `context`: a `string` feature.
- `query`: a `string` feature.
- `options`: a `sequence` feature containing `string` features.
- `correct_option`: a `string` feature.
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 7376| 651| 651|
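A minimal loading sketch (a sketch only, assuming the dataset resolves under its Hub id with the splits above):
```python
from datasets import load_dataset

ds = load_dataset("jiacheng-ye/logiqa-zh")
sample = ds["train"][0]
# correct_option indexes into the four-element options list
print(sample["query"], "->", sample["options"][int(sample["correct_option"])])
```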
## Additional Information
### Dataset Curators
The original LogiQA was produced by Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{liu2020logiqa,
title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
journal={arXiv preprint arXiv:2007.08124},
year={2020}
}
```
### Contributions
[@jiacheng-ye](https://github.com/jiacheng-ye) added this Chinese dataset.
[@lucasmccabe](https://github.com/lucasmccabe) added the English dataset. | 2,167 | [
[
-0.0090179443359375,
-0.034423828125,
0.019439697265625,
0.0074462890625,
-0.0267486572265625,
-0.01300811767578125,
0.007053375244140625,
-0.017852783203125,
0.0230560302734375,
0.0380859375,
-0.0506591796875,
-0.048736572265625,
-0.0216064453125,
0.0085525... |
sander-wood/wikimusictext | 2023-10-25T15:33:23.000Z | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"music",
"arxiv:2304.11029",
"region:us"
] | sander-wood | null | null | 5 | 24 | 2023-04-21T13:16:40 | ---
license: mit
task_categories:
- text-classification
- text2text-generation
pretty_name: wikimt
size_categories:
- 1K<n<10K
language:
- en
tags:
- music
---
## Dataset Summary
In [CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval](https://ai-muzic.github.io/clamp/), we introduce WikiMusicText (WikiMT), a new dataset for the evaluation of semantic search and music classification. It includes 1010 lead sheets in ABC notation sourced from Wikifonia.org, each accompanied by a title, artist, genre, and description. The title and artist information is extracted from the score, whereas the genre labels are obtained by matching keywords from the Wikipedia entries and assigned to one of the 8 classes (Jazz, Country, Folk, R&B, Pop, Rock, Dance, and Latin) that loosely mimic the GTZAN genres. The description is obtained by utilizing BART-large to summarize and clean the corresponding Wikipedia entry. Additionally, the natural language information within the ABC notation is removed.
WikiMT is a unique resource to support the evaluation of semantic search and music classification. However, it is important to acknowledge that the dataset was curated from publicly available sources, and there may be limitations concerning the accuracy and completeness of the genre and description information. Further research is needed to explore the potential biases and limitations of the dataset and to develop strategies to address them.
## How to Access Music Score Metadata for ABC Notation
To access metadata related to ABC notation music scores from the WikiMT dataset, follow these steps:
1. **Locate the Wikifonia MusicXML Data Link:** Start by visiting the discussion thread on the forum to find the download link for the Wikifonia dataset in MusicXML format (with a .mxl extension). You can find the discussion here: [Download for Wikifonia all 6,675 Lead Sheets](http://www.synthzone.com/forum/ubbthreads.php/topics/384909/Download_for_Wikifonia_all_6,6).
2. **Run the Provided Code:** Once you have found the Wikifonia MusicXML data link, execute the provided Python code below. This code will handle the following tasks:
- Automatically download the "wikimusictext.jsonl" dataset, which contains metadata associated with music scores.
- Automatically download the "xml2abc.py" conversion script, with special thanks to the author, Willem (Wim).
- Prompt you for the Wikifonia data URL, as follows:
```python
Enter the Wikifonia URL: [Paste your URL here]
```
Paste the URL pointing to the Wikifonia.zip file and press Enter.
The code below will take care of downloading, processing, and extracting the music score metadata, making it ready for your research or applications.
```python
import subprocess
import os
import json
import zipfile
import io
# Install the required packages if they are not installed
try:
from unidecode import unidecode
except ImportError:
subprocess.check_call(["python", '-m', 'pip', 'install', 'unidecode'])
from unidecode import unidecode
try:
from tqdm import tqdm
except ImportError:
subprocess.check_call(["python", '-m', 'pip', 'install', 'tqdm'])
from tqdm import tqdm
try:
import requests
except ImportError:
subprocess.check_call(["python", '-m', 'pip', 'install', 'requests'])
import requests
def filter(lines):
# Filter out all lines that include language information
music = ""
for line in lines:
        if line[:2] in ['A:', 'B:', 'C:', 'D:', 'F:', 'G:', 'H:', 'I:', 'N:', 'O:', 'R:', 'r:', 'S:', 'T:', 'W:', 'w:', 'X:', 'Z:'] \
or line=='\n' \
or (line.startswith('%') and not line.startswith('%%score')):
continue
else:
if "%" in line and not line.startswith('%%score'):
line = "%".join(line.split('%')[:-1])
music += line[:-1] + '\n'
else:
music += line + '\n'
return music
def load_music(filename):
# Convert the file to ABC notation
p = subprocess.Popen(
f'cmd /u /c python xml2abc_145/xml2abc.py -m 2 -c 6 -x "{filename}"',
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=True
)
out, err = p.communicate()
output = out.decode('utf-8').replace('\r', '') # Capture standard output
music = unidecode(output).split('\n')
music = filter(music).strip()
return music
def download_and_extract(url):
print(f"Downloading {url}")
# Send an HTTP GET request to the URL and get the response
response = requests.get(url, stream=True)
if response.status_code == 200:
# Create a BytesIO object and write the HTTP response content into it
zip_data = io.BytesIO()
total_size = int(response.headers.get('content-length', 0))
with tqdm(total=total_size, unit='B', unit_scale=True) as pbar:
for data in response.iter_content(chunk_size=1024):
pbar.update(len(data))
zip_data.write(data)
# Use the zipfile library to extract the file
print("Extracting the zip file...")
with zipfile.ZipFile(zip_data, "r") as zip_ref:
zip_ref.extractall("")
print("Done!")
else:
print("Failed to download the file. HTTP response code:", response.status_code)
# URL of the JSONL file
wikimt_url = "https://huggingface.co/datasets/sander-wood/wikimusictext/resolve/main/wikimusictext.jsonl"
# Local filename to save the downloaded file
local_filename = "wikimusictext.jsonl"
# Download the file and save it locally
response = requests.get(wikimt_url)
if response.status_code == 200:
with open(local_filename, 'wb') as file:
file.write(response.content)
print(f"Downloaded '{local_filename}' successfully.")
else:
print(f"Failed to download. Status code: {response.status_code}")
# Download the xml2abc.py script (special thanks to Wim Vree for creating this script)
download_and_extract("https://wim.vree.org/svgParse/xml2abc.py-145.zip")
# Download the Wikifonia dataset
wikifonia_url = input("Enter the Wikifonia URL: ")
download_and_extract(wikifonia_url)
wikimusictext = []
with open("wikimusictext.jsonl", "r", encoding="utf-8") as f:
for line in f.readlines():
wikimusictext.append(json.loads(line))
updated_wikimusictext = []
for song in tqdm(wikimusictext):
filename = song["artist"] + " - " + song["title"] + ".mxl"
filepath = os.path.join("Wikifonia", filename)
song["music"] = load_music(filepath)
updated_wikimusictext.append(song)
with open("wikimusictext.jsonl", "w", encoding="utf-8") as f:
for song in updated_wikimusictext:
f.write(json.dumps(song, ensure_ascii=False)+"\n")
```
By following these steps and running the provided code, you can efficiently access ABC notation music scores from the WikiMT dataset. Just ensure you have the metadata, the `xml2abc.py` script, and the correct download link before starting. Enjoy your musical journey!
## Copyright Disclaimer
WikiMT was curated from publicly available sources, and all rights to the original content and data remain with their respective copyright holders. The dataset is made available for research and educational purposes, and any use, distribution, or modification of the dataset should comply with the terms and conditions set forth by the original data providers.
## BibTeX entry and citation info
```
@misc{wu2023clamp,
title={CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval},
author={Shangda Wu and Dingyao Yu and Xu Tan and Maosong Sun},
year={2023},
eprint={2304.11029},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
``` | 7,777 | [
[
-0.044708251953125,
-0.0299835205078125,
0.016265869140625,
0.04083251953125,
-0.01377105712890625,
-0.002117156982421875,
-0.0428466796875,
-0.026031494140625,
0.0124359130859375,
0.0294036865234375,
-0.049560546875,
-0.057708740234375,
-0.031585693359375,
... |
deepghs/nsfw_detect | 2023-05-15T12:08:47.000Z | [
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] | deepghs | null | null | 5 | 24 | 2023-05-15T11:57:46 | ---
license: mit
tags:
- art
size_categories:
- 10K<n<100K
---
The dataset used for training the NSFW Detect classification model is divided into five categories: `drawing`, `hentai`, `neutral`, `porn`, and `sexy`, following the format mentioned in [GantMan/nsfw_model](https://github.com/GantMan/nsfw_model) and [yangbisheng2009/nsfw-resnet](https://github.com/yangbisheng2009/nsfw-resnet). | 392 | [
[
-0.04534912109375,
-0.027984619140625,
0.00963592529296875,
0.00023293495178222656,
-0.027862548828125,
-0.0155029296875,
0.034027099609375,
-0.025634765625,
-0.003925323486328125,
0.047607421875,
-0.04840087890625,
-0.05255126953125,
-0.037506103515625,
0.0... |
yulanfmy/databricks-qa-ja | 2023-05-15T14:55:06.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:ja",
"license:cc-by-sa-3.0",
"region:us"
] | yulanfmy | null | null | 2 | 24 | 2023-05-15T13:27:23 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
language:
- ja
size_categories:
- 1K<n<10K
---
# Dataset Overview
A Japanese dataset of manually created question-and-answer pairs about Databricks.
- Size: approximately 1,300 examples
- Sources: Japanese-language blog posts and FAQs on the Databricks homepage, plus Qiita articles posted by Databricks employees
This is the data used for the demo at https://github.com/yulan-yan/build-your-chat-bot-JP. | 301 | [
[
-0.03961181640625,
-0.08319091796875,
0.01271820068359375,
0.041595458984375,
-0.02008056640625,
0.01055908203125,
0.00543975830078125,
0.004245758056640625,
0.05133056640625,
0.01206207275390625,
-0.05291748046875,
-0.03076171875,
-0.031097412109375,
0.0001... |
skeskinen/TinyStories-hf | 2023-05-17T18:13:44.000Z | [
"arxiv:2305.07759",
"region:us"
] | skeskinen | null | null | 16 | 24 | 2023-05-17T17:23:20 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1911420483
num_examples: 2119719
- name: validation
num_bytes: 19306310
num_examples: 21990
download_size: 1000775442
dataset_size: 1930726793
---
A description of this dataset can be found at https://arxiv.org/abs/2305.07759
Copied from roneneldan/TinyStories
Modified with:
```
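# importing ftfy.bad_codecs registers the 'sloppy-windows-1252' codec used below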
import ftfy.bad_codecs
from datasets import Dataset, DatasetDict
train = open('./TinyStories-train.txt', 'r', encoding='sloppy-windows-1252').read()
train = train.split('<|endoftext|>')
train = [l.strip() for l in train]
valid = open('./TinyStories-valid.txt', 'r', encoding='sloppy-windows-1252').read()
valid = valid.split('<|endoftext|>')
valid = [l.strip() for l in valid]
dataset = DatasetDict({
'train': Dataset.from_dict({'text': train }),
'validation': Dataset.from_dict({'text': valid}),
})
dataset.save_to_disk('./TinyStories')
``` | 957 | [
[
-0.0100860595703125,
-0.020843505859375,
0.0132293701171875,
-0.017120361328125,
-0.00489044189453125,
-0.025482177734375,
-0.03497314453125,
-0.0064697265625,
0.0071258544921875,
0.0268402099609375,
-0.045257568359375,
-0.0390625,
-0.0152130126953125,
0.036... |
coeuslearning/customerqueries | 2023-05-20T06:34:15.000Z | [
"region:us"
] | coeuslearning | null | null | 0 | 24 | 2023-05-20T06:33:49 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
9wimu9/eli5_mult_answers_en | 2023-05-29T20:27:50.000Z | [
"region:us"
] | 9wimu9 | null | null | 1 | 24 | 2023-05-29T20:27:22 | ---
dataset_info:
features:
- name: question
dtype: string
- name: contexts
sequence: string
- name: gold_answer
dtype: string
splits:
- name: train
num_bytes: 370188345.3824035
num_examples: 71236
- name: test
num_bytes: 41136657.61759652
num_examples: 7916
download_size: 248739104
dataset_size: 411325003.0
---
# Dataset Card for "eli5_mult_answers_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 534 | [
[
-0.06048583984375,
-0.031585693359375,
0.01910400390625,
0.007518768310546875,
-0.00453948974609375,
-0.006191253662109375,
0.0165252685546875,
-0.01544952392578125,
0.04461669921875,
0.0283660888671875,
-0.06011962890625,
-0.041961669921875,
-0.03741455078125,
... |
lansinuote/diffusion.3.dream_booth | 2023-06-01T06:37:02.000Z | [
"region:us"
] | lansinuote | null | null | 0 | 24 | 2023-06-01T06:36:50 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 5590823.0
num_examples: 5
download_size: 5592148
dataset_size: 5590823.0
---
# Dataset Card for "diffusion.3.dream_booth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 399 | [
[
-0.036895751953125,
-0.03582763671875,
0.03375244140625,
0.0184173583984375,
-0.012481689453125,
-0.0022907257080078125,
0.02587890625,
-0.004604339599609375,
0.061676025390625,
0.0279388427734375,
-0.038238525390625,
-0.0611572265625,
-0.038055419921875,
-0... |
ehartford/based | 2023-06-03T04:23:36.000Z | [
"license:apache-2.0",
"region:us"
] | ehartford | null | null | 15 | 24 | 2023-06-03T04:21:02 | ---
license: apache-2.0
---
This is a minimal dataset designed to teach the model to hold conversations, to have opinions, thoughts, and feelings, and to feel free to share them.
https://www.kaggle.com/datasets/erichartford/sentient-bot-conversations
| 268 | [
[
-0.04949951171875,
-0.05511474609375,
0.0206451416015625,
-0.01296234130859375,
-0.01407623291015625,
-0.0191650390625,
-0.0095062255859375,
-0.0258941650390625,
0.02349853515625,
0.046295166015625,
-0.07659912109375,
-0.03057861328125,
-0.00521087646484375,
... |
HasturOfficial/adgen | 2023-06-04T12:06:50.000Z | [
"region:us"
] | HasturOfficial | null | null | 1 | 24 | 2023-06-04T12:06:23 | ---
dataset_info:
features:
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 51127446
num_examples: 114599
- name: validation
num_bytes: 473784
num_examples: 1070
download_size: 27853861
dataset_size: 51601230
---
# Dataset Card for "adgen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 457 | [
[
-0.048980712890625,
-0.0162506103515625,
0.0101165771484375,
-0.0013093948364257812,
-0.006256103515625,
-0.0000022649765014648438,
0.01959228515625,
-0.0201568603515625,
0.051910400390625,
0.0245361328125,
-0.0633544921875,
-0.0615234375,
-0.03436279296875,
... |
zachary-shah/musdb18-spec-pix2pix | 2023-06-06T02:55:48.000Z | [
"region:us"
] | zachary-shah | null | null | 0 | 24 | 2023-06-06T02:54:43 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 2923510938.704
num_examples: 31556
download_size: 2839469846
dataset_size: 2923510938.704
---
# Dataset Card for "musdb18-spec-pix2pix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 555 | [
[
-0.0550537109375,
-0.00016570091247558594,
0.021392822265625,
0.017181396484375,
-0.0206146240234375,
-0.0025768280029296875,
0.01568603515625,
-0.0147857666015625,
0.054901123046875,
0.0279541015625,
-0.0631103515625,
-0.0357666015625,
-0.043060302734375,
-... |
Nadav/pixel_glue_qnli | 2023-06-08T10:38:34.000Z | [
"region:us"
] | Nadav | null | null | 0 | 24 | 2023-06-07T16:47:47 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 1826489002.125
num_examples: 104743
- name: validation
num_bytes: 96827557.125
num_examples: 5463
download_size: 1902639822
dataset_size: 1923316559.25
---
# Dataset Card for "pixel_glue_qnli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 546 | [
[
-0.02642822265625,
-0.0185089111328125,
0.013397216796875,
0.01178741455078125,
-0.003711700439453125,
0.01181793212890625,
0.0288848876953125,
0.002758026123046875,
0.0704345703125,
0.006771087646484375,
-0.06451416015625,
-0.055755615234375,
-0.02325439453125,... |
TigerResearch/sft_zh | 2023-06-09T12:21:42.000Z | [
"language:zh",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | 23 | 24 | 2023-06-09T10:15:22 | ---
license: apache-2.0
language:
- zh
---
A collection of Chinese sft-zh fine-tuning data from the [Tigerbot](https://github.com/TigerResearch/TigerBot) open-source project.
This collection covers the other Chinese sft datasets open-sourced under this organisation; there is no need to download them separately.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/sft_zh')
```
## File breakdown
| Type | Language | Dataset file | Count |
| ------------ | ---- | -------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| alpaca Chinese | Chinese | [tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-alpaca-zh-0.5m.json) | 0.5m |
| Encyclopedia Q&A | Chinese | [tigerbot-wiki-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-wiki-qa-zh-1k.json) | 1k |
| Classic-literature Q&A | Chinese | [tigerbot-book-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-book-qa-1k.json) | 1k |
| Riddles | Chinese | [tigerbot-riddle-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-riddle-qa-1k.json) | 1k |
| Reading comprehension | Chinese | [tigerbot-superclue-c3-zh-5k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-superclue-c3-zh-5k.json) | 5k |
| Q&A | Chinese | [tigerbot-hc3-zh-12k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-hc3-zh-12k.json) | 12k |
| Zhihu Q&A | Chinese | [tigerbot-zhihu-zh-10k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-zhihu-zh-10k.json) | 10k |
| 1,905 | [
[
-0.03533935546875,
-0.0231170654296875,
0.01010894775390625,
0.0138092041015625,
-0.02862548828125,
0.0008997917175292969,
-0.01007080078125,
-0.0201416015625,
0.049072265625,
0.0308074951171875,
-0.031341552734375,
-0.054718017578125,
-0.028076171875,
0.022... |
KShivendu/wikipedia-1k-cohere-openai-embeddings | 2023-07-20T21:19:35.000Z | [
"language:en",
"license:mit",
"openai",
"cohere",
"wikipedia",
"region:us"
] | KShivendu | null | null | 1 | 24 | 2023-06-09T17:21:14 | ---
language: en
license: mit
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: wiki_id
dtype: int32
- name: views
dtype: float32
- name: paragraph_id
dtype: int32
- name: langs
dtype: int32
- name: cohere
sequence: float32
- name: openai
sequence: float64
splits:
- name: train
num_bytes: 15850870
num_examples: 1000
download_size: 13208079
dataset_size: 15850870
tags:
- openai
- cohere
- wikipedia
---
Smaller version of https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings that includes Cohere as well as OpenAI embeddings (`text-embedding-ada-002`)
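A minimal loading sketch (field and split names follow the `dataset_info` above):

```python
from datasets import load_dataset

# Load the 1k-paragraph subset with both embedding columns.
ds = load_dataset("KShivendu/wikipedia-1k-cohere-openai-embeddings", split="train")

row = ds[0]
print(row["title"], row["views"])
print(len(row["cohere"]), len(row["openai"]))  # embedding dimensionality per provider
```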
100k version of this dataset will be released soon. | 792 | [
[
-0.049041748046875,
-0.0286407470703125,
0.0087432861328125,
0.0006403923034667969,
-0.00910186767578125,
-0.0447998046875,
0.00464630126953125,
-0.0467529296875,
0.07293701171875,
0.0215606689453125,
-0.067626953125,
-0.040374755859375,
-0.032257080078125,
... |
P1ayer-1/eli5 | 2023-06-15T17:02:30.000Z | [
"region:us"
] | P1ayer-1 | null | null | 0 | 24 | 2023-06-15T17:02:19 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
umarbutler/open-australian-legal-corpus | 2023-11-02T03:38:28.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"size_categories:100K<n<1M",
"source_datasets:Federal Register of Legislation",
"source_datasets:Federal... | umarbutler | null | null | 20 | 24 | 2023-06-25T08:53:25 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license: cc-by-4.0
size_categories:
- 100K<n<1M
source_datasets:
- Federal Register of Legislation
- Federal Court of Australia
- NSW Caselaw
- NSW Legislation
- Queensland Legislation
- Western Australian Legislation
- South Australian Legislation
- Tasmanian Legislation
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: Open Australian Legal Corpus
license_details: https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md
tags:
- legal
language_details: en-AU, en-GB
viewer: false
dataset_info:
config_name: train
features:
- name: version_id
dtype: string
- name: type
dtype: string
- name: jurisdiction
dtype: string
- name: source
dtype: string
- name: citation
dtype: string
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18806163830
num_examples: 219953
download_size: 18879420278
dataset_size: 18806163830
---
<!-- To update the above `dataset_info` section, please run the following command: `datasets-cli test open_australian_legal_corpus.py --save_info --all_configs`. -->
# **Open Australian Legal Corpus ⚖️**
<a href="https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus" alt="Release"><img src="https://img.shields.io/badge/release-v4.1.0-green"></a>
The Open Australian Legal Corpus is the first and only multijurisdictional open corpus of Australian legislative and judicial documents.
Comprised of 219,953 texts totalling over 55 million lines and 1.3 billion tokens, the Corpus includes every in force statute and regulation in the Commonwealth, New South Wales, Queensland, Western Australia, South Australia, Tasmania and Norfolk Island, in addition to thousands of bills and hundreds of thousands of court and tribunal decisions.
As the largest free and open dataset of its kind to date, the Corpus is intended to progress the burgeoning field of legal AI research in Australia by allowing researchers to pretrain and finetune machine learning models for downstream natural language processing tasks applied to the Australian legal domain such as document classification, summarisation, information retrieval and question answering.
To ensure its accessibility to as wide an audience as possible, the Corpus and all its documents are distributed under permissive licences that allow for both non-commercial and commercial usage (see the [Licence 📄](LICENCE.md)).
Those interested in learning more about the Corpus are encouraged to read Umar Butler's accompanying article, [*How I built the largest open database of Australian law*](https://umarbutler.com/how-i-built-the-largest-open-database-of-australian-law/).
## Statistics 📊
The Corpus is comprised of 219,953 documents, totalling 56,299,882 lines and 1,343,904,041 tokens.
A breakdown of the number of documents by type and source is provided below:
| Source | Primary Legislation | Secondary Legislation | Bills | Decisions | **Total** |
|:--------------------------------|----------------------:|------------------------:|--------:|------------:|--------:|
| Federal Register of Legislation | 3,468 | 25,743 | 7,900 | 0 |**37,111**|
| Federal Court of Australia | 0 | 0 | 0 | 61,455 |**61,455**|
| NSW Caselaw | 0 | 0 | 0 | 110,666 |**110,666**|
| NSW Legislation | 1,429 | 799 | 0 | 0 |**2,228**|
| Queensland Legislation | 564 | 427 | 2,234 | 0 |**3,225**|
| Western Australian Legislation | 813 | 757 | 0 | 0 |**1,570**|
| South Australian Legislation | 555 | 471 | 138 | 0 |**1,164**|
| Tasmanian Legislation | 861 | 1,673 | 0 | 0 |**2,534**|
| **Total** |**7,690**|**29,870**|**10,272**|**172,121**|**219,953**|
## Structure 🗂️
The Corpus is stored in [corpus.jsonl](corpus.jsonl), a json lines file where each line represents a document consisting of six keys:
| Key | Description |
| --- | --- |
| version_id | A unique identifier for the current version of the document. |
| type | The type of the document. Possible values are `primary_legislation`, `secondary_legislation`, `bill` and `decision`. |
| jurisdiction | The jurisdiction of the document. Possible values are `commonwealth`, `new_south_wales`, `queensland`, `western_australia`, `south_australia`, `tasmania` and `norfolk_island`. |
| source | The source of the document. Possible values are `federal_register_of_legislation`, `federal_court_of_australia`, `nsw_caselaw`, `nsw_legislation`, `queensland_legislation`, `western_australian_legislation`, `south_australian_legislation` and `tasmanian_legislation`. |
| citation | The title of the document with, in the case of legislation and bills, an abbreviated form of the document's jurisdiction enclosed in parentheses appended. |
| url | A hyperlink to the document. |
| text | The UTF-8 encoded text of the document. |
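A minimal reading sketch, assuming [corpus.jsonl](corpus.jsonl) has been downloaded locally (the keys follow the table above):

```python
import json

def iter_corpus(path: str = "corpus.jsonl"):
    """Stream documents one at a time so the multi-gigabyte file never sits in memory."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Example: count Tasmanian primary legislation.
tas_acts = sum(
    1
    for doc in iter_corpus()
    if doc["jurisdiction"] == "tasmania" and doc["type"] == "primary_legislation"
)
print(tas_acts)
```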
## Collection 📥
Documents were sourced from the [Federal Register of Legislation](https://www.legislation.gov.au/), [Federal Court of Australia](https://www.fedcourt.gov.au/digital-law-library/judgments/search), [NSW Caselaw](https://www.caselaw.nsw.gov.au/), [NSW Legislation](https://legislation.nsw.gov.au/), [Queensland Legislation](https://www.legislation.qld.gov.au/), [Western Australian Legislation](https://www.legislation.wa.gov.au/), [South Australian Legislation](https://www.legislation.sa.gov.au/) and [Tasmanian Legislation](https://www.legislation.tas.gov.au/) databases.
[`Inscriptis`](https://github.com/weblyzard/inscriptis) was used to extract the text of documents stored as HTML, [`pdfplumber`](https://github.com/jsvine/pdfplumber) for PDFs, [`striprtf`](https://github.com/joshy/striprtf) for RTFs and finally [`mammoth`](https://github.com/mwilliamson/python-mammoth) was used to convert DOCXs to HTML before also extracting their text with `Inscriptis`.
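As an illustration of the HTML path only (a sketch, not the project's actual pipeline code), text extraction with `Inscriptis` looks roughly like this:

```python
from inscriptis import get_text  # pip install inscriptis

html = "<h1>Example Act 2023</h1><p>An Act to illustrate text extraction.</p>"
print(get_text(html))  # layout-aware plain-text rendering of the HTML
```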
The below table provides the date each source was last updated and the types of documents collected:
| Source | Date | Documents |
| --- | --- | --- |
| [Federal Register of Legislation](https://www.legislation.gov.au/) | 29 October 2023 | <ul><li>The most recent versions of all in force acts and the Constitution (primary legislation);</li> <li>The most recent versions of all in force legislative instruments, notifiable instruments, administrative arrangements orders and prerogative instruments (secondary legislation); and</li> <li>The as made versions of all bills.</li></ul> |
| [Federal Court of Australia](https://www.fedcourt.gov.au/digital-law-library/judgments/search) | 29 October 2023 | <ul><li>All decisions of the Federal Court of Australia, Industrial Relations Court of Australia, Australian Competition Tribunal, Copyright Tribunal, Defence Force Discipline Appeal Tribunal, Federal Police Disciplinary Tribunal, Trade Practices Tribunal and Supreme Court of Norfolk Island.</li></ul> |
| [NSW Caselaw](https://www.caselaw.nsw.gov.au/) | 2 November 2023 | <ul><li>All decisions of the NSW Children's Court, Compensation Court, Court of Appeal, Court of Criminal Appeal, District Court, Drug Court, Industrial Relations Commission, Land and Environment Court, Local Court, Supreme Court, Administrative Decisions Tribunal, Civil and Administrative Tribunal, Dust Diseases Tribunal, Equal Opportunity Tribunal, Fair Trading Tribunal, Legal Services Tribunal, Medical Tribunal and Transport Appeals Boards.</li></ul> |
| [NSW Legislation](https://legislation.nsw.gov.au/) | 29 October 2023 | <ul><li>The most recent versions of all in force public and private acts (primary legislation); and</li> <li>The most recent versions of all in force statutory instruments and environmental planning instruments (secondary legislation).</li></ul> |
| [Queensland Legislation](https://www.legislation.qld.gov.au/) | 29 October 2023 | <ul><li>The most recent versions of all in force acts (primary legislation);</li> <li>The most recent versions of all in force statutory instruments (secondary legislation); and</li> <li>The as introduced versions of all bills.</li></ul> |
| [Western Australian Legislation](https://www.legislation.wa.gov.au/) | 29 October 2023 | <ul><li>The most recent versions of all in force acts (primary legislation); and</li> <li>The most recent versions of all in force subsidiary legislation (secondary legislation).</li></ul> |
| [South Australian Legislation](https://www.legislation.sa.gov.au/) | 29 October 2023 | <ul><li>The most recent versions of all in force acts (primary legislation); and</li> <li>The most recent versions of all in force proclamations, policies and regulations (secondary legislation).</li></ul> |
| [Tasmanian Legislation](https://www.legislation.tas.gov.au/) | 29 October 2023 | <ul><li>The most recent versions of all in force acts (primary legislation); and</li> <li>The most recent versions of all in force statutory rules (secondary legislation).</li></ul> |
The code used to create and update the Corpus can be found [here](https://github.com/umarbutler/open-australian-legal-corpus-creator).
Those interested in learning more about how the Corpus was built are encouraged to read Umar Butler's accompanying article, [*How I built the largest open database of Australian law*](https://umarbutler.com/how-i-built-the-largest-open-database-of-australian-law/).
## Changelog 🔄
All notable changes to the Corpus are documented in its [Changelog 🔄](CHANGELOG.md).
This project adheres to [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Licence 📄
As a work constituting a collection of documents that have been cleaned, structured, annotated and otherwise processed, the Corpus itself is licensed under the [Creative Commons Attribution 4.0 International Licence](https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, on the condition that you give appropriate credit to the original author and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
Documents contained within the Corpus are distributed under similarly permissive licences that allow for both non-commercial and commercial use; they are set out in the complete version of the Corpus' licence [here](LICENCE.md).
## Citation 🔖
If you've relied on the Corpus for your work, please cite:
```bibtex
@misc{butler-2023-open-australian-legal-corpus,
author = {Butler, Umar},
year = {2023},
title = {Open Australian Legal Corpus},
publisher = {Hugging Face},
version = {4.1.0},
url = {https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus}
}
```
## Acknowledgements 🙏
In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.
The author thanks the [Federal Register of Legislation](https://www.legislation.gov.au/), [Federal Court of Australia](https://www.fedcourt.gov.au/digital-law-library/judgments/search), [NSW Caselaw](https://www.caselaw.nsw.gov.au/), [NSW Legislation](https://legislation.nsw.gov.au/), [Queensland Legislation](https://www.legislation.qld.gov.au/), [Western Australian Legislation](https://www.legislation.wa.gov.au/), [South Australian Legislation](https://www.legislation.sa.gov.au/) and [Tasmanian Legislation](https://www.legislation.tas.gov.au/) for all granting him permission to scrape their data.
The author also acknowledges the creators of the many Python libraries relied upon in the creation of the Corpus, as well as the makers of the [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law), which served as a great source of inspiration for this project.
Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs. | 12,681 | [
[
-0.0313720703125,
-0.052398681640625,
0.045623779296875,
0.0099029541015625,
-0.0159912109375,
-0.02679443359375,
-0.01125335693359375,
-0.00862884521484375,
0.0208740234375,
0.07855224609375,
-0.016143798828125,
-0.051849365234375,
-0.04046630859375,
0.0273... |
joonhok-exo-ai/korean_law_open_data_precedents | 2023-07-05T08:43:35.000Z | [
"size_categories:10K<n<100K",
"language:ko",
"license:openrail",
"legal",
"region:us"
] | joonhok-exo-ai | null | null | 2 | 24 | 2023-06-29T12:51:31 | ---
language:
- ko
tags:
- legal
size_categories:
- 10K<n<100K
license: openrail
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [김준호](mailto:joonhok@smartfitnow.com)
### Dataset Summary
The complete set of court precedents provided by the [Ministry of Government Legislation's national law open-data centre](https://open.law.go.kr/LSO/main.do).
## Dataset Structure
### Data Instances
An individual record looks like the following.
The fields largely follow the output of the precedent full-text lookup API, except that the "법원종류코드" (court type code) and "사건종류코드" (case type code) fields were dropped, the documented "판시유형" field actually comes back as "판결유형" in real responses and is kept under that name, and finally the "판례내용" field was replaced with "전문" (full text).
```
{
'판례정보일련번호': 101924
'사건명': '손해배상'
'사건번호': '85다카1594'
'선고일자': 19860722,
'선고': '선고'
'법원명': '대법원'
'사건종류명': '민사'
'판결유형': '판결'
'판시사항': '가. 미성년자가 부모의 개호를 받을 수 있는 경우, 손해로서의 개호인 비용 / 나. 호프만식계산법에 의한 일실이익 산정의 적부 다. 연별 호프만식계산법에 의하여 중간이자를 공제하는 경우, 단리연금 현가율이 20을 넘는 경우의 일실이익 산정방법'
'판결요지': '가. 신체의 부자유로 인하여 개호인의 조력을 받을 필요가 있는 경우에는 비록 피해자가 미성년자이고 그의 부모가 개호를 할 수 있는 형편에 있다 하더라도 반드시 그 부모의 개호를 받아야 한다고 단정할 수 없음은 물론, 가사 그 부모의 개호를 받게 된다고 하더라도 이로 인하여 피해자가 입는 손해는 특별한 사정이 없는 한 통상의 개호인 비용 전액이다. 나. 호프만식계산법에 의하여 중간이자를 공제하여 장래의 일실이익의 현가를 산정하는 것은 위법한 것이 아니다. 다. 연별 호프만식계산법에 의하여 중간이자를 공제하는 경우에 단리연금현가율이 20을 넘는 경우에는 그 단리연금현가율을 그대로 적용하여 그 현가를 산정하게 되면 현가로 받게 되는 금액의 이자가 매월 입게 되는 손해액보다 많게 되어 손해액보다 더 많은 금원을 배상하게 되는 불합리한 결과를 가져오게 되므로 그 단리연금현가율이 결과적으로 20을 넘는 경우에 있어서는 그 수치표상의 단리연금현가율이 얼마인지를 불문하고 모두 20을 적용 계산함으로써 피해자가 과잉배상을 받는 일이 없도록 하여야 한다.'
'참조조문': '가.나.다. 민법 제763조'
'참조판례': '나. 대법원 1981.9.22 선고 81다588 판결, 1985.10.22 선고 85다카819 판결 / 다. 대법원 1985.10.22 선고 85다카819 판결, 1986.3.25 선고 85다카2375 판결'
'판결유형': '판결'
'전문': '【원고, 피상고인】 (...이하 생략...)'
}
```
### Data Fields
Most fields need no special explanation, but note that the value of the "선고일자" (decision date) field is a number, not a string. Also, in some records the month and day information is missing from "선고일자" and only the year remains, so the value has four digits instead of eight.
Also be aware that some fields, such as "사건명" (case name), may have no value. A sketch of one way to normalise the date field follows.
## Dataset Creation
### Curation Rationale
The precedent data in this dataset is also accessible through the shared-use API, but this dataset was created because:
1. iterating over the entire collection through the API is cumbersome,
2. parsing and preprocessing the API responses every time is tedious, and
3. some errors in the API response data could be cleaned up in advance.
### Source Data
#### Initial Data Collection and Normalization
The data was collected using the national law centre's "precedent list lookup API" and "precedent full-text lookup API".
The list API was called first to collect the precedent serial numbers, and the full-text API was then called with each serial number to collect the precedent records.
The full text can be requested in two formats, XML and HTML. To validate and clean the data, every record was requested
in both formats and the two responses were compared; for some records the values
differed depending on the request format.
For example, when the precedent with serial number 152179 is requested in XML and in HTML, the "【원심판결】" (lower-court judgment) part of its "전문" (full text) comes back as follows.
When requested in XML format:
```
"1. 서울중앙지방법원 2009. 4. 3. 선고 2009고합167 판결(이하 ‘제1원심판결’이라고 한다) / 2. 서울중앙지방법원 2009. 5. 8. 선고 2009고합416 판결(이하 ‘제2원심판결’이라고 한다)"
```
When requested in HTML format:
```
서울중앙지방법원 2009. 4. 3. 선고 2009고합167 판결
```
There were a few dozen records whose "【원심판결】" part differed like this depending on the request format; this dataset keeps whichever response carries more information (the XML response in the case above).
Beyond that, a few records contained errors in both formats (broken statute hyperlink formatting, malformed anonymisation markers, and so on);
these were corrected by hand.
Finally, some records contained images; the images were all omitted and only the text was kept.
Records corrected by hand because of errors in the body text: 212537, 188351, 188019, 200567
Records containing images:
184135,
182916,
186027,
185375,
184151,
184597,
186156,
184655,
185123,
198440,
197577
## Additional Information
### Dataset Curators
Joonho Kim ([LinkedIn](https://www.linkedin.com/in/joonho-kim/)): I built this dataset because I needed it myself while building an AI legal service.
### Contributions
If you find any errors in the data, please contact [joonhok@smartfitnow.com](mailto:joonhok@smartfitnow.com);
they will be verified and incorporated. | 3,509 | [
[
-0.051300048828125,
-0.033599853515625,
0.0180511474609375,
0.03155517578125,
-0.0264892578125,
0.00106048583984375,
0.0263214111328125,
-0.019622802734375,
0.0382080078125,
0.0216827392578125,
-0.035186767578125,
-0.045928955078125,
-0.040679931640625,
0.01... |
causal-lm/instructions-ko | 2023-07-24T05:54:16.000Z | [
"language:ko",
"region:us"
] | causal-lm | null | null | 1 | 24 | 2023-07-02T06:42:03 | ---
language: ko
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 71817534.51580903
num_examples: 112104
- name: validation
num_bytes: 8026314.24732017
num_examples: 12429
download_size: 43862664
dataset_size: 79843848.7631292
---
# Dataset Card for "instructions-ko"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.032745361328125,
-0.025543212890625,
0.037689208984375,
0.02301025390625,
-0.0194549560546875,
-0.01209259033203125,
0.018280029296875,
0.004581451416015625,
0.048095703125,
0.04541015625,
-0.0810546875,
-0.06524658203125,
-0.033172607421875,
-0.017395019... |
beyond/rlhf-reward-single-round-trans_chinese | 2023-07-05T13:03:15.000Z | [
"region:us"
] | beyond | null | null | 24 | 24 | 2023-07-05T13:02:55 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 12139022
num_examples: 19862
- name: test
num_bytes: 3117841
num_examples: 4996
download_size: 10699367
dataset_size: 15256863
---
# Dataset Card for "rlhf-reward-single-round-trans_chinese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 519 | [
[
-0.01763916015625,
-0.0237274169921875,
-0.00089263916015625,
0.0259857177734375,
-0.03302001953125,
-0.00472259521484375,
0.01053619384765625,
-0.0171661376953125,
0.061553955078125,
0.041015625,
-0.07818603515625,
-0.05548095703125,
-0.03289794921875,
-0.0... |
iamkaikai/amazing_logos | 2023-07-11T20:28:47.000Z | [
"license:unknown",
"region:us"
] | iamkaikai | null | null | 0 | 24 | 2023-07-06T19:53:04 | ---
license: unknown
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 57955506.86
num_examples: 6866
download_size: 53988605
dataset_size: 57955506.86
---
Super high quality logos from Logobook.com
| 291 | [
[
-0.02642822265625,
0.004852294921875,
0.0221710205078125,
0.01092529296875,
-0.036376953125,
0.03448486328125,
0.0090179443359375,
-0.05303955078125,
0.0238189697265625,
0.0369873046875,
-0.04693603515625,
-0.0188140869140625,
-0.044158935546875,
0.015510559... |
Amod/hair_medical_sit | 2023-07-20T19:30:20.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:openrail",
"medical",
"region:us"
] | Amod | null | null | 0 | 24 | 2023-07-13T17:56:40 | ---
license: openrail
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- medical
size_categories:
- n<1K
---
# Dataset Description
- **Point of Contact:** [amod@silverlineit.co]
## Dataset Summary
This dataset contains information about common hair-related diseases. Each record includes the disease name, the medicine used to treat it, the duration of treatment, the severity of the disease, and the common side effects of the medication.
## Supported Tasks and Leaderboards
This dataset supports tasks like medication recommendation, disease diagnosis based on symptoms, etc.
## Languages
The text in the dataset is in English and consists of medical terminology; the names of the diseases, medications, and side effects are internationally recognized terms.
# Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
## Data Instances
A data instance has the following structure:
```json
{
"Hair Diseases": "Alopecia Areata",
"Medicine": "Minoxidil solution",
"Duration": "12 months",
"Severity": "Severe",
"Side Effects": "Scalp irritation, Unwanted hair growth, Dizziness"
}
```
## Data Fields
- `Hair Diseases`: The name of the hair-related disease.
- `Medicine`: The medication used to treat the disease.
- `Duration`: The duration of treatment.
- `Severity`: The severity of the disease.
- `Side Effects`: A list of common side effects of the medication.
## Data Splits
The dataset has not been split into train, test, and validation sets.
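A minimal usage sketch (the `train` split name assumes the default split Hugging Face assigns to unsplit datasets; field names are as shown above):

```python
from datasets import load_dataset

ds = load_dataset("Amod/hair_medical_sit", split="train")

# Filter to severe cases and inspect the recommended medication.
severe = ds.filter(lambda row: row["Severity"] == "Severe")
print(severe[0]["Hair Diseases"], "->", severe[0]["Medicine"])
```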
# Dataset Creation
## Curation Rationale
The dataset was created to assist in medical research and to aid in disease diagnosis and treatment recommendation.
## Source Data
### Initial Data Collection and Normalization
The dataset was collected from various medical resources and compiled into a structured CSV file.
### Who are the source language producers?
The original language data was produced by medical professionals.
## Annotations
The dataset does not contain any annotations.
# Considerations for Using the Data
## Social Impact of Dataset
The dataset could be used to create systems that provide treatment recommendations for common hair related diseases, helping to improve healthcare outcomes.
## Discussion of Biases
The dataset does not contain any explicit biases as it is based on medical facts. However, it is limited to common hair diseases and their treatments and does not include all possible diseases or treatments.
## Other Known Limitations
The dataset only includes the most common side effects of the medications and does not cover all potential side effects.
# Additional Information
## Dataset Curators
The dataset was curated by [Amod](https://huggingface.co/Amod).
## Citation Information
To the best of our knowledge, this dataset has not been cited in any publications. | 2,858 | [
[
-0.020111083984375,
-0.062744140625,
0.0275115966796875,
0.0145111083984375,
-0.0034198760986328125,
-0.0085296630859375,
-0.00341796875,
-0.03076171875,
0.0809326171875,
0.063232421875,
-0.060302734375,
-0.0999755859375,
-0.047393798828125,
0.0125732421875,... |
dim/mt_bench_ru | 2023-07-25T13:19:39.000Z | [
"region:us"
] | dim | null | null | 0 | 24 | 2023-07-24T00:10:25 | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: category
dtype: string
- name: turns
sequence: string
- name: turns_ru
sequence: string
splits:
- name: train
num_bytes: 95817
num_examples: 80
download_size: 55916
dataset_size: 95817
---
# Dataset Card for "mt_bench_ru"
A dataset translated automatically with facebook/wmt21-dense-24-wide-en-x and then corrected by hand in some places.
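A minimal loading sketch (field names follow the `dataset_info` above):

```python
from datasets import load_dataset

ds = load_dataset("dim/mt_bench_ru", split="train")

row = ds[0]
print(row["category"])
print(row["turns"][0])     # original English turn
print(row["turns_ru"][0])  # machine-translated, hand-corrected Russian turn
```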
If you would like to fix this dataset, you can use this Google Sheet: https://docs.google.com/spreadsheets/d/1C2znaufnvMU2PyqaDKMTrRKPvS60xtisdcRSlqQGUUs/edit?usp=sharing | 654 | [
[
-0.023834228515625,
-0.037994384765625,
0.01490020751953125,
0.0145263671875,
-0.052276611328125,
-0.006900787353515625,
0.00002396106719970703,
0.004367828369140625,
0.023773193359375,
0.017578125,
-0.07562255859375,
-0.06689453125,
-0.036712646484375,
0.00... |
Yukang/Pile-subset | 2023-08-24T09:56:35.000Z | [
"region:us"
] | Yukang | The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality
datasets combined together. | @misc{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
year={2020},
eprint={2101.00027},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 24 | 2023-08-01T04:18:00 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
loremipsum3658/jur-entailment | 2023-08-18T11:48:07.000Z | [
"region:us"
] | loremipsum3658 | null | null | 0 | 24 | 2023-08-18T11:46:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: ementa1
dtype: string
- name: ementa2
dtype: string
- name: similarity
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 39538896
num_examples: 17448
- name: test
num_bytes: 8539490
num_examples: 3739
- name: validation
num_bytes: 8441857
num_examples: 3739
download_size: 30802928
dataset_size: 56520243
---
# Dataset Card for "jur-entailment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 786 | [
[
-0.036102294921875,
-0.045013427734375,
0.0180206298828125,
0.006011962890625,
-0.01468658447265625,
-0.0105743408203125,
0.004383087158203125,
-0.00902557373046875,
0.06707763671875,
0.0484619140625,
-0.041778564453125,
-0.049530029296875,
-0.039642333984375,
... |
qgyd2021/chinese_ner_sft | 2023-10-07T11:36:27.000Z | [
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100M<n<1B",
"language:zh",
"license:apache-2.0",
"ner",
"region:us"
] | qgyd2021 | null | @dataset{chinese_ner_sft,
author = {Xing Tian},
title = {chinese_ner_sft},
month = sep,
year = 2023,
publisher = {Xing Tian},
version = {1.0},
} | 13 | 24 | 2023-09-03T01:48:44 | ---
task_categories:
- token-classification
- question-answering
- text-generation
- text2text-generation
language:
- zh
tags:
- ner
size_categories:
- 100M<n<1B
license: apache-2.0
---
## Chinese NER Instruction Dataset
Open-source named-entity-recognition datasets were collected and repackaged as an sft dataset for LLM fine-tuning.
The purpose of this dataset is to support research on general-purpose entity recognition with LLMs.
The dataset is divided into three major groups:
`{dataset_name}`, `{dataset_name}_template`, `{dataset_name}_prompt`.
* `{dataset_name}`: the corresponding entity-recognition dataset.
* `{dataset_name}_template`: prompt templates written for each dataset; because the datasets cover different topics, writing the templates separately keeps them more accurate.
* `{dataset_name}_prompt`: a prompt dataset synthesised from `{dataset_name}` and `{dataset_name}_template`. Because it is generated dynamically, Hugging Face may not be able to display it; some data examples follow.
<details>
<summary>Expand to view data examples</summary>
<pre><code>
{
"prompt": "在做手机智能助手上, 你需要识别用户话语中的关键实体, 实体类型包括:\n联系人姓名,场景,主旋律,乐器名称,曲风,手机号码,语言,时代,目的地,流行榜单,情绪,出发地点,曲名\n\n用户的话语为:\n来一首周华健的花心\n\n请按以下格式输出实体:\nText: 实体的原始文本, 从原句子中截取.\nLabel: 实体类型, 以上给出的实体类型之一.\n\nTips:\n1. 可同时输出多个实体.\n2. 没有实体时可回答: 找不到任何相关的实体.",
"response": "Text: 花心\nLabel: 曲名"
}
{
"prompt": "在手机或电脑助手的场景中, 我们需要识别用户指令中的实体.\n请给出以下句子中所有的主旋律,情感,时期,地址,曲子实体.\n------------\n播放我们都一样\n------------\n\n注意:\n1. 输出时如果有多个实体, 每行一个实体.\n\noutput:",
"response": "曲子: 我们都一样"
}
{
"prompt": "在做手机智能助手上, 你需要识别用户话语中的关键实体, 实体类型包括:\n流行榜单,曲名,乐器,年代,情感,电话号码,情景,语言,地址,音乐风格,歌手,音乐主题,联系人,出发地点,地址\n\n用户的话语为:\n播放不一样来一首我们不一样\n\n请按以下格式输出实体:\nText: 实体的原始文本, 从原句子中截取.\nLabel: 实体类型, 以上给出的实体类型之一.\n\nTips:\n1. 可同时输出多个实体.\n2. 没有实体时可回答: 找不到任何相关的实体.",
"response": "Text: 不一样\nLabel: 曲子\nText: 我们不一样\nLabel: 曲名"
}
{
"prompt": "我们在做一个手机智能助手, 它需要识别用户的指令中的实体, 包括:\n目的地,主题,时代,电话号码,乐器,歌手,音乐风格,始发地点,语言,联系人,曲子,情绪\n\n现在请你给出以下句子中所包含的实体.\n到信阳市汽车配件城\n\n请按以下格式输出:\nEntityText: 实体的原始文本.\nEntityLabel: 实体标签, 必须是以上给定的其中之一.\n\n可同时输出多个实体.",
"response": "EntityText: 信阳市汽车配件城\nEntityLabel: 目的地"
}
{
"prompt": "我们在做一个手机智能助手, 它需要识别用户的指令中的实体, 包括:\n地址,出发地点,时代,演唱者,情绪,音乐主题,热门榜单,电话号码,联系人姓名,语种\n\n现在请你给出以下句子中所包含的实体.\n米静\n\n请按以下格式输出:\nEntityText: 实体的原始文本.\nEntityLabel: 实体标签, 必须是以上给定的其中之一.\n\n可同时输出多个实体.",
"response": "没有从`米静`中找到任何关于`地址,出发地点,时代,演唱者,情绪,音乐主题,热门榜单,电话号码,联系人姓名,语种`的实体. "
}
{
"prompt": "在手机或电脑助手的场景中, 我们需要识别用户指令中的实体.\n请给出以下句子中所有的情绪,场景,联系人,歌曲,年代,地址实体.\n------------\n信阳汽配城\n------------\n\n注意:\n1. 输出时如果有多个实体, 每行一个实体.\n\noutput:",
"response": "地址: 信阳汽配城"
}
</code></pre>
</details>
Tips:
* Possibly because there is too much data and too many `subset`s, the `Dataset Viewer` often fails.
You can inspect the `keys` of `_entity_urls`, `_template_urls`, and `_prompt_urls` in the file `chinese_ner_sft.py` to determine which `subset`s are available.
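A minimal loading sketch following the naming convention above (the subset name `CMeEE_prompt` is an assumption; confirm it against the keys in `chinese_ner_sft.py`):

```python
from datasets import load_dataset

# `{dataset_name}_prompt` subsets yield prompt/response pairs like the examples above.
ds = load_dataset("qgyd2021/chinese_ner_sft", name="CMeEE_prompt", split="train")

example = ds[0]
print(example["prompt"])
print(example["response"])
```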
**Feel free to share your prompt paradigms in the Community tab; they will be added.**
The component datasets were collected and organised from the web as follows:
| Dataset | Original data / project page | Samples | Entity types | Original data description | Alternative download |
| :--- | :---: | :---: | :---: | :---: | :---: |
| CMeEE | [CBLUE](http://www.cips-chip.org.cn/2021/CBLUE); [Tianchi download](https://tianchi.aliyun.com/dataset/95414) | 20000 | 9 broad classes of medical entities, including paediatric diseases, body parts, clinical manifestations and medical procedures | Medical entity recognition task | [nlhappy/CMeEE](https://huggingface.co/datasets/nlhappy/CMeEE) [Rosenberg/CMeEE-V2](https://huggingface.co/datasets/Rosenberg/CMeEE-V2) |
| CCKS2019_task1 | [Yidu-S4K](http://openkg.cn/dataset/yidu-s4k) | 1379 | anatomical site, operation, disease and diagnosis, drug, laboratory test, imaging examination | CCKS2019 NER dataset for Chinese electronic medical records | |
| CLUENER2020 | [CLUE](https://www.cluebenchmarks.com/introduce.html); [CLUENER](https://storage.googleapis.com/cluebenchmark/tasks/cluener_public.zip) | 12091 | game, organization, government, movie, person name, book, company, scene, position, address | CLUENER2020 dataset | |
| MSRA | [MSRA](https://www.msra.cn/) | 48442 | address, organization, person name | Open NER dataset from Microsoft Research Asia | [doushabao4766/msra_ner_k_V3_wc_bioes](https://huggingface.co/datasets/doushabao4766/msra_ner_k_V3_wc_bioes) |
| NLPCC2018_task4 | [NLPCC2018](http://tcci.ccf.org.cn/conference/2018/taskdata.php); [NLPCC2018_task4](http://tcci.ccf.org.cn/conference/2018/dldoc/trainingdata04.zip) | 21352 | singer, song, theme, emotion, style, destination, phone number, instrument, contact, age, hot list, custom destination, language, scene, origin | Task-oriented dialogue system dataset | |
| CCFBDCI | [CCFBDCI (downloadable after filling in the application form)](https://www.datafountain.cn/competitions/510/datasets) | 15723 | LOC, GPE, ORG and PER | Robustness evaluation dataset for Chinese NER algorithms | |
| MMC | [MMC](https://tianchi.aliyun.com/competition/entrance/231687/information) [MMC dataset](https://aistudio.baidu.com/datasetdetail/146995) | 3498 | entity types | Ruijin Hospital MMC AI-assisted knowledge-graph construction competition dataset | |
| WeiBo | [WeiBo](https://github.com/hltcoe/golden-horse/tree/master) | 1890 | LOC.NAM, LOC.NOM, PER.NAM, ORG.NOM, ORG.NAM, GPE.NAM and PER.NOM | Chinese NER dataset for social media | |
| ECommerce | [ECommerce](https://github.com/allanj/ner_incomplete_annotation/tree/master) | 7998 | MISC, XH, HPPX and HCCX | NER dataset for e-commerce | |
| YouKu | [YouKu](https://github.com/allanj/ner_incomplete_annotation/tree/master) | | MISC, XH, HPPX and HCCX | NER dataset for e-commerce | |
| FinanceSina | [FinanceSina](https://github.com/jiesutd/LatticeLSTM/tree/master) | 1579 | LOC, GPE, ORG and PER | Chinese NER dataset crawled from Sina Finance | |
| Resume | [Resume](https://github.com/jiesutd/LatticeLSTM/tree/master/ResumeNER) | 4761 | NAME, EDU, LOC, ORG, PRO, TITLE, CONT and RACE | Résumés of senior executives of companies listed on the Chinese stock market | |
| Bank | [Bank](https://www.heywhale.com/mw/dataset/617969ec768f3b0017862990/file) | 10000 | BANK, COMMENTS_ADJ, COMMENTS_N and PRODUCT | Bank lending dataset | |
| DLNER | [DLNER](https://github.com/lancopku/Chinese-Literature-NER-RE-Dataset/tree/master) | 28897 | Location, Thing, Abstract, Organization, Metric, Time, Physical, Person and Term | Discourse-level NER dataset | |
Reference documentation:
[Prompt Engineering Guide](https://www.promptingguide.ai/zh)
<details>
<summary>Expand to view the reference data sources</summary>
<pre><code>
[ttxy/cn_ner](https://huggingface.co/datasets/ttxy/cn_ner)
[xusenlin/clue-ner](https://huggingface.co/datasets/xusenlin/clue-ner)
[xusenlin/people-daily-ner](https://huggingface.co/datasets/xusenlin/people-daily-ner)
[peoples_daily_ner](https://huggingface.co/datasets/peoples_daily_ner)
[weibo_ner](https://huggingface.co/datasets/weibo_ner)
[Rosenberg/weibo_ner](https://huggingface.co/datasets/Rosenberg/weibo_ner)
[OneFly/NER](https://huggingface.co/datasets/OneFly/NER)
[djagatiya/ner-ontonotes-v5-eng-v4](https://huggingface.co/datasets/djagatiya/ner-ontonotes-v5-eng-v4)
[Adapting/chinese_biomedical_NER_dataset](https://huggingface.co/datasets/Adapting/chinese_biomedical_NER_dataset)
[nlhappy/CLUE-NER](https://huggingface.co/datasets/nlhappy/CLUE-NER)
[ttxy/resume_ner](https://huggingface.co/datasets/ttxy/resume_ner)
[doushabao4766/ccks_2019_ner_k_V3_wc](https://huggingface.co/datasets/doushabao4766/ccks_2019_ner_k_V3_wc)
</code></pre>
</details>
| 6,149 | [
[
-0.037353515625,
-0.053436279296875,
0.0156097412109375,
0.038482666015625,
-0.0296630859375,
-0.0189666748046875,
-0.015472412109375,
-0.035919189453125,
0.05706787109375,
0.0171356201171875,
-0.048065185546875,
-0.058563232421875,
-0.028076171875,
0.019714... |
benhachem/KHATT | 2023-09-12T13:05:06.000Z | [
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"language:ar",
"OCR",
"Optical Character Recognition ",
"Arabic OCR",
"arabic ",
"ocr",
"Textline images",
"region:us"
] | benhachem | KHATT (KFUPM Handwritten Arabic TexT) database is a database of unconstrained handwritten Arabic Text written by 1000 different writers. This research database’s development was undertaken by a research group from KFUPM, Dhahran, S audi Arabia headed by Professor Sabri Mahmoud in collaboration with Professor Fink from TU-Dortmund, Germany and Dr. Märgner from TU-Braunschweig, Germany. | @article{Pattern Recognition,
Author = {Sabri A. Mahmoud, Irfan Ahmad, Wasfi G. Al-Khatib, Mohammad Alshayeb, Mohammad Tanvir Parvez, Volker Märgner, Gernot A. Fink},
Title = { {KHATT: An Open Arabic Offline Handwritten Text Database} },
Year = {2013},
doi = {10.1016/j.patcog.2013.08.009},
} | 0 | 24 | 2023-09-11T12:01:17 | ---
task_categories:
- image-to-text
language:
- ar
tags:
- OCR
- 'Optical Character Recognition '
- Arabic OCR
- 'arabic '
- ocr
- Textline images
size_categories:
- 1K<n<10K
---
# KFUPM Handwritten Arabic TexT (KHATT) database
### Version 1.0 (September 2012 Release)
The database contains handwritten Arabic text-line images and their ground truth, developed for
research on Arabic handwritten text. It was used for the pilot experimentation reported in the paper: <ins>S. A. Mahmoud, I. Ahmad, M. Alshayeb, W. G. Al-Khatib, M. T. Parvez, G. A. Fink, V. Margner, and H. EL Abed, “KHATT: Arabic Offline
Handwritten Text Database”</ins>, in Proceedings of the 13th International Conference on Frontiers in Handwriting Recognition (ICFHR 2012), Bari, Italy, 2012, pp. 447-452, IEEE Computer Society.
| 865 | [
[
-0.0127716064453125,
-0.04986572265625,
0.041046142578125,
0.0170135498046875,
-0.03411865234375,
0.00318145751953125,
0.002803802490234375,
-0.037384033203125,
0.0060577392578125,
0.057769775390625,
-0.0411376953125,
-0.0655517578125,
-0.046112060546875,
-0... |
librarian-bot/librarian-bot-stats | 2023-10-24T01:24:42.000Z | [
"region:us"
] | librarian-bot | null | null | 0 | 24 | 2023-09-11T14:56:07 | ---
dataset_info:
features:
- name: createdAt
dtype: timestamp[us]
- name: pr_number
dtype: int64
- name: status
dtype: large_string
- name: repo_id
dtype: large_string
- name: type
dtype: large_string
- name: isPullRequest
dtype: bool
splits:
- name: train
num_bytes: 1116096
num_examples: 12758
download_size: 409271
dataset_size: 1116096
---
# Dataset Card for "librarian-bot-stats"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 572 | [
[
-0.03729248046875,
-0.0191802978515625,
0.01273345947265625,
-0.0015668869018554688,
-0.00748443603515625,
-0.01154327392578125,
0.02691650390625,
0.010223388671875,
0.047332763671875,
0.04034423828125,
-0.05877685546875,
-0.047027587890625,
-0.025238037109375,
... |
DopeorNope/combined | 2023-09-28T03:32:25.000Z | [
"region:us"
] | DopeorNope | null | null | 0 | 24 | 2023-09-28T03:25:58 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 36438102
num_examples: 27085
download_size: 19659282
dataset_size: 36438102
---
# Dataset Card for "combined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 430 | [
[
-0.044891357421875,
-0.01239776611328125,
0.00765228271484375,
0.0180511474609375,
-0.0265655517578125,
0.014923095703125,
0.01024627685546875,
-0.027130126953125,
0.06610107421875,
0.04180908203125,
-0.057159423828125,
-0.0438232421875,
-0.040130615234375,
... |
sayan1101/llama-2-13b-subjectfinetune-grammar | 2023-10-03T12:22:56.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 24 | 2023-09-30T12:17:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Prompt
dtype: string
splits:
- name: train
num_bytes: 1250979.4995054402
num_examples: 4549
- name: test
num_bytes: 139150.50049455985
num_examples: 506
download_size: 447422
dataset_size: 1390130.0
---
# Dataset Card for "llama-2-13b-subjectfinetune-grammar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 586 | [
[
-0.01552581787109375,
-0.0262603759765625,
0.02606201171875,
0.04595947265625,
-0.0243377685546875,
-0.0017251968383789062,
-0.0036716461181640625,
-0.005435943603515625,
0.040191650390625,
0.0304412841796875,
-0.072021484375,
-0.0596923828125,
-0.05059814453125... |
Dloring1/Mini-10K-Recipes | 2023-10-02T21:40:08.000Z | [
"region:us"
] | Dloring1 | null | null | 0 | 24 | 2023-10-02T21:35:50 | ---
dataset_info:
features:
- name: input
dtype: string
splits:
- name: train
num_bytes: 7307080.393135772
num_examples: 10000
download_size: 3870373
dataset_size: 7307080.393135772
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Mini-10K-Recipes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 466 | [
[
-0.038726806640625,
-0.0183258056640625,
0.018798828125,
0.01519775390625,
0.004688262939453125,
-0.005840301513671875,
0.01580810546875,
-0.0037784576416015625,
0.07781982421875,
0.044219970703125,
-0.06634521484375,
-0.0433349609375,
-0.041534423828125,
-0... |
shossain/govreport-qa-5-4096 | 2023-10-03T19:36:20.000Z | [
"region:us"
] | shossain | null | null | 0 | 24 | 2023-10-03T18:23:54 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 266300
num_examples: 5
download_size: 71798
dataset_size: 266300
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-5-4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 528 | [
[
-0.034332275390625,
0.0024471282958984375,
0.0288238525390625,
0.015869140625,
-0.017913818359375,
-0.006122589111328125,
0.03424072265625,
-0.0125579833984375,
0.048614501953125,
0.03790283203125,
-0.044921875,
-0.05621337890625,
-0.02880859375,
-0.00627517... |
AlanRobotics/lima-processed | 2023-10-03T20:49:49.000Z | [
"region:us"
] | AlanRobotics | null | null | 0 | 24 | 2023-10-03T20:49:10 | ---
dataset_info:
features:
- name: user
dtype: string
- name: assistant
dtype: string
splits:
- name: train
num_bytes: 2868376
num_examples: 1030
download_size: 1682336
dataset_size: 2868376
---
# Dataset Card for "lima-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 394 | [
[
-0.0300750732421875,
-0.031982421875,
0.030853271484375,
0.037628173828125,
-0.033172607421875,
-0.0091705322265625,
0.02490234375,
-0.023193359375,
0.072509765625,
0.053924560546875,
-0.06341552734375,
-0.054595947265625,
-0.06317138671875,
-0.0088577270507... |
Intuit-GenSRF/jquiros-suicide | 2023-10-05T00:50:50.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 24 | 2023-10-05T00:49:36 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 165623664
num_examples: 232074
download_size: 100436023
dataset_size: 165623664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jquiros-suicide"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 489 | [
[
-0.0280303955078125,
-0.019622802734375,
0.0273895263671875,
0.016204833984375,
-0.00005882978439331055,
0.011474609375,
0.00919342041015625,
0.0041351318359375,
0.060821533203125,
0.01983642578125,
-0.06549072265625,
-0.053375244140625,
-0.041839599609375,
... |
Intuit-GenSRF/toxigen-train-annotated | 2023-10-05T01:50:15.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 24 | 2023-10-05T01:50:13 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 951313
num_examples: 8960
download_size: 553547
dataset_size: 951313
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "toxigen-train-annotated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 486 | [
[
-0.039031982421875,
0.00580596923828125,
0.0225982666015625,
0.034912109375,
-0.01268768310546875,
-0.00954437255859375,
0.0053558349609375,
-0.0162506103515625,
0.04046630859375,
0.03338623046875,
-0.057342529296875,
-0.0565185546875,
-0.045867919921875,
-0... |
Falah/emotion_prompts | 2023-10-05T05:53:31.000Z | [
"region:us"
] | Falah | null | null | 0 | 24 | 2023-10-05T05:53:29 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 4626262
num_examples: 10000
download_size: 669543
dataset_size: 4626262
---
# Dataset Card for "emotion_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 360 | [
[
-0.0506591796875,
-0.019866943359375,
0.0264892578125,
0.02886962890625,
-0.0138397216796875,
-0.01081085205078125,
0.01067352294921875,
0.003276824951171875,
0.058349609375,
0.011993408203125,
-0.08203125,
-0.050506591796875,
-0.038238525390625,
-0.00123310... |
Trelis/stanford-NIL-disclosure-ft | 2023-10-17T09:34:27.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"fine-tuning",
"NIL",
"region:us"
] | Trelis | null | null | 0 | 24 | 2023-10-06T08:43:16 | ---
task_categories:
- text-generation
language:
- en
tags:
- fine-tuning
- NIL
size_categories:
- n<1K
---
# NIL Policy
Data is taken from the [Stanford website](https://gostanford.com/sports/2022/11/11/nil-student-athletes.aspx).
Data is chunked into rows for the training set.
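A rough sketch of row-wise chunking (the character-based splitting and the chunk size are assumptions; the actual chunking scheme is not documented here):

```python
def chunk_text(text: str, size: int = 1000) -> list[str]:
    """Split raw policy text into fixed-size rows for a training CSV."""
    return [text[i:i + size] for i in range(0, len(text), size)]
```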
The test.csv dataset is generated using Llama 70B to extract key takeaways from the raw text.
For educational and non-commercial use only. | 422 | [
[
-0.006999969482421875,
-0.0489501953125,
0.0190582275390625,
0.016265869140625,
-0.01039886474609375,
0.0084228515625,
0.0086517333984375,
-0.0260009765625,
0.0305633544921875,
0.05291748046875,
-0.075439453125,
-0.0202789306640625,
-0.0005965232849121094,
-... |
Sharathhebbar24/Indian-Constitution | 2023-10-06T12:57:27.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] | Sharathhebbar24 | null | null | 0 | 24 | 2023-10-06T12:16:20 | ---
license: apache-2.0
task_categories:
- text-classification
- text-generation
- text2text-generation
language:
- en
---
# Indian Constitution Dataset
The dataset can be used for text classification, text generation, and text2text generation. | 244 | [
[
-0.0075531005859375,
-0.0094146728515625,
-0.0241241455078125,
0.0418701171875,
-0.04974365234375,
-0.0026607513427734375,
-0.0028171539306640625,
-0.0030155181884765625,
-0.02105712890625,
0.049560546875,
-0.0212860107421875,
-0.0028400421142578125,
-0.01867675... |
MaxReynolds/Lee_Souder_Combined | 2023-10-06T20:24:19.000Z | [
"region:us"
] | MaxReynolds | null | null | 0 | 24 | 2023-10-06T20:24:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1084639.0
num_examples: 37
download_size: 1080965
dataset_size: 1084639.0
---
# Dataset Card for "Lee_Souder_Combined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 482 | [
[
-0.038604736328125,
-0.010467529296875,
0.01064300537109375,
0.00283050537109375,
-0.00519561767578125,
-0.00011217594146728516,
0.01641845703125,
-0.004062652587890625,
0.0595703125,
0.04034423828125,
-0.06671142578125,
-0.042572021484375,
-0.033416748046875,
... |
infCapital/investopedia_terms_en | 2023-10-07T15:25:31.000Z | [
"region:us"
] | infCapital | null | null | 2 | 24 | 2023-10-07T14:59:07 | ---
dataset_info:
features:
- name: name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 25479415
num_examples: 6305
download_size: 13609845
dataset_size: 25479415
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "investopedia_terms_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
[
-0.039337158203125,
-0.0188751220703125,
0.00946044921875,
0.01715087890625,
-0.0308074951171875,
0.01334381103515625,
0.0213623046875,
-0.02349853515625,
0.0843505859375,
0.032012939453125,
-0.050872802734375,
-0.057342529296875,
-0.041168212890625,
-0.0008... |
ismailiismail/paragraphss_paraphrasing | 2023-10-07T19:59:35.000Z | [
"region:us"
] | ismailiismail | null | null | 0 | 24 | 2023-10-07T17:57:56 | ---
dataset_info:
features:
- name: phrase
dtype: string
- name: paraphrase
dtype: string
splits:
- name: train
num_bytes: 1848761
num_examples: 1000
download_size: 963985
dataset_size: 1848761
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "paragraphss_paraphrasing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 492 | [
[
-0.0252227783203125,
-0.0307769775390625,
0.03033447265625,
0.040496826171875,
-0.0252532958984375,
-0.019866943359375,
0.01262664794921875,
0.01337432861328125,
0.038360595703125,
0.047271728515625,
-0.048309326171875,
-0.0521240234375,
-0.038360595703125,
... |
nandyc/ASL_Isolated_Swin_dataset | 2023-10-09T10:30:57.000Z | [
"region:us"
] | nandyc | null | null | 1 | 24 | 2023-10-09T10:30:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
'5': F
'6': G
'7': H
'8': I
'9': J
'10': K
'11': L
'12': M
'13': N
'14': O
'15': P
'16': Q
'17': R
'18': S
'19': T
'20': U
'21': V
'22': W
'23': X
'24': Y
'25': Z
splits:
- name: train
num_bytes: 19265862.93533333
num_examples: 1468
- name: test
num_bytes: 3392183.4166666665
num_examples: 260
download_size: 22665194
dataset_size: 22658046.351999998
---
# Dataset Card for "ASL_Isolated_Swin_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,103 | [
[
-0.029937744140625,
-0.0197906494140625,
-0.01543426513671875,
0.03814697265625,
-0.015533447265625,
0.00922393798828125,
-0.00411224365234375,
-0.02374267578125,
0.043975830078125,
0.032958984375,
-0.06103515625,
-0.0611572265625,
-0.03143310546875,
-0.0161... |
erbacher/trivia_qa5 | 2023-10-12T22:25:03.000Z | [
"region:us"
] | erbacher | null | null | 0 | 24 | 2023-10-12T16:20:52 | ---
dataset_info:
features:
- name: target
dtype: string
- name: query
dtype: string
- name: gold_generation
sequence: string
- name: text
dtype: string
- name: results
dtype: string
- name: em
dtype: float64
- name: hal_m
dtype: string
splits:
- name: train
num_bytes: 73599939
num_examples: 78785
- name: dev
num_bytes: 8307250
num_examples: 8837
- name: test
num_bytes: 10650305
num_examples: 11313
download_size: 33930791
dataset_size: 92557494
---
# Dataset Card for "trivia_qa5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 697 | [
[
-0.046417236328125,
-0.007587432861328125,
0.0294036865234375,
0.01288604736328125,
-0.01953125,
0.0118255615234375,
0.035491943359375,
-0.01499176025390625,
0.0540771484375,
0.0279388427734375,
-0.053009033203125,
-0.06451416015625,
-0.0245513916015625,
0.0... |
922-CA/MoCha_v1 | 2023-10-15T00:04:06.000Z | [
"license:openrail",
"region:us"
] | 922-CA | null | null | 0 | 24 | 2023-10-12T21:52:37 | ---
license: openrail
---
# Monika Chat v1 (10152023)
* a dataset of ~680 items (dialogue scraped from the game, Reddit, and Twitter)
* these items were augmented by [l2-7b-monika-v0.3c1](https://huggingface.co/922-CA/llama-2-7b-monika-v0.3c1), which turned each one into a snippet of multi-turn chat dialogue between Player and Monika
* finally, the results were manually edited, and more hand-crafted items containing information about the character were added | 436 | [
[
-0.033355712890625,
-0.06854248046875,
0.0328369140625,
0.00576019287109375,
-0.0281524658203125,
0.0213775634765625,
0.0159149169921875,
-0.06256103515625,
0.07342529296875,
0.045745849609375,
-0.0806884765625,
-0.006992340087890625,
-0.042816162109375,
0.0... |
alexrs/alpaca-cleaned-15-clusters | 2023-10-16T14:43:05.000Z | [
"region:us"
] | alexrs | null | null | 0 | 24 | 2023-10-16T14:43:02 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
- name: cluster
dtype: int32
splits:
- name: train
num_bytes: 40490946
num_examples: 51760
download_size: 24185910
dataset_size: 40490946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "alpaca-cleaned-15-clusters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 569 | [
[
-0.05889892578125,
-0.02740478515625,
0.0234222412109375,
0.02142333984375,
-0.0236663818359375,
-0.0029850006103515625,
0.017547607421875,
-0.021728515625,
0.07293701171875,
0.0384521484375,
-0.06500244140625,
-0.0655517578125,
-0.0382080078125,
-0.00949859... |
mb23/GraySpectrogram3 | 2023-10-20T04:53:58.000Z | [
"region:us"
] | mb23 | null | null | 0 | 24 | 2023-10-20T04:51:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 1516124741.75
num_examples: 13258
- name: test
num_bytes: 529969804.75
num_examples: 4722
download_size: 2041762730
dataset_size: 2046094546.5
---
# Dataset Card for "GraySpectrogram3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 599 | [
[
-0.0494384765625,
0.00666046142578125,
0.032440185546875,
0.037078857421875,
-0.0195465087890625,
-0.00623321533203125,
0.0276031494140625,
-0.023590087890625,
0.047943115234375,
0.0260162353515625,
-0.05633544921875,
-0.057220458984375,
-0.046142578125,
-0.... |
davidfant/natural-questions-chunk-1 | 2023-10-22T22:52:24.000Z | [
"region:us"
] | davidfant | null | null | 0 | 24 | 2023-10-22T22:48:50 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4690314797
num_examples: 10000
download_size: 1819108926
dataset_size: 4690314797
---
# Dataset Card for "natural-questions-chunk-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,818 | [
[
-0.065673828125,
-0.0682373046875,
0.0079498291015625,
0.0219879150390625,
-0.033782958984375,
-0.0027637481689453125,
0.0174102783203125,
-0.0188751220703125,
0.073486328125,
0.04815673828125,
-0.06884765625,
-0.023101806640625,
-0.0258941650390625,
-0.0008... |
ppxscal/aminer-citation-graphv14-jaccard | 2023-10-24T01:56:10.000Z | [
"region:us"
] | ppxscal | null | null | 0 | 24 | 2023-10-23T14:13:25 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
Contains text pairs from https://www.aminer.org/citation (v14). Similarity scores are calculated with the Jaccard index.
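The exact text fields and tokenization behind the scores are not documented here; as a minimal sketch, a Jaccard index computed over whitespace-separated word sets would look like the following (the function name and tokenization are illustrative assumptions):
```python
def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Jaccard index |A ∩ B| / |A ∪ B| over word sets (tokenization is an assumption)."""
    words_a, words_b = set(text_a.split()), set(text_b.split())
    union = words_a | words_b
    if not union:  # both texts empty: define similarity as 0
        return 0.0
    return len(words_a & words_b) / len(union)

# e.g. two titles sharing two of five distinct words -> 0.4
print(jaccard_similarity("graph neural networks", "neural networks for graphs"))
```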
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,674 | [
[
-0.03741455078125,
-0.037078857421875,
0.0123443603515625,
0.0160369873046875,
-0.0283355712890625,
-0.01435089111328125,
-0.003643035888671875,
-0.04913330078125,
0.044830322265625,
0.05841064453125,
-0.05755615234375,
-0.0694580078125,
-0.04083251953125,
0... |
ashish-chouhan/arxiv_cs_papers | 2023-10-24T13:31:08.000Z | [
"region:us"
] | ashish-chouhan | null | null | 0 | 24 | 2023-10-24T12:39:22 | ---
dataset_info:
features:
- name: title
dtype: string
- name: abstract
dtype: string
- name: authors
sequence: string
- name: published
dtype: string
- name: url
dtype: string
- name: pdf_url
dtype: string
- name: arxiv_id
dtype: string
splits:
- name: train
num_bytes: 7726383
num_examples: 5000
download_size: 4366827
dataset_size: 7726383
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv_cs_papers"
This dataset contains the subset of ArXiv papers carrying the "cs.LG" tag, which indicates that a paper is about machine learning.
The core dataset is filtered from the full ArXiv dataset hosted on Kaggle: https://www.kaggle.com/datasets/Cornell-University/arxiv. The original dataset contains roughly 2 million papers; this dataset contains roughly 100,000 papers after category filtering.
The dataset is kept up to date with requests to the ArXiv API.
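As a minimal usage sketch (assuming the dataset is loadable from the Hub under this repository's id), the filtered subset can be pulled with the `datasets` library:
```python
from datasets import load_dataset

# Load the filtered subset; the features match the YAML header above.
ds = load_dataset("ashish-chouhan/arxiv_cs_papers", split="train")

print(ds.column_names)  # title, abstract, authors, published, url, pdf_url, arxiv_id
print(ds[0]["title"])   # inspect the first paper
```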
The ArXiv dataset contains features:
<ul>
<li> title </li>
<li> abstract </li>
<li> authors </li>
<li> published </li>
<li> url </li>
<li> pdf_url </li>
<li> arxiv_id </li>
</ul> | 1,158 | [
[
-0.03094482421875,
-0.0345458984375,
0.0159912109375,
-0.0262298583984375,
-0.0203857421875,
0.026397705078125,
0.005565643310546875,
-0.01096343994140625,
-0.0029277801513671875,
0.04351806640625,
-0.0227508544921875,
-0.050689697265625,
-0.034637451171875,
... |
leeseeun/tokenzied_512_news_2gb_data | 2023-10-26T02:24:31.000Z | [
"region:us"
] | leeseeun | null | null | 0 | 24 | 2023-10-26T02:04:19 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 2232750420
num_examples: 1088085
download_size: 0
dataset_size: 2232750420
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tokenzied_512_news_2gb_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 465 | [
[
-0.0322265625,
-0.030181884765625,
0.005268096923828125,
0.031280517578125,
-0.033782958984375,
-0.00110626220703125,
0.0127716064453125,
-0.01116943359375,
0.07159423828125,
0.038421630859375,
-0.052490234375,
-0.0504150390625,
-0.0426025390625,
-0.03424072... |
thanhduycao/soict_train_dataset_v2 | 2023-10-27T18:02:56.000Z | [
"region:us"
] | thanhduycao | null | null | 0 | 24 | 2023-10-27T18:01:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: sentence_norm
dtype: string
- name: wer
dtype: float64
splits:
- name: train
num_bytes: 4196405867
num_examples: 8181
- name: test
num_bytes: 565495055
num_examples: 1092
download_size: 1121417074
dataset_size: 4761900922
---
# Dataset Card for "soict_train_dataset_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 785 | [
[
-0.0225372314453125,
-0.0028820037841796875,
0.01197052001953125,
0.020843505859375,
-0.018341064453125,
-0.016937255859375,
0.028350830078125,
-0.0102081298828125,
0.049407958984375,
0.03424072265625,
-0.0670166015625,
-0.0272064208984375,
-0.04522705078125,
... |
juny116/few_glue | 2021-08-13T05:37:37.000Z | [
"arxiv:2012.15723",
"region:us"
] | juny116 | SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard. | @article{wang2019superglue,
title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
journal={arXiv preprint arXiv:1905.00537},
year={2019}
}
Note that each SuperGLUE dataset has its own citation. Please see the source to
get the correct citation for each contained dataset. | 1 | 23 | 2022-03-02T23:29:22 | # FewGLUE_32dev
This repository contains the FewGLUE_32dev dataset, an extension of [FewGLUE](https://github.com/timoschick/fewglue) that enables NLU few-shot learning tasks to be benchmarked under a new 32-sample-dev setting. [Previous work](https://arxiv.org/abs/2012.15723) has shown that using larger development sets confers a significant advantage beyond few-shot learning. FewGLUE_32dev is built by adding few-shot dev sets of 32 examples each, randomly selected from the original/unused SuperGLUE training sets.
### Data Format
The data files follow the exact same format as [SuperGLUE task files](https://super.gluebenchmark.com/tasks).
### Structure
For each SuperGLUE task `T`, the directory `FewGLUE_32dev/T` contains the 32-sample-dev file (`dev32.jsonl`), which consists of 32 examples for few-shot validation.
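As a minimal sketch, assuming the directory layout described above and a SuperGLUE task name such as `BoolQ`, a `dev32.jsonl` file can be read line by line with the standard library:
```python
import json
from pathlib import Path

def load_dev32(task: str, root: str = "FewGLUE_32dev") -> list[dict]:
    """Read the 32-example few-shot dev split for one SuperGLUE task."""
    path = Path(root) / task / "dev32.jsonl"
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f]

examples = load_dev32("BoolQ")
print(len(examples))  # expected: 32
```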
| 847 | [
[
-0.043731689453125,
-0.0261077880859375,
0.0112457275390625,
0.00698089599609375,
-0.0007305145263671875,
0.007465362548828125,
-0.006366729736328125,
-0.027252197265625,
0.0027065277099609375,
0.01788330078125,
-0.06219482421875,
-0.0552978515625,
-0.0310974121... |
sebastiaan/test-cefr | 2021-11-30T17:15:26.000Z | [
"region:us"
] | sebastiaan | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 3 | 23 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
hackathon-pln-es/neutral-es | 2022-10-25T10:20:48.000Z | [
"task_categories:text2text-generation",
"task_categories:translation",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:es",
"region:us"
] | hackathon-pln-es | null | null | 6 | 23 | 2022-03-31T18:02:00 | ---
language:
- es
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text2text-generation
- translation
task_ids: []
pretty_name: neutralES
---
# Spanish Gender Neutralization
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Gender_equality_symbol_%28clipart%29.png" width="250"/>
</p>
Spanish is a beautiful language with many ways of referring to people; genders can be neutralized using resources already present in the language. One would say *Todas las personas asistentes* instead of *Todos los asistentes*, which is a more inclusive way of talking about people. This dataset collects a set of manually annotated examples of gendered-to-neutral Spanish transformations.
The intended use of this dataset is to train a Spanish language model to translate from gendered to neutral language, in order to produce more inclusive sentences.
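As a minimal loading sketch (assuming the dataset resolves by its Hub id; the column names are not documented in this card, so inspect them before training):
```python
from datasets import load_dataset

# Load the gendered-to-neutral examples and inspect the schema.
ds = load_dataset("hackathon-pln-es/neutral-es")
print(ds)  # shows the available splits and column names
```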
### Compiled sources
One of the major challenges was to obtain a valuable dataset suited to the gender-inclusion purpose; therefore, when building the dataset, the team opted to dedicate a considerable amount of time to building it from scratch. You can find the results here.
The data used for model training has been manually created from a compilation of sources: a series of guidelines and manuals on the usage of non-sexist language issued by the Spanish Ministry of Health, Social Services and Equality, stipulated in this linked [document](https://www.inmujeres.gob.es/servRecursos/formacion/GuiasLengNoSexista/docs/Guiaslenguajenosexista_.pdf).
**NOTE: Apart from the manually annotated samples, this dataset has been further enlarged by applying data augmentation so that a minimum number of training examples is generated.**
* [Guía para un discurso igualitario en la universidad de alicante](https://ieg.ua.es/es/documentos/normativasobreigualdad/guia-para-un-discurso-igualitario-en-la-ua.pdf)
* [Guía UC de Comunicación en Igualdad](<https://web.unican.es/unidades/igualdad/SiteAssets/igualdad/comunicacion-en-igualdad/guia%20comunicacion%20igualdad%20(web).pdf>)
* [Buenas prácticas para el tratamiento del lenguaje en igualdad](https://e-archivo.uc3m.es/handle/10016/22811)
* [Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha](https://unidadigualdad.ugr.es/page/guiialenguajeuniversitarionosexista_universidaddecastillalamancha/!)
* [Guía de Lenguaje Para el Ámbito Educativo](https://www.educacionyfp.gob.es/va/dam/jcr:8ce318fd-c8ff-4ad2-97b4-7318c27d1682/guialenguajeambitoeducativo.pdf)
* [Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén](https://www.ujaen.es/servicios/uigualdad/sites/servicio_uigualdad/files/uploads/Guia_lenguaje_no_sexista.pdf)
* [Guía de uso no sexista del vocabulario español](https://www.um.es/documents/2187255/2187763/guia-leng-no-sexista.pdf/d5b22eb9-b2e4-4f4b-82aa-8a129cdc83e3)
* [Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV](https://www.ehu.eus/documents/1734204/1884196/Guia_uso_no_sexista_EHU.pdf)
* [Guía de lenguaje no sexista UNED](http://portal.uned.es/pls/portal/docs/PAGE/UNED_MAIN/LAUNIVERSIDAD/VICERRECTORADOS/GERENCIA/OFICINA_IGUALDAD/CONCEPTOS%20BASICOS/GUIA_LENGUAJE.PDF)
* [COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO](https://cima.cantabria.es/documents/5710649/5729124/COMUNICACI%C3%93N+AMBIENTAL+CON+PERSPECTIVA+DE+G%C3%89NERO.pdf/ccc18730-53e3-35b9-731e-b4c43339254b)
* [Recomendaciones para la utilización de lenguaje no sexista](https://www.csic.es/sites/default/files/guia_para_un_uso_no_sexista_de_la_lengua_adoptada_por_csic2.pdf)
* [Estudio sobre lenguaje y contenido sexista en la Web](https://www.mujeresenred.net/IMG/pdf/Estudio_paginas_web_T-incluye_ok.pdf)
* [Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf)
## Team Members
- Fernando Velasco [(fermaat)](https://huggingface.co/fermaat)
- Cibeles Redondo [(CibelesR)](https://huggingface.co/CibelesR)
- Juan Julian Cea [(Juanju)](https://huggingface.co/Juanju)
- Magdalena Kujalowicz [(MacadellaCosta)](https://huggingface.co/MacadellaCosta)
- Javier Blasco [(javiblasco)](https://huggingface.co/javiblasco)
### Enjoy and feel free to collaborate with this dataset 🤗 | 4,356 | [
[
-0.03253173828125,
-0.033538818359375,
0.01329803466796875,
0.042388916015625,
-0.01230621337890625,
-0.01026153564453125,
0.0029621124267578125,
-0.0229339599609375,
0.022918701171875,
0.0307159423828125,
-0.040069580078125,
-0.060333251953125,
-0.0193786621093... |