| author | cardData | citation | description | disabled | downloads | gated | id | lastModified | paperswithcode_id | private | sha | siblings | tags | readme_url | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jjoverv | null | null | null | false | null | false | jjoverv/redescubrastyle | 2022-11-10T12:14:44.000Z | null | false | 399d15484ac32ea5b97b3a975ff6ba2a1ee921e6 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/jjoverv/redescubrastyle/resolve/main/README.md | ---
license: openrail
---
|
cakiki | null | null | null | false | 1 | false | cakiki/html_paths | 2022-11-10T12:28:11.000Z | null | false | adda3417bbc9cb098de689b7ff70c50abe247735 | [] | [] | https://huggingface.co/datasets/cakiki/html_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 904459341
num_examples: 32312575
download_size: 586813218
dataset_size: 904459341
---
# Dataset Card for "html_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/lua_paths | 2022-11-10T12:28:33.000Z | null | false | 4571f733649eb652dc3f5177bef1ec9d50b23f76 | [] | [] | https://huggingface.co/datasets/cakiki/lua_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 21014952
num_examples: 808034
download_size: 11839424
dataset_size: 21014952
---
# Dataset Card for "lua_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mboth | null | null | null | false | null | false | mboth/klassifizierung_waermeVersorgenHamburg | 2022-11-10T12:29:23.000Z | null | false | 35bad6df05059f02bd14aaf25a6150631340c150 | [] | [] | https://huggingface.co/datasets/mboth/klassifizierung_waermeVersorgenHamburg/resolve/main/README.md | ---
dataset_info:
features:
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Datatype
dtype: string
- name: Unit
dtype: string
- name: grundfunktion
dtype: string
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
0: Erzeugen
1: Speichern
2: Verteilen
splits:
- name: test
num_bytes: 11823.674832962139
num_examples: 45
- name: train
num_bytes: 94326.65033407572
num_examples: 359
- name: valid
num_bytes: 11823.674832962139
num_examples: 45
download_size: 36921
dataset_size: 117973.99999999999
---
# Dataset Card for "klassifizierung_waermeVersorgenHamburg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/css_paths | 2022-11-10T12:31:36.000Z | null | false | c6c5c093ae298bc26353208f8fde21b423857736 | [] | [] | https://huggingface.co/datasets/cakiki/css_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 158651499
num_examples: 5726933
download_size: 138140586
dataset_size: 158651499
---
# Dataset Card for "css_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/visual-basic_paths | 2022-11-10T12:31:50.000Z | null | false | c63e568f42042af30725cbb49a850dd5baa5f528 | [] | [] | https://huggingface.co/datasets/cakiki/visual-basic_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 5643571
num_examples: 200013
download_size: 1586937
dataset_size: 5643571
---
# Dataset Card for "visual-basic_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/sql_paths | 2022-11-10T12:32:29.000Z | null | false | a8591f85e6cd6f947bdd9363baefd5cc922951ad | [] | [] | https://huggingface.co/datasets/cakiki/sql_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 35050567
num_examples: 1267490
download_size: 23626806
dataset_size: 35050567
---
# Dataset Card for "sql_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/tex_paths | 2022-11-10T12:32:48.000Z | null | false | d0c81062b8b7d00c6beb0ef721f1c81d97ead65d | [] | [] | https://huggingface.co/datasets/cakiki/tex_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 12350897
num_examples: 448193
download_size: 6578383
dataset_size: 12350897
---
# Dataset Card for "tex_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mboth | null | null | null | false | null | false | mboth/klassifizierung_sichernHamburg | 2022-11-10T12:35:05.000Z | null | false | 03c8cf71d80b881a9dcc34a70022f09756db5315 | [] | [] | https://huggingface.co/datasets/mboth/klassifizierung_sichernHamburg/resolve/main/README.md | ---
dataset_info:
features:
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Datatype
dtype: string
- name: Unit
dtype: string
- name: grundfunktion
dtype: string
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
0: Brandmeldeanlage
1: Brandschutz
splits:
- name: test
num_bytes: 466.72727272727275
num_examples: 2
- name: train
num_bytes: 1866.909090909091
num_examples: 8
- name: valid
num_bytes: 233.36363636363637
num_examples: 1
download_size: 11350
dataset_size: 2567.0000000000005
---
# Dataset Card for "klassifizierung_sichernHamburg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/php_paths | 2022-11-10T12:40:45.000Z | null | false | 9ff5bde3da778a10f68fc440cebdf798f08e6c61 | [] | [] | https://huggingface.co/datasets/cakiki/php_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 910857017
num_examples: 34179448
download_size: 787090086
dataset_size: 910857017
---
# Dataset Card for "php_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/julia_paths | 2022-11-10T12:41:03.000Z | null | false | 69dc5d4d868e74ccbb29f887b6fdbeded3447ffd | [] | [] | https://huggingface.co/datasets/cakiki/julia_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 14862518
num_examples: 473425
download_size: 7932474
dataset_size: 14862518
---
# Dataset Card for "julia_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/assembly_paths | 2022-11-10T12:41:17.000Z | null | false | ff60a980c29a5af1c4dddbbdbc475fe6106ad698 | [] | [] | https://huggingface.co/datasets/cakiki/assembly_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 7492209
num_examples: 324343
download_size: 2131380
dataset_size: 7492209
---
# Dataset Card for "assembly_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/makefile_paths | 2022-11-10T12:41:44.000Z | null | false | a4b3aab622234840a05f4c79520adbc9a7179844 | [] | [] | https://huggingface.co/datasets/cakiki/makefile_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 28586262
num_examples: 1087444
download_size: 22517681
dataset_size: 28586262
---
# Dataset Card for "makefile_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mboth | null | null | null | false | null | false | mboth/klassifizierung_medienVersorgenHamburg | 2022-11-10T12:46:08.000Z | null | false | 0bd10ffa85c933731767e37176b8dd6c8063c079 | [] | [] | https://huggingface.co/datasets/mboth/klassifizierung_medienVersorgenHamburg/resolve/main/README.md | ---
dataset_info:
features:
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Datatype
dtype: string
- name: Unit
dtype: string
- name: grundfunktion
dtype: string
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
0: Bereitstellen
1: Verteilen
splits:
- name: train
num_bytes: 1204
num_examples: 5
download_size: 3991
dataset_size: 1204
---
# Dataset Card for "klassifizierung_medienVersorgenHamburg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mboth | null | null | null | false | null | false | mboth/klassifizierung_luftVersorgenHamburg | 2022-11-10T12:50:07.000Z | null | false | 88f5c2181ce605905f126a55dc3a31431c0eae38 | [] | [] | https://huggingface.co/datasets/mboth/klassifizierung_luftVersorgenHamburg/resolve/main/README.md | ---
dataset_info:
features:
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Datatype
dtype: string
- name: Unit
dtype: string
- name: grundfunktion
dtype: string
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
0: Bereitstellen
splits:
- name: test
num_bytes: 5709.746341463415
num_examples: 21
- name: train
num_bytes: 44590.4
num_examples: 164
- name: valid
num_bytes: 5437.8536585365855
num_examples: 20
download_size: 23061
dataset_size: 55738.00000000001
---
# Dataset Card for "klassifizierung_luftVersorgenHamburg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/javascript_paths | 2022-11-10T12:55:18.000Z | null | false | 933d86766ee35adaa6be89d23a30229113bf7f35 | [] | [] | https://huggingface.co/datasets/cakiki/javascript_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 1086652130
num_examples: 39278951
download_size: 931947481
dataset_size: 1086652130
---
# Dataset Card for "javascript_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/markdown_paths | 2022-11-10T13:02:25.000Z | null | false | 2e12d8e250ea5827ad64d8481e2dd01122c0bb91 | [] | [] | https://huggingface.co/datasets/cakiki/markdown_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 821714901
num_examples: 28965353
download_size: 663085249
dataset_size: 821714901
---
# Dataset Card for "markdown_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/batchfile_paths | 2022-11-10T13:02:40.000Z | null | false | b60d3d46f9388d56418c4f7fea1904c3cd6bc4bc | [] | [] | https://huggingface.co/datasets/cakiki/batchfile_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 11616420
num_examples: 423086
download_size: 8986923
dataset_size: 11616420
---
# Dataset Card for "batchfile_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LazyLabChina | null | null | null | false | null | false | LazyLabChina/shuimodancai | 2022-11-12T01:48:13.000Z | null | false | c52da2ae2bd53896544c8ab1ee499f8df97adeb4 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/LazyLabChina/shuimodancai/resolve/main/README.md | ---
license: afl-3.0
---
|
cakiki | null | null | null | false | 1 | false | cakiki/c_paths | 2022-11-10T13:08:59.000Z | null | false | a480c7b3807ccb6f174055ee918386bc4016975f | [] | [] | https://huggingface.co/datasets/cakiki/c_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 508253008
num_examples: 19878729
download_size: 359733499
dataset_size: 508253008
---
# Dataset Card for "c_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/ruby_paths | 2022-11-10T13:10:29.000Z | null | false | 66d6e22b5b7a0acf146d5c9bf9a89934b9012d07 | [] | [] | https://huggingface.co/datasets/cakiki/ruby_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 169345268
num_examples: 6390966
download_size: 118905787
dataset_size: 169345268
---
# Dataset Card for "ruby_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/haskell_paths | 2022-11-10T13:10:54.000Z | null | false | aa66f3d11cfbdf7c557af0a7252cb2550413770d | [] | [] | https://huggingface.co/datasets/cakiki/haskell_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 23059551
num_examples: 921236
download_size: 12139516
dataset_size: 23059551
---
# Dataset Card for "haskell_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/fortran_paths | 2022-11-10T13:11:08.000Z | null | false | bc25ad35e65d45db2445c945f47da9b4ed4fcca4 | [] | [] | https://huggingface.co/datasets/cakiki/fortran_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 5773596
num_examples: 243762
download_size: 1463437
dataset_size: 5773596
---
# Dataset Card for "fortran_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
siberspace | null | null | null | false | null | false | siberspace/gaelle2 | 2022-11-10T13:12:33.000Z | null | false | 5b7e5b14fa32bbb804ec50fab5d5b2277e81d440 | [] | [] | https://huggingface.co/datasets/siberspace/gaelle2/resolve/main/README.md | |
cakiki | null | null | null | false | 1 | false | cakiki/c-sharp_paths | 2022-11-10T13:16:06.000Z | null | false | c2e203ac9c1a0484cf21a7fd6fff2104f0031b31 | [] | [] | https://huggingface.co/datasets/cakiki/c-sharp_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 586063746
num_examples: 20539828
download_size: 439948378
dataset_size: 586063746
---
# Dataset Card for "c-sharp_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/rust_paths | 2022-11-10T13:17:18.000Z | null | false | 9a8a57728e5c5b7dd93f45e9cf93b45d0b8ab54a | [] | [] | https://huggingface.co/datasets/cakiki/rust_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 71297350
num_examples: 3087525
download_size: 49706578
dataset_size: 71297350
---
# Dataset Card for "rust_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/typescript_paths | 2022-11-10T13:22:14.000Z | null | false | f3f85f8988d13dc427f042efc6e603481d8d3a08 | [] | [] | https://huggingface.co/datasets/cakiki/typescript_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 536493166
num_examples: 19441648
download_size: 434213958
dataset_size: 536493166
---
# Dataset Card for "typescript_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/scala_paths | 2022-11-10T13:23:00.000Z | null | false | 70b7bb8cd6c36c701f62f41b0635c8124ac8336d | [] | [] | https://huggingface.co/datasets/cakiki/scala_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 68488532
num_examples: 2635793
download_size: 35187635
dataset_size: 68488532
---
# Dataset Card for "scala_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Triton100 | null | null | null | false | 4 | false | Triton100/KiryuuMichiru2 | 2022-11-10T16:02:59.000Z | null | false | a149c3a5039b193bc29a0f57dada1982400f30b2 | [] | [
"license:bsd"
] | https://huggingface.co/datasets/Triton100/KiryuuMichiru2/resolve/main/README.md | ---
license: bsd
--- |
cakiki | null | null | null | false | 1 | false | cakiki/python_paths | 2022-11-10T13:29:19.000Z | null | false | 03199d030fdc050ffa9df8a29028d3128fea03ad | [] | [] | https://huggingface.co/datasets/cakiki/python_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 636121755
num_examples: 23578465
download_size: 550836738
dataset_size: 636121755
---
# Dataset Card for "python_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/perl_paths | 2022-11-10T13:29:38.000Z | null | false | 471372c3b0255eff62320029c85ab2cd40afd8dc | [] | [] | https://huggingface.co/datasets/cakiki/perl_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 14604805
num_examples: 554602
download_size: 4964930
dataset_size: 14604805
---
# Dataset Card for "perl_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/go_paths | 2022-11-10T13:33:17.000Z | null | false | 6f7ef421b610ca5c88bbb30883740e3d040127aa | [] | [] | https://huggingface.co/datasets/cakiki/go_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 301556518
num_examples: 12078461
download_size: 219608192
dataset_size: 301556518
---
# Dataset Card for "go_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bgstud | null | null | null | false | 1 | false | bgstud/data-librispeech100 | 2022-11-10T13:39:01.000Z | null | false | 275fd28622d04fe9cba55698fc89a51fef7c5a80 | [] | [] | https://huggingface.co/datasets/bgstud/data-librispeech100/resolve/main/README.md | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: librispeech-train-100
size_categories: []
source_datasets: []
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
cakiki | null | null | null | false | 1 | false | cakiki/java_paths | 2022-11-10T13:44:12.000Z | null | false | 3b71db65661fda9f992bae1b64de5422d46b96fd | [] | [] | https://huggingface.co/datasets/cakiki/java_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 1168673674
num_examples: 43005815
download_size: 919178767
dataset_size: 1168673674
---
# Dataset Card for "java_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/powershell_paths | 2022-11-10T13:44:30.000Z | null | false | ccad17bea366c31a400121ceab9c11b60811f7f2 | [] | [] | https://huggingface.co/datasets/cakiki/powershell_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 15534114
num_examples: 521952
download_size: 7947926
dataset_size: 15534114
---
# Dataset Card for "powershell_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
boxzero | null | null | null | false | 1 | false | boxzero/testing | 2022-11-10T15:21:54.000Z | null | false | 9fabb23ada1c8a22dfda37e6e417d72afd5564a6 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/boxzero/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
kartucha | null | null | null | false | null | false | kartucha/kartucha | 2022-11-10T15:24:41.000Z | null | false | dde1418cf31cd68c426c5c6a9da63125f45469bc | [] | [] | https://huggingface.co/datasets/kartucha/kartucha/resolve/main/README.md | |
ChristophSchuhmann | null | null | null | false | null | false | ChristophSchuhmann/imagenet1k-by-SD-V1.4 | 2022-11-10T15:33:02.000Z | null | false | fa67acb659b8b5a3d212c786a1f9ce545d20ee71 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/ChristophSchuhmann/imagenet1k-by-SD-V1.4/resolve/main/README.md | ---
license: apache-2.0
---
|
Aubing-Hou | null | null | null | false | null | false | Aubing-Hou/KillersofThreeKingdoms | 2022-11-10T15:50:54.000Z | null | false | 76e45f52830d51bef8c84e6bb4319c5b9c45c012 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Aubing-Hou/KillersofThreeKingdoms/resolve/main/README.md | ---
license: openrail
---
|
rexelecaps | null | null | null | false | null | false | rexelecaps/Dateset | 2022-11-10T15:59:26.000Z | null | false | deef869dedc5fc3fdf55dc905e8b4b27e379eb85 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/rexelecaps/Dateset/resolve/main/README.md | ---
license: unknown
---
|
bgstud | null | null | null | false | 3 | false | bgstud/libri | 2022-11-10T20:03:23.000Z | acronym-identification | false | e1e61ab74cb5165978b478962f0432a3209e194f | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:found",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:token-classification",
"task_ids:token-classification-other-acronym-identification"
] | https://huggingface.co/datasets/bgstud/libri/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- token-classification-other-acronym-identification
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
nateraw | null | null | null | false | null | false | nateraw/video-demo | 2022-11-10T21:16:49.000Z | null | false | 8e566ce5b7c2fd05f61fb2b2fe4f2520706607d1 | [] | [
"license:mit"
] | https://huggingface.co/datasets/nateraw/video-demo/resolve/main/README.md | ---
license: mit
---
|
JohnnyBoy00 | null | null | null | false | 1 | false | JohnnyBoy00/saf_micro_job_german | 2022-11-16T17:50:27.000Z | null | false | 9424b75f2f86ba2963ff002b31ebae6490c7ea38 | [] | [
"annotations_creators:expert-generated",
"language:de",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:short answer feedback",
"tags:micro job",
"task_categories:text2text-generation"
] | https://huggingface.co/datasets/JohnnyBoy00/saf_micro_job_german/resolve/main/README.md | ---
pretty_name: SAF - Micro Job - German
annotations_creators:
- expert-generated
language:
- de
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- short answer feedback
- micro job
task_categories:
- text2text-generation
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: reference_answer
dtype: string
- name: provided_answer
dtype: string
- name: answer_feedback
dtype: string
- name: verification_feedback
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 885526
num_examples: 1226
- name: validation
num_bytes: 217946
num_examples: 308
- name: test_unseen_answers
num_bytes: 198832
num_examples: 271
- name: test_unseen_questions
num_bytes: 545524
num_examples: 602
download_size: 274603
dataset_size: 1847828
---
# Dataset Card for "saf_micro_job_german"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotation process](#annotation-process)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022)
### Dataset Summary
The Short Answer Feedback (SAF) dataset is a short answer dataset introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022) to remedy the lack of content-focused feedback datasets. This version of the dataset contains 8 German questions used in micro-job training on the crowd-worker platform appJobber, while the original dataset presented in the paper comprises an assortment of both English and German short answer questions (with reference answers). Please refer to the [saf_communication_networks_english](https://huggingface.co/datasets/JohnnyBoy00/saf_communication_networks_english) dataset for the English subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at [saf_legal_domain_german](https://huggingface.co/datasets/JohnnyBoy00/saf_legal_domain_german).
### Supported Tasks and Leaderboards
- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
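As an illustration, one record could be flattened into a source/target pair for such a Text2Text model roughly as follows; the field names come from the dataset, but the `Frage:`/`Antwort:` prompt template is an assumption made for this sketch, not a format prescribed by the dataset or the paper:

```python
def to_text2text(example: dict) -> dict:
    # Hypothetical seq2seq template: the keys are real dataset fields,
    # but the prompt wording is an illustrative assumption.
    source = (
        f"Frage: {example['question']} "
        f"Antwort: {example['provided_answer']}"
    )
    target = f"{example['verification_feedback']}: {example['answer_feedback']}"
    return {"source": source, "target": target}


example = {
    "question": "Frage 1: Ist das eine Frage?",
    "provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
    "answer_feedback": "Korrekt!",
    "verification_feedback": "Correct",
}
print(to_text2text(example)["target"])  # Correct: Korrekt!
```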
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in German.
## Dataset Structure
### Data Instances
An example of an entry of the training split looks as follows.
```
{
"id": "1",
"question": "Frage 1: Ist das eine Frage?",
"reference_answer": "Ja, das ist eine Frage.",
"provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
"answer_feedback": "Korrekt!",
"verification_feedback": "Correct",
"score": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answers.
- `verification_feedback`: a `string` feature representing an automatic labeling of the score. It can be `Correct` (`score` = 1), `Incorrect` (`score` = 0) or `Partially correct` (all intermediate scores).
- `score`: a `float64` feature (between 0 and 1) representing the score given to the provided answer.
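The correspondence between `score` and `verification_feedback` described above can be sketched as a small helper:

```python
def verification_label(score: float) -> str:
    # Mirrors the labeling rule stated above: 1 -> Correct,
    # 0 -> Incorrect, any intermediate score -> Partially correct.
    if score == 1.0:
        return "Correct"
    if score == 0.0:
        return "Incorrect"
    return "Partially correct"


print(verification_label(0.5))  # Partially correct
```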
### Data Splits
The dataset is comprised of four data splits.
- `train`: used for training, contains a set of questions and the provided answers to them.
- `validation`: used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).
- `test_unseen_answers`: used for testing, contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing, contains unseen questions that do not appear in the `train` split.
| Split |train|validation|test_unseen_answers|test_unseen_questions|
|-------------------|----:|---------:|------------------:|--------------------:|
|Number of instances| 1226| 308| 271| 602|
## Dataset Creation
### Annotation Process
Two experienced appJobber employees were selected to evaluate the crowd-worker platform’s answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and solving possible disagreements.
## Additional Information
### Citation Information
```
@inproceedings{filighera-etal-2022-answer,
title = "Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset",
author = "Filighera, Anna and
Parihar, Siddharth and
Steuer, Tim and
Meuser, Tobias and
Ochs, Sebastian",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.587",
doi = "10.18653/v1/2022.acl-long.587",
pages = "8577--8591",
}
```
### Contributions
Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset. |
JohnnyBoy00 | null | null | null | false | null | false | JohnnyBoy00/saf_communication_networks_english | 2022-11-16T17:50:49.000Z | null | false | bca9e581de553fd11761ecd0923978ab0cc1e085 | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:short answer feedback",
"tags:communication networks",
"task_categories:text2text-generation"
] | https://huggingface.co/datasets/JohnnyBoy00/saf_communication_networks_english/resolve/main/README.md | ---
pretty_name: SAF - Communication Networks - English
annotations_creators:
- expert-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- short answer feedback
- communication networks
task_categories:
- text2text-generation
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: reference_answer
dtype: string
- name: provided_answer
dtype: string
- name: answer_feedback
dtype: string
- name: verification_feedback
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 2363828
num_examples: 1700
- name: validation
num_bytes: 592869
num_examples: 427
- name: test_unseen_answers
num_bytes: 515669
num_examples: 375
- name: test_unseen_questions
num_bytes: 777945
num_examples: 479
download_size: 941169
dataset_size: 4250311
---
# Dataset Card for "saf_communication_networks_english"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotation process](#annotation-process)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022)
### Dataset Summary
Short Answer Feedback (SAF) dataset is a short answer dataset introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022) as a way to remedy the lack of content-focused feedback datasets. This version of the dataset contains 22 English questions covering a range of college-level communication networks topics - while the original dataset presented in the paper is comprised of an assortment of both English and German short answer questions (with reference answers). Please refer to the [saf_micro_job_german](https://huggingface.co/datasets/JohnnyBoy00/saf_micro_job_german) dataset to examine the German subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at [saf_legal_domain_german](https://huggingface.co/datasets/JohnnyBoy00/saf_legal_domain_german).
### Supported Tasks and Leaderboards
- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in English.
## Dataset Structure
### Data Instances
An example of an entry of the training split looks as follows.
```
{
"id": "1",
"question": "Is this a question?",
"reference_answer": "Yes, that is a question.",
"provided_answer": "I'm certain this is a question.",
"answer_feedback": "The response is correct.",
"verification_feedback": "Correct",
"score": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answers.
- `verification_feedback`: a `string` feature representing an automatic labeling of the score. It can be `Correct` (`score` = 1), `Incorrect` (`score` = 0) or `Partially correct` (all intermediate scores).
- `score`: a `float64` feature (between 0 and 1) representing the score given to the provided answer.
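For instance, the distribution of `verification_feedback` labels over a split can be inspected with a small helper; this is a sketch that assumes the examples are available as plain dicts:

```python
from collections import Counter


def label_distribution(examples: list[dict]) -> Counter:
    # Tally verification_feedback labels over a list of examples,
    # e.g. one split of the dataset loaded as plain dicts.
    return Counter(ex["verification_feedback"] for ex in examples)


examples = [
    {"verification_feedback": "Correct"},
    {"verification_feedback": "Correct"},
    {"verification_feedback": "Partially correct"},
]
print(label_distribution(examples))  # Counter({'Correct': 2, 'Partially correct': 1})
```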
### Data Splits
The dataset is comprised of four data splits.
- `train`: used for training, contains a set of questions and the provided answers to them.
- `validation`: used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).
- `test_unseen_answers`: used for testing, contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing, contains unseen questions that do not appear in the `train` split.
| Split |train|validation|test_unseen_answers|test_unseen_questions|
|-------------------|----:|---------:|------------------:|--------------------:|
|Number of instances| 1700| 427| 375| 479|
## Dataset Creation
### Annotation Process
Two graduate students who had completed the communication networks course were selected to evaluate the answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and solving possible disagreements.
## Additional Information
### Citation Information
```
@inproceedings{filighera-etal-2022-answer,
title = "Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset",
author = "Filighera, Anna and
Parihar, Siddharth and
Steuer, Tim and
Meuser, Tobias and
Ochs, Sebastian",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.587",
doi = "10.18653/v1/2022.acl-long.587",
pages = "8577--8591",
}
```
### Contributions
Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset. |
bishalbaaniya | null | null | null | false | null | false | bishalbaaniya/my_en_2 | 2022-11-10T22:56:24.000Z | null | false | 8024654c829eb2cc7d6b5d64ebc2fbf8c42faf27 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/bishalbaaniya/my_en_2/resolve/main/README.md | ---
license: apache-2.0
---
|
FAERS-PubMed | null | null | null | false | 27 | false | FAERS-PubMed/seriousness_prediction | 2022-11-11T00:40:13.000Z | null | false | 58c31c30a25e06ae48a0c227691fddbad8f2b0b6 | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/seriousness_prediction/resolve/main/README.md | ---
dataset_info:
features:
- name: article_articletitle
dtype: string
- name: article_pmid
dtype: string
- name: article_abstract
dtype: string
- name: article_authorlist
list:
- name: CollectiveName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: LastName
dtype: string
- name: Suffix
dtype: string
- name: article_journalinfo
dtype: string
- name: article_datecompleted
dtype: string
- name: article_daterevised
dtype: string
- name: article_pubmed_filename
dtype: string
- name: label
dtype:
class_label:
names:
0: '0'
1: '1'
2: '2'
splits:
- name: test
num_bytes: 185902645
num_examples: 103646
- name: train
num_bytes: 774529245
num_examples: 483665
- name: validation
num_bytes: 108410693
num_examples: 65856
download_size: 480429160
dataset_size: 1068842583
---
# Dataset Card for "seriousness_prediction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Genius1237 | null | null | null | false | null | false | Genius1237/TyDiP | 2022-10-22T23:45:41.000Z | null | false | 3e7c7a6479214d2c514765952c11020dee572d81 | [] | [] | https://huggingface.co/datasets/Genius1237/TyDiP/resolve/main/README.md | # TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages
This repo contains the code and data for the EMNLP 2022 Findings paper *TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages*, which can be found [here]().

## Data
The TyDiP dataset is licensed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
The `data` folder contains the different files we release as part of the TyDiP dataset. The TyDiP dataset comprises an English train set and an English test set that are adapted from the Stanford Politeness Corpus, and test data in 9 more languages (Hindi, Korean, Spanish, Tamil, French, Vietnamese, Russian, Afrikaans, Hungarian) that we annotated.
```
data/
├── all
├── binary
└── unlabelled_train_sets
```
`data/all` consists of the complete train and test sets.
`data/binary` is a filtered version of the above containing only sentences from the top and bottom 25th percentiles of scores. This is the data we used for training and evaluation in the paper.
`data/unlabelled_train_sets`
If you use the English train or test data, please cite the Stanford Politeness Dataset
```
@inproceedings{danescu-niculescu-mizil-etal-2013-computational,
title = "A computational approach to politeness with application to social factors",
author = "Danescu-Niculescu-Mizil, Cristian and
Sudhof, Moritz and
Jurafsky, Dan and
Leskovec, Jure and
Potts, Christopher",
booktitle = "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P13-1025",
pages = "250--259",
}
```
If you use the test data from the 9 target languages, please cite our paper
```
@inproceedings{srinivasan-choi-2022-tydip,
title = "TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages",
author = "Srinivasan, Anirudh and
Choi, Eunsol",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
}
```
## Code
`politeness_regressor.py` is used for training and evaluation of transformer models.
To train a model
```
python politeness_regressor.py --train_file data/binary/en_train_binary.csv --test_file data/binary/en_test_binary.csv --model_save_location model.pt --pretrained_model xlm-roberta-large --gpus 1 --batch_size 4 --accumulate_grad_batches 8 --max_epochs 5 --checkpoint_callback False --logger False --precision 16 --train --test --binary --learning_rate 5e-6
```
To test this trained model on $lang
```
python politeness_regressor.py --test_file data/binary/${lang}_test_binary.csv --load_model model.pt --gpus 1 --batch_size 32 --test --binary
```
## Politeness Strategies
`strategies` contains the processed strategy lexicon for different languages. `strategies/learnt_strategies.xlsx` contains the human-edited strategies for 4 languages.
## Annotation Interface
`annotation.html` contains the UI used for conducting data annotation |
Tarssio | null | null | null | false | null | false | Tarssio/autotrain-data-nlp_text | 2022-11-14T08:54:58.000Z | null | false | 4c235a4480550ea2589c14a8cb825a97803dd8b0 | [] | [
"task_categories:text-classification"
] | https://huggingface.co/datasets/Tarssio/autotrain-data-nlp_text/resolve/main/README.md | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: nlp_text
## Dataset Description
This dataset has been automatically processed by AutoTrain for project nlp_text.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "isso pq ele se considera um enviado de Deus!!!! esse cara \u00e9 pat\u00e9tico!",
"target": 1
},
{
"text": "#forabolsonaro #forabolsominionsestupidos #lulapresopolitico #amazoniaficabolsonarosai #amazoniafica",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 5600 |
| valid | 1400 |
|
Corran | null | null | null | false | 3 | false | Corran/SemanticScholarAbstracts | 2022-11-11T03:47:32.000Z | null | false | c08c13295ebd8111ab96879ceba43b99ec28afdb | [] | [] | https://huggingface.co/datasets/Corran/SemanticScholarAbstracts/resolve/main/README.md | ---
dataset_info:
features:
- name: corpusid
dtype: int64
- name: openaccessinfo
struct:
- name: externalids
struct:
- name: ACL
dtype: string
- name: ArXiv
dtype: string
- name: DOI
dtype: string
- name: MAG
dtype: string
- name: PubMedCentral
dtype: string
- name: license
dtype: string
- name: status
dtype: string
- name: url
dtype: string
- name: abstract
dtype: string
- name: updated
dtype: string
splits:
- name: train
num_bytes: 59461773143.463005
num_examples: 48314588
download_size: 37596463269
dataset_size: 59461773143.463005
---
# Dataset Card for "SemanticScholarAbstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
claytonsamples | null | null | null | false | 19 | false | claytonsamples/lab | 2022-11-11T02:58:25.000Z | null | false | 8491055a606b9b1ec690b39e36fbdf1fddb4c4bc | [] | [] | https://huggingface.co/datasets/claytonsamples/lab/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: kimwipes
1: nitrile gloves
2: petri dish
3: serological pipette
splits:
- name: train
num_bytes: 22915125.09
num_examples: 1415
download_size: 19042401
dataset_size: 22915125.09
---
# Dataset Card for "lab"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jinhybr | null | null | null | false | 15 | false | jinhybr/rvl_cdip_400_train_val_test | 2022-11-11T15:58:02.000Z | null | false | 13462a42e3375e80a9bd46a64c58e1bcaba77874 | [] | [] | https://huggingface.co/datasets/jinhybr/rvl_cdip_400_train_val_test/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: letter
1: form
2: email
3: handwritten
4: advertisement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 197669272.0
num_examples: 1600
- name: train
num_bytes: 781258280.0
num_examples: 6400
- name: validation
num_bytes: 191125740.0
num_examples: 1600
download_size: 1101475597
dataset_size: 1170053292.0
---
# Dataset Card for "rvl_cdip_400_train_val_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Format
````
DatasetDict({
test: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 1600
})
train: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 6400
})
validation: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 1600
})
})
```` |
armanc | null | null | null | false | 212 | false | armanc/pubmed-rct20k | 2022-11-11T08:23:24.000Z | null | false | 091aec1e2384a20b2b36eb96177755ca13dd0b42 | [] | [] | https://huggingface.co/datasets/armanc/pubmed-rct20k/resolve/main/README.md | The small 20K version of the Pubmed-RCT dataset by Dernoncourt et al (2017).
```
@article{dernoncourt2017pubmed,
title={Pubmed 200k rct: a dataset for sequential sentence classification in medical abstracts},
author={Dernoncourt, Franck and Lee, Ji Young},
journal={arXiv preprint arXiv:1710.06071},
year={2017}
}
```
Note: This is the cleaned-up version by Jin and Szolovits (2018).
```
@article{jin2018hierarchical,
title={Hierarchical neural networks for sequential sentence classification in medical scientific abstracts},
author={Jin, Di and Szolovits, Peter},
journal={arXiv preprint arXiv:1808.06161},
year={2018}
}
``` |
armanc | null | null | null | false | 195 | false | armanc/ScienceQA | 2022-11-11T08:34:35.000Z | null | false | d9327e0fa300d66c0c577330a624a39626f1192e | [] | [] | https://huggingface.co/datasets/armanc/ScienceQA/resolve/main/README.md | This is the ScienceQA dataset by Saikh et al. (2022).
```
@article{10.1007/s00799-022-00329-y,
author = {Saikh, Tanik and Ghosal, Tirthankar and Mittal, Amish and Ekbal, Asif and Bhattacharyya, Pushpak},
title = {ScienceQA: A Novel Resource for Question Answering on Scholarly Articles},
year = {2022},
journal = {Int. J. Digit. Libr.},
month = {sep}
}
``` |
joddy | null | null | null | false | null | false | joddy/caption_1000_w600h600VG_dataset | 2022-11-11T05:19:36.000Z | null | false | 972da999ff7eaf78fc0503ab6e6f50b5f7702932 | [] | [] | https://huggingface.co/datasets/joddy/caption_1000_w600h600VG_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 119417305.0
num_examples: 833
download_size: 99672355
dataset_size: 119417305.0
---
# Dataset Card for "caption_1000_w600h600VG_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rifkiaputri | null | null | null | false | 1 | false | rifkiaputri/idk-mrc | 2022-11-14T05:01:05.000Z | null | false | d12e34d0b7bc68903cf41e44cd640b89d0beb3d4 | [] | [
"arxiv:2210.13778",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language:id",
"language_creators:machine-generated",
"language_creators:expert-generated",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|tydiqa",
"task_categories:question-answering",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/rifkiaputri/idk-mrc/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language:
- id
language_creators:
- machine-generated
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: IDK-MRC
size_categories:
- 1K<n<10K
source_datasets:
- extended|tydiqa
tags: []
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for IDK-MRC
## Dataset Description
- **Repository:** [rifkiaputri/IDK-MRC](https://github.com/rifkiaputri/IDK-MRC)
- **Paper:** [arXiv](https://arxiv.org/abs/2210.13778)
- **Point of Contact:** [rifkiaputri](https://github.com/rifkiaputri)
### Dataset Summary
I(n)dontKnow-MRC (IDK-MRC) is an Indonesian Machine Reading Comprehension dataset that covers answerable and unanswerable questions. Building on the existing answerable questions in TyDiQA, the new unanswerable questions in IDK-MRC are generated using a question generation model and human-written questions. Each paragraph in the dataset has a set of answerable and unanswerable questions with the corresponding answers.
### Supported Tasks
IDK-MRC is mainly intended to train Machine Reading Comprehension or extractive QA models.
### Languages
Indonesian
## Dataset Structure
### Data Instances
```
{
"context": "Para ilmuwan menduga bahwa megalodon terlihat seperti hiu putih yang lebih kekar, walaupun hiu ini juga mungkin tampak seperti hiu raksasa (Cetorhinus maximus) atau hiu harimau-pasir (Carcharias taurus). Hewan ini dianggap sebagai salah satu predator terbesar dan terkuat yang pernah ada, dan fosil-fosilnya sendiri menunjukkan bahwa panjang maksimal hiu raksasa ini mencapai 18 m, sementara rata-rata panjangnya berkisar pada angka 10,5 m. Rahangnya yang besar memiliki kekuatan gigitan antara 110.000 hingga 180.000 newton. Gigi mereka tebal dan kuat, dan telah berevolusi untuk menangkap mangsa dan meremukkan tulang.",
"qas":
[
{
"id": "indonesian--6040202845759439489-1",
"is_impossible": false,
"question": "Apakah jenis hiu terbesar di dunia ?",
"answers":
[
{
"text": "megalodon",
"answer_start": 27
}
]
},
{
"id": "indonesian-0426116372962619813-unans-h-2",
"is_impossible": true,
"question": "Apakah jenis hiu terkecil di dunia?",
"answers":
[]
},
{
"id": "indonesian-2493757035872656854-unans-h-2",
"is_impossible": true,
"question": "Apakah jenis hiu betina terbesar di dunia?",
"answers":
[]
}
]
}
```
### Data Fields
Each instance has several fields:
- `context`: context passage/paragraph as a string
- `qas`: list of questions related to the `context`
- `id`: question ID as a string
- `is_impossible`: whether the question is unanswerable (impossible to answer) or not as a boolean
- `question`: question as a string
- `answers`: list of answers
- `text`: answer as a string
- `answer_start`: answer start index as an integer
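Given the nested structure above, answerable and unanswerable questions can be tallied with a short helper; this is a sketch over instances represented as plain dicts shaped like the example above:

```python
def count_questions(instances: list[dict]) -> dict:
    # Tally answerable vs. unanswerable questions across instances,
    # using the is_impossible flag described above.
    counts = {"answerable": 0, "unanswerable": 0}
    for instance in instances:
        for qa in instance["qas"]:
            key = "unanswerable" if qa["is_impossible"] else "answerable"
            counts[key] += 1
    return counts


sample = [{"qas": [
    {"is_impossible": False},
    {"is_impossible": True},
    {"is_impossible": True},
]}]
print(count_questions(sample))  # {'answerable': 1, 'unanswerable': 2}
```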
### Data Splits
- `train`: 9,332 (5,042 answerable, 4,290 unanswerable)
- `valid`: 764 (382 answerable, 382 unanswerable)
- `test`: 844 (422 answerable, 422 unanswerable)
## Dataset Creation
### Curation Rationale
The IDK-MRC dataset is built on the existing paragraphs and answerable questions (ans) in TyDiQA-GoldP (Clark et al., 2020). The new unanswerable questions are automatically generated using a combination of the mT5 (Xue et al., 2021) and XLM-R (Conneau et al., 2020) models, and are then manually verified by human annotators (filtered ans and filtered unans). We also asked the annotators to manually write additional unanswerable questions, as described in §3.3 of the paper (additional unans). Each paragraph in the final dataset has a set of filtered ans, filtered unans, and additional unans questions.
### Annotations
#### Annotation process
In our dataset collection pipeline, the annotators are asked to validate the model-generated unanswerable questions and to write new additional unanswerable questions.
#### Who are the annotators?
We recruited four annotators with 2+ years of experience in Indonesian NLP annotation via direct recruitment. All of them are Indonesian native speakers who reside in Indonesia (Java Island) and fall under the 18–34 age category. We set the payment to around $7.5 per hour. Given the annotators’ demographics, we ensure that the payment is above the minimum wage rate (as of December 2021). All annotators have also signed the consent form and agreed to participate in this project.
## Considerations for Using the Data
The paragraphs and answerable questions that we utilized to build IDK-MRC dataset are taken from Indonesian subset of TyDiQA-GoldP dataset (Clark et al., 2020), which originates from Wikipedia articles. Since those articles are written from a neutral point of view, the risk of harmful content is minimal. Also, all model-generated questions in our dataset have been validated by human annotators to eliminate the risk of harmful questions. During the manual question generation process, the annotators are also encouraged to avoid producing possibly offensive questions.
Even so, we argue that further assessment is needed before using our dataset and models in real-world applications. This measurement is especially required for the pre-trained language models used in our experiments, namely mT5 (Xue et al., 2021), IndoBERT (Wilie et al., 2020), mBERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020). These language models are mostly pre-trained on the common-crawl dataset, which may contain harmful biases or stereotypes.
## Additional Information
### Licensing Information
CC BY-SA 4.0
### Citation Information
```bibtex
@misc{putri2022idk,
doi = {10.48550/ARXIV.2210.13778},
url = {https://arxiv.org/abs/2210.13778},
author = {Putri, Rifki Afina and Oh, Alice},
title = {IDK-MRC: Unanswerable Questions for Indonesian Machine Reading Comprehension},
publisher = {arXiv},
year = {2022},
}
```
|
sdotmac | null | null | null | false | 42 | false | sdotmac/bayc | 2022-11-12T05:19:59.000Z | null | false | 22d6519d033e3b433daadb49b7fd258dc8c9d3e3 | [] | [] | https://huggingface.co/datasets/sdotmac/bayc/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 381887922.0
num_examples: 10000
download_size: 378097332
dataset_size: 381887922.0
---
# Dataset Card for "bayc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bgstud | null | null | null | false | 6 | false | bgstud/libri-whisper-raw | 2022-11-11T10:12:24.000Z | acronym-identification | false | 94c29b56186e07b267d8ae2610e94e7c8642048d | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:found",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:token-classification",
"task_ids:token-classification-other-acronym-identification"
] | https://huggingface.co/datasets/bgstud/libri-whisper-raw/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- token-classification-other-acronym-identification
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
lmqg | null | TBA | Question & answer generation dataset based on [TweetQA](https://huggingface.co/datasets/tweet_qa). | false | 932 | false | lmqg/qag_tweetqa | 2022-11-12T07:16:52.000Z | null | false | 959c21056aa310c50d30fee5999d123519e7e486 | [] | [
"license:cc-by-sa-4.0",
"language:en",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:tweet_qa",
"task_categories:text-generation",
"task_ids:language-modeling",
"tags:question-generation"
] | https://huggingface.co/datasets/lmqg/qag_tweetqa/resolve/main/README.md | ---
license: cc-by-sa-4.0
pretty_name: TweetQA for question generation
language: en
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: tweet_qa
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_tweetqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [TBA](TBA)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the [tweet_qa](https://huggingface.co/datasets/tweet_qa). The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset can be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "I would hope that Phylicia Rashad would apologize now that @missjillscott has! You cannot discount 30 victims who come with similar stories.— JDWhitner (@JDWhitner) July 7, 2015",
"questions": [ "what should phylicia rashad do now?", "how many victims have come forward?" ],
"answers": [ "apologize", "30" ],
"questions_answers": "Q: what should phylicia rashad do now?, A: apologize Q: how many victims have come forward?, A: 30"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
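The `questions_answers` field flattens the parallel `questions`/`answers` lists into a single string of `Q: <question>, A: <answer>` pairs. A minimal sketch for splitting it back apart, assuming that exact formatting (`parse_qa_pairs` is an illustrative helper, not part of the dataset):

```python
def parse_qa_pairs(flat: str):
    """Split a flattened 'Q: <question>, A: <answer> Q: ...' string
    back into parallel question/answer lists (illustrative sketch)."""
    questions, answers = [], []
    for chunk in flat.split("Q: ")[1:]:  # drop the empty prefix before the first pair
        question, _, answer = chunk.partition(", A: ")
        questions.append(question)
        answers.append(answer.strip())
    return questions, answers
```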
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|4536 | 583| 583|
## Citation Information
```
TBA
``` |
Trullser | null | null | null | false | 1 | false | Trullser/Datasetfortrain | 2022-11-11T13:25:08.000Z | null | false | bc4d33ef6c32be2053f5ff60a7768bd82f4b66e5 | [] | [
"license:other"
] | https://huggingface.co/datasets/Trullser/Datasetfortrain/resolve/main/README.md | ---
license: other
---
|
MartinMu | null | null | null | false | null | false | MartinMu/StandardDifusion | 2022-11-11T16:02:28.000Z | null | false | 8742dcd8ff947a290cd4ae6fab8384efd3b414c1 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/MartinMu/StandardDifusion/resolve/main/README.md | ---
license: openrail
---
|
Yubing | null | null | null | false | null | false | Yubing/standardBB | 2022-11-11T14:10:16.000Z | null | false | 8cf5f918e10ac5cc98e8ba9bb962e6994c19eb43 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Yubing/standardBB/resolve/main/README.md | ---
license: openrail
---
|
assq | null | null | null | false | null | false | assq/standard1 | 2022-11-11T14:11:23.000Z | null | false | 4ef68edef4dfb32234961dc0a95f07af7562baaa | [] | [
"license:openrail"
] | https://huggingface.co/datasets/assq/standard1/resolve/main/README.md | ---
license: openrail
---
|
aemska | null | null | null | false | null | false | aemska/stuhl | 2022-11-11T14:12:36.000Z | null | false | 421ab444b7496ceadc2d18876d53e471131cbca0 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/aemska/stuhl/resolve/main/README.md | ---
license: openrail
---
|
lmqg | null | TBA | Question & answer generation dataset based on SQuAD. | false | 6 | false | lmqg/qag_squad | 2022-11-15T15:29:16.000Z | null | false | 3fea0ff90b4fbae698700f283277cad633388c02 | [] | [
"license:cc-by-sa-4.0",
"language:en",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:tweet_qa",
"task_categories:text-generation",
"task_ids:language-modeling",
"tags:question-generation"
] | https://huggingface.co/datasets/lmqg/qag_squad/resolve/main/README.md | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: en
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: tweet_qa
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_squad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [TBA](TBA)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on SQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset can be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "\"4 Minutes\" was released as the album's lead single and peaked at number three on the Billboard Hot 100. It was Madonna's 37th top-ten hit on the chart—it pushed Madonna past Elvis Presley as the artist with the most top-ten hits. In the UK she retained her record for the most number-one singles for a female artist; \"4 Minutes\" becoming her thirteenth. At the 23rd Japan Gold Disc Awards, Madonna received her fifth Artist of the Year trophy from Recording Industry Association of Japan, the most for any artist. To further promote the album, Madonna embarked on the Sticky & Sweet Tour; her first major venture with Live Nation. With a gross of $280 million, it became the highest-grossing tour by a solo artist then, surpassing the previous record Madonna set with the Confessions Tour; it was later surpassed by Roger Waters' The Wall Live. It was extended to the next year, adding new European dates, and after it ended, the total gross was $408 million.",
"questions": [
"Which single was released as the album's lead single?",
"Madonna surpassed which artist with the most top-ten hits?",
"4 minutes became Madonna's which number one single in the UK?",
"What is the name of the first tour with Live Nation?",
"How much did Stick and Sweet Tour grossed?"
],
"answers": [
"4 Minutes",
"Elvis Presley",
"thirteenth",
"Sticky & Sweet Tour",
"$280 million,"
],
"questions_answers": "question: Which single was released as the album's lead single?, answer: 4 Minutes | question: Madonna surpassed which artist with the most top-ten hits?, answer: Elvis Presley | question: 4 minutes became Madonna's which number one single in the UK?, answer: thirteenth | question: What is the name of the first tour with Live Nation?, answer: Sticky & Sweet Tour | question: How much did Stick and Sweet Tour grossed?, answer: $280 million,"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
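Here the `questions_answers` field joins `question: <q>, answer: <a>` pairs with ` | `. A minimal sketch for recovering the parallel lists, assuming that exact formatting (`parse_questions_answers` is an illustrative helper, not part of the dataset):

```python
def parse_questions_answers(flat: str):
    """Split a flattened 'question: ..., answer: ... | question: ...'
    string back into parallel question/answer lists (illustrative sketch)."""
    questions, answers = [], []
    for pair in flat.split(" | "):
        question, _, answer = pair.partition(", answer: ")
        questions.append(question.removeprefix("question: "))
        answers.append(answer)
    return questions, answers
```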
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|16462| 2067 | 2429|
## Citation Information
```
TBA
``` |
Anas-Uddin | null | null | null | false | null | false | Anas-Uddin/Anas-Uddin | 2022-11-11T14:18:43.000Z | null | false | 34e0afd3940437fcc80958ac2fd80e72f964e7b4 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Anas-Uddin/Anas-Uddin/resolve/main/README.md | ---
license: openrail
---
|
fdsghfdh | null | null | null | false | null | false | fdsghfdh/bjbhk | 2022-11-11T14:19:31.000Z | null | false | 6d62363678647455e3cea5039c1d8a6593e2bbf6 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/fdsghfdh/bjbhk/resolve/main/README.md | ---
license: openrail
---
|
Yubing | null | null | null | false | null | false | Yubing/dogs | 2022-11-11T14:54:31.000Z | null | false | 628487d37579276a90afaac4854807b1ed1a1252 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Yubing/dogs/resolve/main/README.md | ---
license: openrail
---
|
NiuDaVinci | null | null | null | false | null | false | NiuDaVinci/niu-davinci | 2022-11-11T17:01:29.000Z | null | false | 63a4bc8a5d87894a886edabbc67257b4ad7213d8 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/NiuDaVinci/niu-davinci/resolve/main/README.md | ---
license: afl-3.0
---
|
zhangxinran | null | null | null | false | 58 | false | zhangxinran/lolita-dress-CHIN | 2022-11-11T22:34:20.000Z | null | false | a50653db5bd9fcf01aa163087e2974ba1388f8da | [] | [] | https://huggingface.co/datasets/zhangxinran/lolita-dress-CHIN/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 704987635.0
num_examples: 993
download_size: 701091143
dataset_size: 704987635.0
---
# Dataset Card for "lolita-dress-CHIN"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigcode | null | null | null | false | 15 | false | bigcode/the-stack-subset-py-js-java-450k | 2022-11-11T20:27:07.000Z | null | false | d7eb252f5386371ecd0ed64cb92deca3833a0a6b | [] | [] | https://huggingface.co/datasets/bigcode/the-stack-subset-py-js-java-450k/resolve/main/README.md | ---
dataset_info:
features:
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: licenses
sequence: string
- name: repository_name
dtype: string
- name: path
dtype: string
- name: size
dtype: int64
- name: lang
dtype: string
splits:
- name: train
num_bytes: 2572283937.691812
num_examples: 450000
download_size: 928888427
dataset_size: 2572283937.691812
---
# Dataset Card for "the-stack-subset-py-js-java-450k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dominguesm | null | null | null | false | 4 | false | dominguesm/restore-punctuation-ptbr-dataset | 2022-11-11T19:18:00.000Z | null | false | 299b521fd404b5e4b742666d7e0affa17a3a2171 | [] | [
"language:pt",
"source_datasets:extended|wiki_lingua",
"license:cc-by-3.0"
] | https://huggingface.co/datasets/dominguesm/restore-punctuation-ptbr-dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
0: OU
1: OO
2: .O
3: '!O'
4: ',O'
5: .U
6: '!U'
7: ',U'
8: :O
9: ;O
10: :U
11: '''O'
12: -O
13: ?O
14: ?U
splits:
- name: train
num_bytes: 383051099.2372526
num_examples: 22514
- name: test
num_bytes: 47894147.83480795
num_examples: 2815
- name: valid
num_bytes: 47877133.92793945
num_examples: 2814
download_size: 69187312
dataset_size: 478822381.0
pretty_name: wikilingua-ptbr-for-restore-punctuation
language:
- pt
source_datasets:
- extended|wiki_lingua
license:
- cc-by-3.0
---
# "restore-punctuation-ptbr-dataset"
The [`wiki_lingua`](https://huggingface.co/datasets/wiki_lingua) dataset, prepared for punctuation restoration in Portuguese.
This dataset was built with the objective of training the model [`dominguesm/bert-restore-punctuation-ptbr`](https://huggingface.co/dominguesm/bert-restore-punctuation-ptbr).
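The label scheme listed under FEATURES appears to combine a trailing punctuation mark (or `O` for none) with a casing flag (`O` = keep lowercase, `U` = uppercase the first letter). A minimal sketch for applying a predicted label back to a token, assuming that convention holds (`apply_label` is an illustrative helper, not part of the dataset):

```python
def apply_label(token: str, label: str) -> str:
    """Apply one predicted label to one token (illustrative sketch,
    assuming <punct><case> labels: first char is the punctuation to
    append, 'O' meaning none; second char is the casing flag,
    'U' = capitalize the first letter, 'O' = leave as-is)."""
    punct, case = label[0], label[1]
    if case == "U":
        token = token[:1].upper() + token[1:]
    if punct != "O":
        token += punct
    return token
```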
## FEATURES
```
Sequence(
feature=ClassLabel(
names=[
'OU',
'OO',
'.O',
'!O',
',O',
'.U',
'!U',
',U',
':O',
';O',
':U',
"'O",
'-O',
'?O',
'?U'],
id=None),
length=-1,
id=None
)
``` |
vidhikatkoria | null | null | null | false | 30 | false | vidhikatkoria/FewShotSGD | 2022-11-11T19:12:01.000Z | null | false | b4c532908d2439912f9d6d9e0d9d14f8cad898f9 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/FewShotSGD/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
splits:
- name: test
num_bytes: 7583282
num_examples: 15537
- name: train
num_bytes: 46458280
num_examples: 83391
- name: validation
num_bytes: 6337305
num_examples: 11960
download_size: 6517762
dataset_size: 60378867
---
# Dataset Card for "FewShotSGD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ozanba | null | null | null | false | 12 | false | ozanba/final_project_dataset | 2022-11-11T19:53:27.000Z | null | false | 0aec2affa89af3a1c8c7700e07e3379b58616804 | [] | [
"license:other"
] | https://huggingface.co/datasets/ozanba/final_project_dataset/resolve/main/README.md | ---
license: other
---
|
Tristan | null | null | null | false | null | false | Tristan/olm-wikipedia-20221101-kl-language | 2022-11-11T19:32:33.000Z | null | false | ac8a3794d6fb430352b38c459b1d77f49f154b60 | [] | [] | https://huggingface.co/datasets/Tristan/olm-wikipedia-20221101-kl-language/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 311164
num_examples: 297
download_size: 191198
dataset_size: 311164
---
# Dataset Card for "olm-wikipedia-20221101-kl-language"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mozel | null | null | null | false | null | false | mozel/Teste | 2022-11-11T19:59:43.000Z | null | false | 2e0a48fb4a7a928800358270f9601420004a0b95 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/mozel/Teste/resolve/main/README.md | ---
license: openrail
---
|
FAERS-PubMed | null | null | null | false | null | false | FAERS-PubMed/PubMed-filenames-2022-11-11 | 2022-11-11T22:51:25.000Z | null | false | c277ecacc63812a3aa4cbb97ae0162b0f5c9009f | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/PubMed-filenames-2022-11-11/resolve/main/README.md | ---
dataset_info:
features:
- name: filenames
dtype: string
splits:
- name: train
num_bytes: 72410
num_examples: 1114
download_size: 0
dataset_size: 72410
---
# Dataset Card for "PubMed-filenames-2022-11-11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
erickdp | null | null | null | false | null | false | erickdp/autotrain-data-tweet-es-sent | 2022-11-14T09:01:25.000Z | null | false | c755ed348bdb9c918a0ea6a316d9e0f92ec60de6 | [] | [
"task_categories:text-classification"
] | https://huggingface.co/datasets/erickdp/autotrain-data-tweet-es-sent/resolve/main/README.md | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: tweet-es-sent
## Dataset Description
This dataset has been automatically processed by AutoTrain for project tweet-es-sent.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 1,
"text": "1sola vuelta! arauz presidente! 1sola vuelta! todo 1 1sola la 1 es ecdor! por ti!1 por 1 los tuyos!1 por nosotros juntos1 mas de 45 d apoyo popular el 7 se vota 1por la vida por el futuro,por la esperanza guayaquil ec dor es 1"
},
{
"target": 1,
"text": "excelente decisi\u00f3n , las mujeres son importantes y por esa raz\u00f3n, a productos de primera necesidad hay que quitarles el iva "
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=3, names=['0', '1', '2'], id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 12400 |
| valid | 3685 |
|
vidhikatkoria | null | null | null | false | 22 | false | vidhikatkoria/SGD_Restaurants | 2022-11-12T01:03:51.000Z | null | false | 40c25b3ac611c6f19077f648714f2a8b29b18a37 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Restaurants/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 584535.0514154871
num_examples: 961
- name: train
num_bytes: 2337531.948584513
num_examples: 3843
download_size: 1166735
dataset_size: 2922067.0
---
# Dataset Card for "SGD_Restaurants"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Media | 2022-11-12T01:04:27.000Z | null | false | 29cede5e47d09bc01ba7032fb0f87bf181f4f718 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Media/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 225166.11034244206
num_examples: 579
- name: train
num_bytes: 899108.889657558
num_examples: 2312
download_size: 406682
dataset_size: 1124275.0
---
# Dataset Card for "SGD_Media"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Events | 2022-11-12T01:05:04.000Z | null | false | 614d0cd150e3599cb75185e975e409c56bdbb8d5 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Events/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 989580.0027945971
num_examples: 1718
- name: train
num_bytes: 3957167.9972054027
num_examples: 6870
download_size: 2035503
dataset_size: 4946748.0
---
# Dataset Card for "SGD_Events"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Music | 2022-11-12T01:05:40.000Z | null | false | fa64c0358e97306d2fddafd722a2ba393afda2a3 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Music/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 304456.9398689085
num_examples: 702
- name: train
num_bytes: 1217394.0601310914
num_examples: 2807
download_size: 590980
dataset_size: 1521851.0
---
# Dataset Card for "SGD_Music"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Movies | 2022-11-12T01:06:16.000Z | null | false | 41d5a260cf3064ef3e175f8c0183367ac104aa16 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Movies/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 316048.3633314701
num_examples: 716
- name: train
num_bytes: 1263310.6366685298
num_examples: 2862
download_size: 587470
dataset_size: 1579359.0
---
# Dataset Card for "SGD_Movies"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Flights | 2022-11-12T01:06:54.000Z | null | false | 860045a8e0eb2cf9f4f10f4edb7c0fde46bffa1d | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Flights/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 1317271.3539978522
num_examples: 2049
- name: train
num_bytes: 5267799.646002147
num_examples: 8194
download_size: 2561006
dataset_size: 6585071.0
---
# Dataset Card for "SGD_Flights"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dlwh | null | null | null | false | null | false | dlwh/Multi_Legal_Pile | 2022-11-11T21:17:05.000Z | null | false | 9782d4289e2422909272c3942c0a53e4fa0fe3a9 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/dlwh/Multi_Legal_Pile/resolve/main/README.md | ---
license: cc-by-4.0
---
|
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_RideSharing | 2022-11-12T01:07:30.000Z | null | false | 7cfc423fea6e5f07f24d94a1d5022eed1d800156 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_RideSharing/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 102167.40786240787
num_examples: 245
- name: train
num_bytes: 407001.59213759215
num_examples: 976
download_size: 189863
dataset_size: 509169.0
---
# Dataset Card for "SGD_RideSharing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_RentalCars | 2022-11-12T01:08:06.000Z | null | false | 6519bc3b21df40975e6f6b33a939b61e17ba8845 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_RentalCars/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 355068.3830966401
num_examples: 578
- name: train
num_bytes: 1418430.61690336
num_examples: 2309
download_size: 651687
dataset_size: 1773499.0
---
# Dataset Card for "SGD_RentalCars"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Buses | 2022-11-12T01:08:42.000Z | null | false | 20c773a79723274f34e8c78ed43632d7190de861 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Buses/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 385564.3779826464
num_examples: 738
- name: train
num_bytes: 1541212.6220173535
num_examples: 2950
download_size: 721017
dataset_size: 1926777.0
---
# Dataset Card for "SGD_Buses"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Hotels | 2022-11-12T01:09:19.000Z | null | false | b2d834a430d45832f6aeff00ca1a7994edf677ff | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Hotels/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 705268.6170212766
num_examples: 1204
- name: train
num_bytes: 2818731.3829787234
num_examples: 4812
download_size: 1489969
dataset_size: 3524000.0
---
# Dataset Card for "SGD_Hotels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Services | 2022-11-12T01:09:57.000Z | null | false | 07265c06762ed211716fe0a53a347bddbd02879f | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Services/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 667276.6853235809
num_examples: 1258
- name: train
num_bytes: 2668576.3146764194
num_examples: 5031
download_size: 1366314
dataset_size: 3335853.0
---
# Dataset Card for "SGD_Services"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Homes | 2022-11-12T01:10:33.000Z | null | false | fb24079b7bdd13068935cf75db6439c456c748e8 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Homes/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 461617.94436869374
num_examples: 734
- name: train
num_bytes: 1844585.0556313063
num_examples: 2933
download_size: 871199
dataset_size: 2306203.0
---
# Dataset Card for "SGD_Homes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Banks | 2022-11-12T01:11:09.000Z | null | false | 72a18e094b84e2793d043ef162d163e089db6e96 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Banks/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 181062.11590566984
num_examples: 399
- name: train
num_bytes: 723340.8840943301
num_examples: 1594
download_size: 305912
dataset_size: 904403.0
---
# Dataset Card for "SGD_Banks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vidhikatkoria | null | null | null | false | null | false | vidhikatkoria/SGD_Calendar | 2022-11-12T01:11:44.000Z | null | false | 324efd1c0b9c7ea606e58dec86daecdf06414900 | [] | [] | https://huggingface.co/datasets/vidhikatkoria/SGD_Calendar/resolve/main/README.md | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 107625.80130293159
num_examples: 246
- name: train
num_bytes: 429628.1986970684
num_examples: 982
download_size: 195681
dataset_size: 537254.0
---
# Dataset Card for "SGD_Calendar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
katossky | null | null | null | false | null | false | katossky/multi-domain-sentiment | 2022-11-11T21:45:41.000Z | null | false | b15129889c9667380958dad75185c1d22d46b262 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/katossky/multi-domain-sentiment/resolve/main/README.md | ---
license: unknown
---
This sentiment dataset was used in the paper: John Blitzer, Mark Dredze, Fernando Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. Association of Computational Linguistics (ACL), 2007.
If you use this data for research or a publication, the author asks that you cite the above paper as the reference for the data and inform him about the reuse.
The Multi-Domain Sentiment Dataset contains product reviews taken from Amazon.com from 4 product types (domains): Kitchen, Books, DVDs, and Electronics. Each domain has several thousand reviews, but the exact number varies by domain. Reviews contain star ratings (1 to 5 stars) that can be converted into binary labels if needed.
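As noted above, the 1-to-5 star ratings can be binarized. A minimal sketch, assuming the conventional cut-off (above 3 stars positive, below 3 negative, 3-star reviews dropped as ambiguous — an assumption matching common practice with this dataset, not a documented rule):

```python
def stars_to_binary(stars: float):
    """Map a 1-5 star rating to a binary sentiment label (sketch;
    the >3 / <3 cut-off with 3-star reviews discarded is an assumption,
    not part of the dataset's documentation)."""
    if stars > 3:
        return "positive"
    if stars < 3:
        return "negative"
    return None  # ambiguous 3-star reviews are typically dropped
```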
Each domain directory contains three files: positive.review, negative.review, and unlabeled.review. While the positive and negative files contain positive and negative reviews, these aren't necessarily the splits the authors used in their experiments; they randomly drew from the three files, ignoring the file names. Each file encodes the reviews in a pseudo-XML scheme. Most of the fields are self-explanatory. The reviews have a "unique ID" field that isn't actually unique; if a review has two unique-ID fields, ignore the one containing only a number. |
katossky | null | null | null | false | 86 | false | katossky/multi-domain-sentiment-books | 2022-11-12T00:33:47.000Z | null | false | 6370e17ad58977b2f814fb28f4f5a93c26258081 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/katossky/multi-domain-sentiment-books/resolve/main/README.md | ---
license: unknown
---
|
nbalepur | null | null | null | false | 99 | false | nbalepur/cs_history_wiki_web_noise | 2022-11-12T03:22:05.000Z | null | false | 7599bd62f57c23038928a7dc3ba7b359b9293513 | [] | [] | https://huggingface.co/datasets/nbalepur/cs_history_wiki_web_noise/resolve/main/README.md | ---
dataset_info:
features:
- name: title
dtype: string
- name: section
dtype: string
- name: information
sequence: string
- name: output
dtype: string
- name: output_aug
dtype: string
splits:
- name: test
num_bytes: 7492865
num_examples: 100
- name: train
num_bytes: 26314636
num_examples: 400
download_size: 17663209
dataset_size: 33807501
---
# Dataset Card for "cs_history_wiki_web_noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BridgeQZH | null | null | null | false | null | false | BridgeQZH/xi_diversity | 2022-11-11T22:55:28.000Z | null | false | 147769faebba4155d65ba1f7bbef07992fc5a046 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/BridgeQZH/xi_diversity/resolve/main/README.md | ---
license: openrail
---
|
ndorr16 | null | null | null | false | null | false | ndorr16/ToyTruck | 2022-11-11T23:08:16.000Z | null | false | ca32b24bb52db3e9c5ebfa32caba60ff2a079f3d | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/ndorr16/ToyTruck/resolve/main/README.md | ---
license: gpl-3.0
---
|
Allenbv | null | null | null | false | null | false | Allenbv/Jojos-bizarre-diffusiondataset | 2022-11-11T23:13:23.000Z | null | false | 76e370b9e10ffb847b2426ac0a64f3858cc76a12 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Allenbv/Jojos-bizarre-diffusiondataset/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
FAERS-PubMed | null | null | null | false | 3 | false | FAERS-PubMed/full-dataset-latest-experimenting | 2022-11-11T23:28:44.000Z | null | false | 0c483fe0202ef8d6d8b83a793866ca501d07368b | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/full-dataset-latest-experimenting/resolve/main/README.md | ---
dataset_info:
features:
- name: article_articletitle
dtype: string
- name: article_pmid
dtype: string
- name: article_abstract
dtype: string
- name: article_authorlist
list:
- name: CollectiveName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: LastName
dtype: string
- name: Suffix
dtype: string
- name: article_journalinfo
dtype: string
- name: article_datecompleted
dtype: string
- name: article_daterevised
dtype: string
- name: article_keywords
sequence: string
- name: article_mesh
list:
- name: DescriptorName
dtype: string
- name: QualifierName
sequence: string
- name: article_pubmed_filename
dtype: string
- name: report_literaturereference
dtype: string
- name: report_safetyreportid
dtype: string
- name: report_receivedate
dtype: string
- name: report_patient
struct:
- name: drug
list:
- name: actiondrug
dtype: string
- name: activesubstance
struct:
- name: activesubstancename
dtype: string
- name: drugadditional
dtype: string
- name: drugadministrationroute
dtype: string
- name: drugauthorizationnumb
dtype: string
- name: drugbatchnumb
dtype: string
- name: drugcharacterization
dtype: string
- name: drugcumulativedosagenumb
dtype: string
- name: drugcumulativedosageunit
dtype: string
- name: drugdosageform
dtype: string
- name: drugdosagetext
dtype: string
- name: drugenddate
dtype: string
- name: drugenddateformat
dtype: string
- name: drugindication
dtype: string
- name: drugintervaldosagedefinition
dtype: string
- name: drugintervaldosageunitnumb
dtype: string
- name: drugrecurreadministration
dtype: string
- name: drugseparatedosagenumb
dtype: string
- name: drugstartdate
dtype: string
- name: drugstartdateformat
dtype: string
- name: drugstructuredosagenumb
dtype: string
- name: drugstructuredosageunit
dtype: string
- name: drugtreatmentduration
dtype: string
- name: drugtreatmentdurationunit
dtype: string
- name: medicinalproduct
dtype: string
- name: patientagegroup
dtype: string
- name: patientonsetage
dtype: string
- name: patientonsetageunit
dtype: string
- name: patientsex
dtype: string
- name: patientweight
dtype: string
- name: reaction
list:
- name: reactionmeddrapt
dtype: string
- name: reactionmeddraversionpt
dtype: string
- name: reactionoutcome
dtype: string
- name: summary
struct:
- name: narrativeincludeclinical
dtype: string
- name: report_transmissiondate
dtype: string
- name: report_seriousness
struct:
- name: serious
dtype: string
- name: seriousnesscongenitalanomali
dtype: string
- name: seriousnessdeath
dtype: string
- name: seriousnessdisabling
dtype: string
- name: seriousnesshospitalization
dtype: string
- name: seriousnesslifethreatening
dtype: string
- name: seriousnessother
dtype: string
- name: report_faers_filename
dtype: string
- name: label_seriousness_serious
dtype:
class_label:
names:
0: '0'
1: '1'
2: '2'
splits:
- name: test
num_bytes: 287342030
num_examples: 104024
- name: train
num_bytes: 1299820108
num_examples: 484858
- name: validation
num_bytes: 177814156
num_examples: 65596
download_size: 622113009
dataset_size: 1764976294
---
# Dataset Card for "full-dataset-latest-experimenting"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JM138 | null | null | null | false | null | false | JM138/smartwatch | 2022-11-12T00:31:57.000Z | null | false | 58ccfab21f30d9a1a1ceac6c6b5a0440e587edd2 | [] | [] | https://huggingface.co/datasets/JM138/smartwatch/resolve/main/README.md |