id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
CrowdAILab/scicap | 2023-08-20T20:00:14.000Z | [
"license:cc-by-nc-sa-4.0",
"arxiv:2301.12293",
"region:us"
] | CrowdAILab | null | null | 4 | 3 | 2023-05-17T21:01:09 | ---
license: cc-by-nc-sa-4.0
---
# The 1st Scientific Figure Captioning (SciCap) Challenge 📖📊
Welcome to the 1st Scientific Figure Captioning (SciCap) Challenge! 🎉 This dataset contains approximately 400,000 scientific figure images sourced from various arXiv papers, along with their captions and relevant paragraphs. The challenge is open to researchers, AI/NLP/CV practitioners, and anyone interested in developing computational models for generating textual descriptions for visuals. 💻
*Challenge [homepage](http://SciCap.AI) 🏠*
## Challenge Overview 🌟
The SciCap Challenge will be hosted at ICCV 2023 in the 5th Workshop on Closing the Loop Between Vision and Language (October 2-3, Paris, France) 🇫🇷. Participants are required to submit the generated captions for a hidden test set for evaluation.
The challenge is divided into two phases:
- **Test Phase (2.5 months):** Use the provided training set, validation set, and public test set to build and test the models.
- **Challenge Phase (2 weeks):** Submit results for a hidden test set that will be released before the submission deadline.
Winning teams will be determined based on their results for the hidden test set 🏆. Details of the event's important dates, prizes, and judging criteria are listed on the challenge homepage.
## Dataset Overview and Download 📚
The SciCap dataset contains an expanded version of the [original SciCap](https://aclanthology.org/2021.findings-emnlp.277.pdf) dataset, and includes figures and captions from arXiv papers in eight categories: Computer Science, Economics, Electrical Engineering and Systems Science, Mathematics, Physics, Quantitative Biology, Quantitative Finance, and Statistics 📊. Additionally, it covers data from ACL Anthology papers [ACL-Fig](https://arxiv.org/pdf/2301.12293.pdf).
You can download the dataset using the following command:
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="CrowdAILab/scicap", repo_type='dataset')
```
_Merge all image split files into one_ 🧩
```
zip -F img-split.zip --out img.zip
```
The dataset schema is similar to the `mscoco` dataset:
- **images:** two separate folders (arXiv and ACL figures) 📁
- **annotations:** JSON files containing text information (filename, image id, figure type, OCR, and mapped image id, captions, normalized captions, paragraphs, and mentions) 📝
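As a sketch, an annotation file can be read with the standard library. The record below is a hypothetical example shaped like the fields listed above (the filename and values are invented for illustration; real annotation files bundle many records and more fields):

```python
import json

# Hypothetical annotation record mirroring the schema described above
# (filename, image id, caption); real files contain many such records.
sample = [{"image_id": 1,
           "file_name": "1001.0001-Figure1-1.png",
           "caption": "Figure 1: Training loss over time."}]

with open("sample-annotations.json", "w") as f:
    json.dump(sample, f)

# Reading annotations back is a plain json.load; index the records by image id.
with open("sample-annotations.json") as f:
    annotations = {rec["image_id"]: rec for rec in json.load(f)}

print(annotations[1]["caption"])
```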
## Evaluation and Submission 📩
Submit your generated captions in the JSON format shown below:
```json
[
{
"image_id": int,
"caption": "PREDICTED CAPTION STRING"
},
{
"image_id": int,
"caption": "PREDICTED CAPTION STRING"
}
...
]
```
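A minimal sketch of assembling the submission file with the standard library (the predictions dict and image ids here are hypothetical placeholders, not real challenge data):

```python
import json

# Hypothetical predictions mapping image ids to generated captions.
predictions = {
    101: "Accuracy of the model as a function of training epochs.",
    102: "Comparison of BLEU scores across baselines.",
}

# The challenge expects a JSON list of {"image_id": int, "caption": str} objects.
submission = [
    {"image_id": image_id, "caption": caption}
    for image_id, caption in sorted(predictions.items())
]

with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```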
Submit your results using this [challenge link](https://eval.ai/web/challenges/challenge-page/2012/overview) 🔗. Participants must register on [Eval.AI](http://Eval.AI) to access the leaderboard and submit results.
**Please note:** Participants should not use the original captions from the arXiv papers (termed "gold data") as input for their systems ⚠️.
## Technical Report Submission 🗒️
All participating teams must submit a 2-4 page technical report detailing their system, adhering to the ICCV 2023 paper template 📄. Teams have the option to submit their reports to either the archival or non-archival tracks of the 5th Workshop on Closing the Loop Between Vision and Language.
Good luck with your participation in the 1st SciCap Challenge! 🍀🎊 | 3,395 | [
[
-0.032440185546875,
-0.008636474609375,
0.00955963134765625,
0.031707763671875,
-0.038482666015625,
0.01800537109375,
0.01483154296875,
-0.04937744140625,
0.01364898681640625,
0.035125732421875,
-0.05291748046875,
-0.045135498046875,
-0.052734375,
0.03347778... |
Yuchong/us-liver | 2023-05-18T00:40:35.000Z | [
"region:us"
] | Yuchong | null | null | 0 | 3 | 2023-05-18T00:40:29 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 2359828.0
num_examples: 8
download_size: 366395
dataset_size: 2359828.0
---
# Dataset Card for "us-liver"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 383 | [
[
-0.03472900390625,
-0.0115509033203125,
0.0081329345703125,
0.00989532470703125,
-0.0310821533203125,
-0.003513336181640625,
0.026214599609375,
-0.02239990234375,
0.0689697265625,
0.042266845703125,
-0.0450439453125,
-0.048736572265625,
-0.021759033203125,
0... |
bleugreen/typescript-chunks | 2023-05-18T04:27:24.000Z | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"region:us"
] | bleugreen | null | null | 0 | 3 | 2023-05-18T02:13:26 | ---
task_categories:
- text-classification
- text2text-generation
- summarization
language:
- en
---
# typescript-chunks
A dataset of TypeScript snippets, processed from the typescript subset of [the-stack-smol](https://huggingface.co/datasets/bigcode/the-stack-smol).
# Processing
- Each source file is parsed with the TypeScript AST and queried for 'semantic chunks' of the following types.
```
FunctionDeclaration ---- 8205
ArrowFunction --------- 33890
ClassDeclaration ------- 5325
InterfaceDeclaration -- 12884
EnumDeclaration --------- 518
TypeAliasDeclaration --- 3580
MethodDeclaration ----- 24713
```
- Leading comments are added to the front of `content`
- Removed all chunks over max sequence length (2048)
- Deduplicated / cleaned up
- Generated instructions / summaries with `gpt-3.5-turbo` (in progress)
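The length filtering and deduplication steps above can be sketched as follows. Tokenization here is naive whitespace splitting, purely for illustration; the actual pipeline used a model tokenizer with a 2048-token limit:

```python
MAX_TOKENS = 2048

def clean_chunks(chunks):
    """Drop over-length chunks and exact duplicates, keeping first occurrences."""
    seen = set()
    kept = []
    for chunk in chunks:
        # Naive token count stands in for a real tokenizer here.
        if len(chunk["content"].split()) > MAX_TOKENS:
            continue
        if chunk["content"] in seen:
            continue
        seen.add(chunk["content"])
        kept.append(chunk)
    return kept

chunks = [
    {"type": "FunctionDeclaration",
     "content": "function add(a: number, b: number) { return a + b; }"},
    {"type": "FunctionDeclaration",
     "content": "function add(a: number, b: number) { return a + b; }"},
    {"type": "ArrowFunction",
     "content": "const twice = (x: number) => x * 2;"},
]
print(len(clean_chunks(chunks)))  # → 2
```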
# Dataset Structure
```python
from datasets import load_dataset
load_dataset("bleugreen/typescript-chunks")
DatasetDict({
train: Dataset({
features: ['type', 'content', 'repo', 'path', 'language'],
num_rows: 89115
})
})
``` | 1,074 | [
[
-0.022735595703125,
-0.029052734375,
0.0305633544921875,
0.01690673828125,
-0.0280914306640625,
0.02374267578125,
-0.01403045654296875,
-0.0011548995971679688,
0.04193115234375,
0.061492919921875,
-0.042266845703125,
-0.043365478515625,
-0.051666259765625,
0... |
AlekseyKorshuk/lmeh-chai-synthetic | 2023-05-18T21:51:02.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | 0 | 3 | 2023-05-18T21:50:23 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: test
num_bytes: 47839521
num_examples: 4570
download_size: 15451629
dataset_size: 47839521
---
# Dataset Card for "lmeh-chai-synthetic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 437 | [
[
-0.03729248046875,
-0.03826904296875,
0.025543212890625,
0.005947113037109375,
-0.008880615234375,
0.01461029052734375,
0.014404296875,
-0.031768798828125,
0.07537841796875,
0.032562255859375,
-0.0740966796875,
-0.043304443359375,
-0.020416259765625,
-0.0083... |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/978d0222 | 2023-05-19T08:13:27.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | 0 | 3 | 2023-05-19T08:13:26 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 188
num_examples: 10
download_size: 1337
dataset_size: 188
---
# Dataset Card for "978d0222"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 369 | [
[
-0.043212890625,
-0.00333404541015625,
0.0205230712890625,
0.0232391357421875,
-0.0137176513671875,
-0.01410675048828125,
0.034423828125,
-0.003185272216796875,
0.047210693359375,
0.04205322265625,
-0.05767822265625,
-0.04278564453125,
-0.040069580078125,
-0... |
Cheetor1996/Kotone_Shirakawa | 2023-05-19T23:03:41.000Z | [
"language:en",
"license:cc-by-2.0",
"art",
"region:us"
] | Cheetor1996 | null | null | 0 | 3 | 2023-05-19T22:31:16 | ---
license: cc-by-2.0
language:
- en
tags:
- art
pretty_name: Kotone Shirakawa
---
**Kotone Shirakawa from Overflow (hentai anime)**
- *Trained with anime (full-final-pruned) model.*
- *Best results with ALL and OUTALL LoRA weight blocks, and with weights of 0.4 to 0.7.*
- *5 versions: 6, 7, 8, 9, and 10 epochs.*
[
-0.0213470458984375,
-0.037994384765625,
0.04071044921875,
0.0171356201171875,
-0.03411865234375,
-0.00476837158203125,
0.00292205810546875,
-0.0245819091796875,
0.03997802734375,
0.06524658203125,
-0.035552978515625,
-0.03887939453125,
-0.0623779296875,
-0.... |
Dampish/sharegpt-alpaca-unfiltered-94k | 2023-05-20T02:15:04.000Z | [
"region:us"
] | Dampish | null | null | 2 | 3 | 2023-05-20T02:14:44 | ---
dataset_info:
features:
- name: output
dtype: string
- name: id
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 629328864
num_examples: 94145
download_size: 263823762
dataset_size: 629328864
---
# Dataset Card for "sharegpt-alpaca-unfiltered-94k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 486 | [
[
-0.055572509765625,
-0.017974853515625,
0.0099639892578125,
0.024688720703125,
-0.052520751953125,
-0.006679534912109375,
0.01531219482421875,
-0.0157318115234375,
0.0712890625,
0.04669189453125,
-0.07208251953125,
-0.053131103515625,
-0.06884765625,
-0.0246... |
0x22almostEvil/semantics-ws-qna-oa | 2023-05-21T07:08:16.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"language:ru",
"language:de",
"language:it",
"license:apache-2.0",
"semantics",
"arxiv:1508.00106",
"region:us"
] | 0x22almostEvil | null | null | 0 | 3 | 2023-05-20T09:51:10 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
- ru
- de
- it
tags:
- semantics
size_categories:
- 1K<n<10K
---
# Dataset Card for semantics-ws-qna-oa with ~2K entries.
### Dataset Summary
License: Apache-2.0. Contains a Parquet file with INSTRUCTION, RESPONSE, SOURCE, and METADATA columns.
- ### Original Datasets are available here:
- https://leviants.com/multilingual-simlex999-and-wordsim353/
### Paper of original Dataset:
- https://arxiv.org/pdf/1508.00106v5.pdf | 490 | [
[
-0.025421142578125,
-0.032470703125,
0.0228424072265625,
0.0163726806640625,
-0.0290679931640625,
-0.0312347412109375,
-0.0001004338264465332,
-0.0279693603515625,
0.02020263671875,
0.052947998046875,
-0.046234130859375,
-0.046539306640625,
-0.0433349609375,
... |
voidful/MuSiQue | 2023-05-20T16:43:22.000Z | [
"region:us"
] | voidful | null | null | 0 | 3 | 2023-05-20T16:38:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tarungupta83/MidJourney_v5_Prompt_dataset | 2023-05-21T14:46:19.000Z | [
"license:apache-2.0",
"region:us"
] | tarungupta83 | null | null | 11 | 3 | 2023-05-21T14:37:31 | ---
license: apache-2.0
---
This dataset contains raw prompts from MidJourney v5.
Total records: 4,245,117
Sample data:
| AuthorID | Author | Date | Content | Attachments | Reactions |
| --- | --- | --- | --- | --- | --- |
| 936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | benjamin frankling with rayban sunglasses reflecting a usa flag walking on a side of penguin, whit... | [Link](https://cdn.discordapp.com/attachments/933565701162168371/1098276830525538494/vanDyke_benjamin_frank...) | |
| 936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | Street vendor robot in 80's Poland, meat market, fruit stall, communist style, real photo, real ph... | [Link](https://cdn.discordapp.com/attachments/933565701162168371/1098276841426526290/alepasztet_Street_vend...) | |
| 936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | one of the guys is looking at another man , in the style of kris knight, realistic, detailed rende... | [Link](https://cdn.discordapp.com/attachments/933565701162168371/1098276845394333818/iflwlou_one_of_the_guy...) | |
You can clean the data with the help of the data-cleaning notebook provided in the dataset.
| 1,168 | [
[
-0.0215606689453125,
-0.038970947265625,
0.05584716796875,
-0.00479888916015625,
-0.01136016845703125,
0.00453948974609375,
-0.002719879150390625,
-0.0199737548828125,
0.040252685546875,
0.046295166015625,
-0.099365234375,
-0.04754638671875,
-0.0239715576171875,... |
agmmnn/turkish-thesaurus-synonyms-antonyms | 2023-05-21T20:26:55.000Z | [
"multilinguality:monolingual",
"language:tr",
"license:cc-by-sa-4.0",
"thesaurus",
"dictionary",
"turkish",
"region:us"
] | agmmnn | null | null | 1 | 3 | 2023-05-21T20:04:34 | ---
license: cc-by-sa-4.0
language:
- tr
multilinguality:
- monolingual
pretty_name: Turkish Thesaurus
tags:
- thesaurus
- dictionary
- turkish
---
# Turkish Thesaurus (Türkçe Eş-Zıt Anlam Sözlüğü)
A Turkish synonym and antonym thesaurus. The final thesaurus contains 33,587 keys in total.
```py
from datasets import load_dataset
dataset = load_dataset("agmmnn/turkish-thesaurus-synonyms-antonyms")
print(dataset['train'][0])
``` | 426 | [
[
-0.037841796875,
-0.01096343994140625,
0.006732940673828125,
-0.005802154541015625,
-0.05535888671875,
-0.0263824462890625,
-0.0163726806640625,
0.0238494873046875,
0.02642822265625,
0.0204925537109375,
-0.055755615234375,
-0.027557373046875,
-0.0474853515625,
... |
ProfessorBob/relation_extraction | 2023-07-26T21:21:06.000Z | [
"region:us"
] | ProfessorBob | null | null | 0 | 3 | 2023-05-22T15:19:04 | ---
dataset_info:
features:
- name: triplets
sequence: string
- name: passage
dtype: string
- name: label
dtype: string
- name: label_id
dtype: int64
- name: synonyms
sequence: string
- name: __index_level_1__
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 179763173
num_examples: 123169
download_size: 66368928
dataset_size: 179763173
---
# Data points per relation:
| | Count |
|---------------------------------------------------|-------|
| instance of | 8461 |
| occupation | 3552 |
| place of birth | 1980 |
| family name | 1977 |
| given name | 1886 |
| country | 1731 |
| country of citizenship | 1677 |
| has part(s) | 1639 |
| educated at | 1457 |
| shares border with | 1329 |
| sex or gender | 1326 |
| award received | 1313 |
| genre | 1285 |
| contains the administrative territorial entity | 1094 |
| child | 1080 |
| located in the administrative territorial entity | 994 |
| participant in | 984 |
| position held | 966 |
| spouse | 956 |
| sibling | 905 |
| place of death | 886 |
| partially coincident with | 761 |
| father | 748 |
| member of | 738 |
| sport | 675 |
| notable work | 657 |
| field of work | 612 |
| mother | 569 |
| languages spoken, written or signed | 560 |
| country of origin | 545 |
| facet of | 528 |
| conflict | 528 |
| member of sports team | 522 |
| part of | 505 |
| present in work | 485 |
| has effect | 468 |
| place of burial | 420 |
| named after | 419 |
| location | 407 |
| significant event | 371 |
| characters | 367 |
| subclass of | 359 |
| manner of death | 359 |
| headquarters location | 352 |
| director | 348 |
| participant | 345 |
| employer | 338 |
| uses | 319 |
| religion or worldview | 315 |
| has use | 304 |
| noble title | 303 |
| language used | 302 |
| nominated for | 301 |
| has works in the collection | 298 |
| opposite of | 292 |
| family | 291 |
| different from | 287 |
| native language | 286 |
| capital | 283 |
| founded by | 273 |
| work location | 272 |
| residence | 269 |
| language of work or name | 263 |
| member of political party | 262 |
| platform | 257 |
| applies to jurisdiction | 255 |
| cause of death | 249 |
| owned by | 235 |
| military branch | 232 |
| student of | 226 |
| composer | 221 |
| cause | 219 |
| continent | 219 |
| screenwriter | 219 |
| performer | 215 |
| military rank | 214 |
| main subject | 210 |
| relative | 207 |
| creator | 193 |
| depicts | 191 |
| head of government | 190 |
| industry | 189 |
| producer | 187 |
| has quality | 182 |
| form of creative work | 181 |
| record label | 181 |
| operator | 177 |
| has contributing factor | 176 |
| replaces | 174 |
| student | 173 |
| developer | 173 |
| color | 172 |
| country for sport | 172 |
| said to be the same as | 166 |
| writing language | 165 |
| sports discipline competed in | 163 |
| based on | 162 |
| instrument | 161 |
| topic's main category | 159 |
| participating team | 157 |
| followed by | 157 |
| production company | 155 |
| ethnic group | 151 |
| office held by head of government | 144 |
| league | 143 |
| original language of film or TV show | 143 |
| has subsidiary | 143 |
| architect | 141 |
| victory | 141 |
| has part(s) of the class | 135 |
| located in/on physical feature | 132 |
| time period | 132 |
| part of the series | 131 |
| made from material | 128 |
| author | 125 |
| heritage designation | 120 |
| location of formation | 118 |
| allegiance | 117 |
| parent organization | 115 |
| narrative location | 114 |
| capital of | 112 |
| manufacturer | 111 |
| product or material produced | 110 |
| replaced by | 110 |
| position played on team / speciality | 109 |
| taxon rank | 108 |
| tracklist | 107 |
| consecrator | 106 |
| twinned administrative body | 105 |
| found in taxon | 104 |
| winner | 101 |
| connects with | 96 |
| parent taxon | 95 |
| original broadcaster | 95 |
| home venue | 94 |
| publisher | 91 |
| discoverer or inventor | 89 |
| has edition or translation | 88 |
| distribution format | 88 |
| legal form | 80 |
| operating system | 79 |
| architectural style | 78 |
| filming location | 77 |
| described by source | 76 |
| medical condition | 73 |
| subject has role | 71 |
| movement | 71 |
| lyrics by | 70 |
| organizer | 70 |
| competition class | 67 |
| chairperson | 67 |
| presenter | 65 |
| located in protected area | 64 |
| religious order | 64 |
| academic degree | 63 |
| media franchise | 63 |
| candidate | 62 |
| head coach | 61 |
| candidacy in election | 59 |
| transport network | 58 |
| has immediate cause | 58 |
| category of associated people | 57 |
| follows | 55 |
| affiliation | 52 |
| legislated by | 51 |
| copyright license | 49 |
| connecting line | 49 |
| contributor to the creative work or subject | 47 |
| connecting service | 47 |
| country of registry | 46 |
| start point | 42 |
| collection | 39 |
| exhibition history | 38 |
| located on street | 37 |
| season | 36 |
| indigenous to | 36 |
| place of publication | 36 |
| contains settlement | 35 |
| voice actor | 34 |
| distributed by | 34 |
| film editor | 33 |
| archives at | 32 |
| foundational text | 32 |
| owner of | 32 |
| sponsor | 31 |
| mountain range | 30 |
| place of detention | 29 |
| day of week | 28 |
| ancestral home | 27 |
| occupant | 27 |
| location of creation | 26 |
| game mode | 26 |
| state of use | 25 |
| adjacent station | 25 |
| writing system | 24 |
| crosses | 24 |
| honorific prefix | 21 |
| dedicated to | 20 |
| amended by | 20 |
| director of photography | 18 |
| copyright status | 18 |
| published in | 17 |
| is a list of | 17 |
| maintained by | 16 |
| commemorates | 14 |
| repealed by | 14 |
| sports season of league or competition | 12 |
| editor | 12 |
| voice type | 12 |
| category for people born here | 11 |
| associated electoral district | 11 |
| topic's main template | 10 |
| fabrication method | 10 |
| does not have cause | 10 |
| addressee | 10 |
| has facility | 10 |
| endemic to | 9 |
| cause of destruction | 9 |
| general classification of race participants | 8 |
| state of conservation | 8 |
| artist files at | 8 |
| terminus location | 8 |
| related category | 8 |
| terminus | 6 |
| referee | 5 |
| significant place | 4 |
| hotel rating | 3 |
| access restriction status | 2 |
| associated cadastral district | 1 |
| appears in the heritage monument list | 1 |
| category for ship name | 1 |
| study type | 1 |
| online access status | 1 |
| diel cycle | 1 |
| copyright status as a creator | 1 |
| taxon synonym | 1 | | 15,225 | [
[
-0.02142333984375,
-0.01422119140625,
0.0278167724609375,
0.015045166015625,
-0.00873565673828125,
0.0233612060546875,
0.0174407958984375,
-0.01099395751953125,
0.0634765625,
0.0380859375,
-0.049957275390625,
-0.0684814453125,
-0.06256103515625,
0.0192565917... |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/910e5ae6 | 2023-05-22T17:24:53.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | 0 | 3 | 2023-05-22T17:24:51 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1340
dataset_size: 184
---
# Dataset Card for "910e5ae6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 369 | [
[
-0.04669189453125,
0.0030193328857421875,
0.0212860107421875,
0.0204620361328125,
-0.0081024169921875,
-0.022125244140625,
0.0322265625,
-0.028533935546875,
0.06683349609375,
0.0260009765625,
-0.05914306640625,
-0.049102783203125,
-0.035614013671875,
0.00757... |
EulerianKnight/breast-histopathology-images-train-test-valid-split | 2023-05-22T17:45:55.000Z | [
"task_categories:image-classification",
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] | EulerianKnight | null | null | 0 | 3 | 2023-05-22T17:29:52 | ---
license: apache-2.0
task_categories:
- image-classification
size_categories:
- 100K<n<1M
---
# Breast Histopathology Image dataset
- This dataset is a rearrangement of the original dataset on Kaggle: https://www.kaggle.com/datasets/paultimothymooney/breast-histopathology-images
- Data Citation: https://www.ncbi.nlm.nih.gov/pubmed/27563488 , http://spie.org/Publications/Proceedings/Paper/10.1117/12.2043872
- The original dataset has the following structure:
<pre>
|-- patient_id
|-- class(0 and 1)
</pre>
- The present dataset has the following structure:
<pre>
|-- train
|-- class(0 and 1)
|-- valid
|-- class(0 and 1)
|-- test
|-- class(0 and 1) | 703 | [
[
-0.0214996337890625,
-0.0302276611328125,
0.0303497314453125,
-0.00415802001953125,
-0.02093505859375,
-0.006496429443359375,
0.0296783447265625,
-0.0036182403564453125,
0.03131103515625,
0.059661865234375,
-0.06842041015625,
-0.059295654296875,
-0.0556030273437... |
wyxu/dataset_copied | 2023-05-25T07:45:47.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | wyxu | The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images
per class. There are 50000 training images and 10000 test images. | @TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
} | 0 | 3 | 2023-05-23T03:55:20 | ---
task_categories:
- image-classification
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A copied data set from CIFAR10 as a demonstration
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,441 | [
[
-0.034149169921875,
-0.0238189697265625,
-0.00392913818359375,
0.027008056640625,
-0.0161895751953125,
0.0154571533203125,
-0.0198822021484375,
-0.0194244384765625,
0.03692626953125,
0.04296875,
-0.047882080078125,
-0.07244873046875,
-0.049957275390625,
0.00... |
coeuslearning/yelp_review_full | 2023-05-23T05:18:23.000Z | [
"task_categories:conversational",
"task_categories:text2text-generation",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | coeuslearning | The Yelp reviews dataset consists of reviews from Yelp. It is extracted from the Yelp Dataset Challenge 2015 data.
The Yelp reviews full star dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the above dataset.
It is first used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun.
Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). | @inproceedings{zhang2015character,
title={Character-level convolutional networks for text classification},
author={Zhang, Xiang and Zhao, Junbo and LeCun, Yann},
booktitle={Advances in neural information processing systems},
pages={649--657},
year={2015}
} | 0 | 3 | 2023-05-23T03:57:11 | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': 1 star
'1': 2 star
'2': 3 stars
'3': 4 stars
'4': 5 stars
- name: text
dtype: string
splits:
- name: train
num_bytes: 483811554
num_examples: 650000
- name: test
num_bytes: 37271188
num_examples: 50000
download_size: 322952367
dataset_size: 521082742
task_categories:
- conversational
- text2text-generation
- question-answering
language:
- en
pretty_name: coeusyelp
size_categories:
- 100K<n<1M
---
# Dataset Card for "yelp_review_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 748 | [
[
-0.0204315185546875,
-0.0219268798828125,
0.031158447265625,
0.0126495361328125,
-0.014739990234375,
-0.012969970703125,
0.0142974853515625,
-0.0193328857421875,
0.0675048828125,
0.04962158203125,
-0.064697265625,
-0.0482177734375,
-0.0168914794921875,
-0.01... |
zhanghanchong/css | 2023-07-24T07:51:45.000Z | [
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:cc-by-4.0",
"arxiv:2305.15891",
"region:us"
] | zhanghanchong | null | \ | 1 | 3 | 2023-05-23T08:36:37 | ---
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 1K<n<10K
license: cc-by-4.0
---
# Dataset Description
- **Repository:** https://github.com/X-LANCE/medical-dataset
- **Paper:** https://arxiv.org/abs/2305.15891
# Dataset Summary
CSS is a large-scale cross-schema Chinese text-to-SQL dataset.
# Dataset Splits
### Example-based Split
* **train**: 3472 question/SQL pairs
* **dev**: 434 question/SQL pairs
* **test**: 434 question/SQL pairs
### Template-based Split
* **train**: 3470 question/SQL pairs
* **dev**: 430 question/SQL pairs
* **test**: 440 question/SQL pairs
### Schema-based Split
* **train**: 18550 question/SQL pairs
* **dev**: 8150 question/SQL pairs
* **test**: 6920 question/SQL pairs
# Citation Information
@misc{zhang2023css,
title={CSS: A Large-scale Cross-schema Chinese Text-to-SQL Medical Dataset},
author={Hanchong Zhang and Jieyu Li and Lu Chen and Ruisheng Cao and Yunyan Zhang and Yu Huang and Yefeng Zheng and Kai Yu},
year={2023},
eprint={2305.15891},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
| 1,136 | [
[
-0.0029354095458984375,
-0.0227508544921875,
0.017364501953125,
0.0107574462890625,
-0.033599853515625,
-0.003124237060546875,
-0.01357269287109375,
-0.01146697998046875,
0.041656494140625,
0.0250701904296875,
-0.049102783203125,
-0.051971435546875,
-0.017593383... |
librarian-bots/card_with_first_commit | 2023-06-27T14:17:14.000Z | [
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:fill-mask",
"size_categories:10K<n<100K",
"language:en",
"model cards",
"region:us"
] | librarian-bots | null | null | 0 | 3 | 2023-05-23T12:33:46 | ---
dataset_info:
features:
- name: modelId
dtype: string
- name: tags
sequence: string
- name: pipeline_tag
dtype: string
- name: config
struct:
- name: architectures
sequence: string
- name: model_type
dtype: string
- name: task_specific_params
struct:
- name: conversational
struct:
- name: max_length
dtype: float64
- name: summarization
struct:
- name: early_stopping
dtype: bool
- name: length_penalty
dtype: float64
- name: max_length
dtype: float64
- name: min_length
dtype: float64
- name: no_repeat_ngram_size
dtype: float64
- name: num_beams
dtype: float64
- name: prefix
dtype: string
- name: text-generation
struct:
- name: do_sample
dtype: bool
- name: max_length
dtype: float64
- name: translation_en_to_de
struct:
- name: early_stopping
dtype: bool
- name: max_length
dtype: float64
- name: num_beams
dtype: float64
- name: prefix
dtype: string
- name: translation_en_to_fr
struct:
- name: early_stopping
dtype: bool
- name: max_length
dtype: float64
- name: num_beams
dtype: float64
- name: prefix
dtype: string
- name: translation_en_to_ro
struct:
- name: early_stopping
dtype: bool
- name: max_length
dtype: float64
- name: num_beams
dtype: float64
- name: prefix
dtype: string
- name: downloads
dtype: int64
- name: first_commit
dtype: timestamp[ns, tz=UTC]
- name: card
dtype: string
splits:
- name: train
num_bytes: 20198907.41971414
num_examples: 30344
download_size: 25260494
dataset_size: 20198907.41971414
task_categories:
- text-classification
- feature-extraction
- fill-mask
language:
- en
tags:
- model cards
pretty_name: Model card READMEs with first commit information
size_categories:
- 10K<n<100K
---
# Dataset Card for "card_with_first_commit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,381 | [
[
-0.03277587890625,
-0.0228118896484375,
0.01552581787109375,
0.007457733154296875,
-0.01806640625,
-0.00403594970703125,
0.018310546875,
0.0034275054931640625,
0.073486328125,
0.0457763671875,
-0.07470703125,
-0.072265625,
-0.041046142578125,
-0.020462036132... |
Eitanli/github-issues | 2023-05-24T10:57:04.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"code",
"region:us"
] | Eitanli | null | null | 0 | 3 | 2023-05-24T10:43:54 | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 2600208
num_examples: 215
download_size: 683347
dataset_size: 2600208
license: openrail
task_categories:
- text-classification
language:
- en
tags:
- code
pretty_name: github_issues
size_categories:
- 1K<n<10K
---
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
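The schema above stores issues and pull requests in one table and flags them with the boolean `is_pull_request` field. A minimal sketch of separating the two, using hand-made records shaped like a few fields of the schema rather than the actual dataset:

```python
# Hand-made records mimicking a subset of the schema above; the real
# dataset carries many more fields (user, labels, reactions, ...).
records = [
    {"number": 101, "title": "Crash when loading split", "is_pull_request": False},
    {"number": 102, "title": "Fix crash when loading split", "is_pull_request": True},
    {"number": 103, "title": "Docs typo", "is_pull_request": False},
]

# GitHub's REST API returns pull requests from the issues endpoint too,
# so filtering on `is_pull_request` recovers the plain issues.
issues = [r for r in records if not r["is_pull_request"]]
pulls = [r for r in records if r["is_pull_request"]]
```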
alexjercan/bugnet | 2023-07-26T05:35:52.000Z | [
"region:us"
] | alexjercan | \ | \ | 3 | 3 | 2023-05-24T14:11:29 |
---
dataset_info:
- config_name: Python
features:
- name: problem_id
dtype: string
- name: language
dtype: string
- name: original_status
dtype: string
- name: fail
dtype: string
- name: pass
dtype: string
- name: change
dtype: string
- name: i1
dtype: uint32
- name: i2
dtype: uint32
- name: j1
dtype: uint32
- name: j2
dtype: uint32
- name: error
dtype: string
- name: stderr
dtype: string
- name: stdout
dtype: string
- name: description
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8237153
num_examples: 2557
- name: validation
num_bytes: 3497872
num_examples: 1105
- name: test
num_bytes: 205241
num_examples: 100
download_size: 19290233
dataset_size: 11940266
- config_name: C++
features:
- name: problem_id
dtype: string
- name: language
dtype: string
- name: original_status
dtype: string
- name: fail
dtype: string
- name: pass
dtype: string
- name: change
dtype: string
- name: i1
dtype: uint32
- name: i2
dtype: uint32
- name: j1
dtype: uint32
- name: j2
dtype: uint32
- name: error
dtype: string
- name: stderr
dtype: string
- name: stdout
dtype: string
- name: description
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 482930200
num_examples: 68621
- name: validation
num_bytes: 1129323
num_examples: 125
- name: test
num_bytes: 40048505
num_examples: 4769
download_size: 378900920
dataset_size: 524108028
---
# About the Dataset
The source code used to generate the dataset can be found on
[GitHub](https://github.com/alexjercan/bug-detection/tree/master/bugnet)
The dataset is based on the [CodeNet project](https://github.com/IBM/Project_CodeNet)
and contains Python and C++ code submissions for online coding competitions. The data
is obtained by selecting consecutive attempts of a single user that resulted in fixing a
buggy submission. Thus the data is represented by code pairs and annotated by the diff
and error of each changed instruction. We have already tokenized all the source code
files and kept the same format as in the original dataset.
Compared to CodeNetPy, this version keeps only single-line errors, which makes the bug detection and repair tasks easier to manage.
We also removed all files that fail on linters, so that the dataset focuses only on bugs that cannot be identified easily.
The resulting dataset is a CSV file with the following columns:
- `problem_id`: The id of the problem, matches with the id from Project_CodeNet
- `language`: The programming language of the submission (`Python` or `C++`)
- `original_status`: The status of the initial submission (`TLE`, `MLE`, anything that is not `Accepted`)
- `fail`: The initial (buggy) source code, formatted with `black` or `clang-format`
- `pass`: The modified (accepted) source code, formatted with `black` or `clang-format`
- `change`: The change that was made (`replace`, `insert`, `delete`)
- `i1`: Start of the change in the buggy source (the line; starting with 1)
- `i2`: End of the change in the buggy source (not inclusive; for `insert` we have `i1 == i2`)
- `j1`: Start of the change in the accepted source (the line; starting with 1)
- `j2`: End of the change in the accepted source (not inclusive; for `delete` we have `j1 == j2`)
- `error`: The error that was obtained running the buggy source code on the input/output examples
- `stderr`: The full output of stderr of running the buggy source code on the input/output examples
- `stdout`: The full output of stdout of running the buggy source code on the input/output examples
- `description`: The problem statement in html format
- `input`: The input for the test case
- `output`: The output for the test case
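As an illustration of the line-range convention above (1-based start, end-exclusive end), here is a small helper that slices out the changed lines from a `fail`/`pass` pair; it is a sketch for clarity, not part of the dataset tooling:

```python
def changed_lines(fail_src: str, pass_src: str, i1: int, i2: int, j1: int, j2: int):
    """Return the changed line spans, using the dataset's 1-based,
    end-exclusive convention (i1 == i2 marks an insert point in `fail`,
    j1 == j2 marks a delete point in `pass`)."""
    fail_lines = fail_src.splitlines()
    pass_lines = pass_src.splitlines()
    # Convert the 1-based [i1, i2) ranges to 0-based Python slices.
    return fail_lines[i1 - 1 : i2 - 1], pass_lines[j1 - 1 : j2 - 1]


# Invented two-line programs for illustration only.
fail = "x = 1\nprint(x + 2)\n"
fix = "x = 1\nprint(x + 1)\n"
# A `replace` change on line 2 of both files: i1=2, i2=3, j1=2, j2=3.
buggy, fixed = changed_lines(fail, fix, 2, 3, 2, 3)
```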
mmenendezg/raw_pneumonia_x_ray | 2023-07-13T16:53:15.000Z | [
"region:us"
] | mmenendezg | null | null | 0 | 3 | 2023-05-24T19:53:35 |
---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': normal
'1': pneumonia
splits:
- name: train
num_bytes: 3197295656.864
num_examples: 5232
- name: test
num_bytes: 111133345.0
num_examples: 624
download_size: 1263131638
dataset_size: 3308429001.864
---
# Dataset Card for "raw_pneumonia_x_ray"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
carlesoctav/en-id-parallel-sentences | 2023-05-25T04:20:44.000Z | [
"region:us"
] | carlesoctav | null | null | 0 | 3 | 2023-05-25T04:20:25 |
---
dataset_info:
features:
- name: text_en
dtype: string
- name: text_id
dtype: string
splits:
- name: msmarcoquery
num_bytes: 41010003
num_examples: 500000
- name: combinedtech
num_bytes: 44901963
num_examples: 276659
- name: msmarcocollection
num_bytes: 351086941
num_examples: 500000
- name: TED2020
num_bytes: 32590228
num_examples: 163319
- name: Tatoeba
num_bytes: 797670
num_examples: 10543
- name: NeuLabTedTalks
num_bytes: 19440416
num_examples: 94224
- name: QED
num_bytes: 40115874
num_examples: 274581
- name: tico19
num_bytes: 959990
num_examples: 3071
download_size: 282831590
dataset_size: 530903085
---
# Dataset Card for "en-id-parallel-sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ibm/clinic150-sur | 2023-05-30T11:22:19.000Z | [
"task_categories:text-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|clinic150",
"language:en",
"license:mit",
"arxiv:1911.03118",
"arxiv:2305.17750",
"region:us"
] | ibm | null | null | 0 | 3 | 2023-05-25T11:33:55 |
---
license: mit
annotations_creators: other
language_creators: other
language: en
multilinguality: monolingual
size_categories: 100K<n<1M
source_datasets: extended|clinic150
task_categories:
- text-classification
pretty_name: Clinic150-SUR
dataset_info:
  features:
  - name: intent
    dtype: string
  - name: user_utterance
    dtype: string
  - name: origin
    dtype: string
---
# Dataset Card for "clinic150-SUR"
### Dataset Summary
The Clinic150-SUR dataset is a novel and augmented dataset designed to simulate natural human behavior during interactions with customer service-like centers.
Extending the [Clinic150 dataset](https://aclanthology.org/D19-1131/), it incorporates two augmentation techniques, IBM's [LAMBADA](https://arxiv.org/abs/1911.03118) model and the [Parrot](https://github.com/PrithivirajDamodaran/Parrot_Paraphraser) paraphraser, along with carefully curated duplicated utterances.
This dataset aims to provide a more comprehensive and realistic representation of customer service interactions,
facilitating the development and evaluation of robust and efficient dialogue systems.
Key Features:
- Augmentation with IBM's [LAMBADA Model](https://arxiv.org/abs/1911.03118): The Clinic150-SUR dataset leverages IBM's LAMBADA model, a language generation model trained on a large corpus of text, to augment the original dataset. This augmentation process enhances the diversity and complexity of the dialogue data, allowing for a broader range of interactions.
- Integration of [Parrot](https://github.com/PrithivirajDamodaran/Parrot_Paraphraser) Model: In addition to the LAMBADA model, the Clinic150-SUR dataset also incorporates the Parrot model, providing a variety of paraphrases. By integrating Parrot, the dataset achieves more variations of existing utterances.
- Duplicated Utterances: The dataset includes carefully curated duplicated utterances to mimic real-world scenarios where users rephrase or repeat commonly asked queries. This feature adds variability to the data, reflecting the natural tendencies of human interactions, and enables dialogue systems to handle such instances better.
- [Clinic150](https://aclanthology.org/D19-1131/) as the Foundation: The Clinic150-SUR dataset is built upon the Clinic150 dataset, which originally consisted of 150 in-domain intent classes and 150 human utterances for each intent. By utilizing this foundation, the augmented dataset retains the in-domain expertise while better reflecting the nature of user requests towards a dialog system.
### Data Instances
#### clinic150-SUR
- **Size of downloaded dataset file:** 29 MB
### Data Fields
#### clinic150-SUR
- `intent`: a `string` feature.
- `user_utterance`: a `string` feature.
- `origin`: a `string` feature ('original', 'lambada', 'parrot').
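Since every row is tagged with its `origin`, the mix of original and augmented utterances can be inspected directly; a minimal sketch with hand-made rows (only the `origin` values `original`, `lambada`, and `parrot` are taken from the card; the intents and utterances are invented):

```python
from collections import Counter

# Hand-made rows with the three documented fields, for illustration only.
rows = [
    {"intent": "transfer", "user_utterance": "move money to savings", "origin": "original"},
    {"intent": "transfer", "user_utterance": "please move money to savings", "origin": "parrot"},
    {"intent": "transfer", "user_utterance": "send funds over to my savings", "origin": "lambada"},
    {"intent": "balance", "user_utterance": "what is my balance", "origin": "original"},
]

# Count how many utterances came from each source.
by_origin = Counter(row["origin"] for row in rows)
```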
### Citation Information
```
@inproceedings{rabinovich2022reliable,
title={Reliable and Interpretable Drift Detection in Streams of Short Texts},
author={Rabinovich, Ella and Vetzler, Matan and Ackerman, Samuel and Anaby-Tavor, Ateret},
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (industry track)",
publisher = "Association for Computational Linguistics",
year={2023},
url={https://arxiv.org/abs/2305.17750}
}
```
### Contributions
Thanks to [Matan Vetzler](https://www.linkedin.com/in/matanvetzler/), [Ella Rabinovich](https://www.linkedin.com/in/ella-rabinovich-7b9a06/) for adding this dataset.
lansinuote/diffusion.4.text_to_image.book | 2023-05-26T10:00:42.000Z | [
"region:us"
] | lansinuote | null | null | 2 | 3 | 2023-05-26T10:00:29 |
---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 7321196.654565875
num_examples: 1000
download_size: 6528669
dataset_size: 7321196.654565875
---
# Dataset Card for "diffusion.4.text_to_image.book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
d0rj/conv_ai_3_ru | 2023-05-28T11:49:49.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:conv_ai_3",
"language:ru",
"license:unknown",
"eval... | d0rj | null | null | 0 | 3 | 2023-05-28T11:30:25 |
---
annotations_creators:
- crowdsourced
language_creators:
- translated
language:
- ru
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- conv_ai_3
task_categories:
- conversational
- text-classification
task_ids:
- text-scoring
paperswithcode_id: null
pretty_name: conv_ai_3 (ru)
tags:
- evaluating-dialogue-systems
dataset_info:
features:
- name: topic_id
dtype: int32
- name: initial_request
dtype: string
- name: topic_desc
dtype: string
- name: clarification_need
dtype: int32
- name: facet_id
dtype: string
- name: facet_desc
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
config_name: conv_ai_3
splits:
- name: train
num_examples: 9176
- name: validation
num_examples: 2313
---
# Dataset Card for d0rj/conv_ai_3_ru
## Dataset Description
- **Homepage:** https://github.com/aliannejadi/ClariQ
- **Repository:** https://github.com/aliannejadi/ClariQ
- **Paper:** https://arxiv.org/abs/2009.11352
### Dataset Summary
This is translated version of [conv_ai_3](https://huggingface.co/datasets/conv_ai_3) dataset to Russian language.
### Languages
Russian (translated from English).
## Dataset Structure
### Data Fields
- `topic_id`: the ID of the topic (`initial_request`).
- `initial_request`: the query (text) that initiates the conversation.
- `topic_desc`: a full description of the topic as it appears in the TREC Web Track data.
- `clarification_need`: a label from 1 to 4, indicating how much it is needed to clarify a topic. If an `initial_request` is self-contained and would not need any clarification, the label would be 1. While if a `initial_request` is absolutely ambiguous, making it impossible for a search engine to guess the user's right intent before clarification, the label would be 4.
- `facet_id`: the ID of the facet.
- `facet_desc`: a full description of the facet (information need) as it appears in the TREC Web Track data.
- `question_id`: the ID of the question.
- `question`: a clarifying question that the system can pose to the user for the current topic and facet.
- `answer`: an answer to the clarifying question, assuming that the user is in the context of the current row (i.e., the user's initial query is `initial_request`, their information need is `facet_desc`, and `question` has been posed to the user).
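The `clarification_need` label lends itself to a simple gating rule for dialogue systems; the threshold below is an illustrative choice, not part of the dataset:

```python
def needs_clarification(clarification_need: int, threshold: int = 2) -> bool:
    """Labels run from 1 (self-contained request) to 4 (absolutely
    ambiguous); treat anything above `threshold` as worth a question."""
    return clarification_need > threshold


# Invented topics for illustration only.
topics = [
    {"initial_request": "Tell me about wildlife", "clarification_need": 4},
    {"initial_request": "Opening hours of the Louvre on May 1st", "clarification_need": 1},
]
to_clarify = [t["initial_request"] for t in topics
              if needs_clarification(t["clarification_need"])]
```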
### Citation Information
```
@misc{aliannejadi2020convai3,
    title={ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ)},
    author={Mohammad Aliannejadi and Julia Kiseleva and Aleksandr Chuklin and Jeff Dalton and Mikhail Burtsev},
    year={2020},
    eprint={2009.11352},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
Circularmachines/batch_indexing_machine_230529_004 | 2023-05-29T11:49:23.000Z | [
"region:us"
] | Circularmachines | null | null | 0 | 3 | 2023-05-29T11:03:29 |
---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 163377382.0
num_examples: 720
download_size: 163389369
dataset_size: 163377382.0
---
# Dataset Card for "batch_indexing_machine_230529_004"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TigerResearch/tigerbot-wiki-qa-bart-en-10k | 2023-10-07T05:46:02.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | 0 | 3 | 2023-05-30T15:07:55 |
---
license: apache-2.0
language:
- en
---
English Wikipedia-style question answering data for [Tigerbot](https://github.com/TigerResearch/TigerBot).

Original source: [https://huggingface.co/datasets/michaelthwan/oa_wiki_qa_bart_10000row](https://huggingface.co/datasets/michaelthwan/oa_wiki_qa_bart_10000row)
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-wiki-qa-bart-en-10k')
```
TigerResearch/tigerbot-earning-plugin | 2023-06-01T10:19:33.000Z | [
"language:zh",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | 1 | 3 | 2023-05-30T15:23:08 |
---
license: apache-2.0
language:
- zh
---
Raw external-knowledge data used by the [Tigerbot](https://github.com/TigerResearch/TigerBot) model during its rethink step, earnings-report category.
- 2,500 earnings reports in total, split and stored by paragraph after extraction
- Publication dates range from 2022-02-28 to 2023-05-10
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-earning-plugin')
```
GIZ/sector_data | 2023-05-31T16:03:36.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"climate",
"policy",
"region:us"
] | GIZ | null | null | 0 | 3 | 2023-05-31T08:18:52 |
---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
tags:
- climate
- policy
---
This dataset is curated by the [GIZ Data Service Center](https://www.giz.de/expertise/html/63018.html) for **multi-label sector classification** of text. The source data comes from [Climatewatchdata](https://www.climatewatchdata.org/data-explorer/historical-emissions?historical-emissions-data-sources=climate-watch&historical-emissions-gases=all-ghg&historical-emissions-regions=All%20Selected&historical-emissions-sectors=total-including-lucf%2Ctotal-including-lucf&page=1) and TRACS (GIZ).
Specifications
- Dataset size: ~10k
- Average text length : 50 words
- Language: English
Sectors Included:
<pre><b>Agriculture,Buildings, Coastal Zone, Disaster Risk Management (DRM), Economy-wide, Energy, Environment, Health, Industries, LULUCF/Forestry, Social Development, Transport, Urban, Waste, Water</b> </pre>
Because the sectors are imbalanced in their True-label representation, additional columns group the sectors by how often they occur:
- set0: [Agriculture,Energy,LULUCF/Forestry,Water,Environment] `count > 2000`
- set1:[Social Development,Transport,Urban,Economy-wide,Disaster Risk Management (DRM)] `2000 >count > 1000`
- set2: [Coastal Zone, Buildings, Health, Waste, Industries] `count < 1000`
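The set0/set1/set2 grouping above can be expressed as a small bucketing function; the treatment of the exact boundary counts (2000 and 1000) is an assumption, since the card only gives strict inequalities:

```python
def frequency_set(true_count: int) -> str:
    """Bucket a sector by its True-label count, mirroring the thresholds
    above (boundary values are assigned to the lower-frequency set,
    an illustrative choice)."""
    if true_count > 2000:
        return "set0"
    if true_count > 1000:
        return "set1"
    return "set2"
```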
rcds/swiss_leading_decision_summarization | 2023-07-20T07:38:30.000Z | [
"task_categories:summarization",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"r... | rcds | This dataset contains court decisions for the swiss ruling summarization task. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 3 | 3 | 2023-05-31T08:35:26 |
---
license: cc-by-sa-4.0
annotations_creators:
- machine-generated
language:
- de
- fr
- it
language_creators:
- expert-generated
multilinguality:
- multilingual
pretty_name: Leading Decision Summarization
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
---
# Dataset Card for Leading Decision Summarization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains the full text and the official summary (regeste) of Swiss Federal Supreme Court leading decisions.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages, three of which (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents|
|------------|------------|--------------------|
| German | **de** | 12K |
| French | **fr** | 5K |
| Italian | **it** | 835 |
## Dataset Structure
- decision_id: unique identifier for the decision
- header: a short header for the decision
- regeste: the summary of the leading decision
- text: the main text of the leading decision
- law_area: area of law of the decision
- law_sub_area: sub-area of law of the decision
- language: language of the decision
- year: year of the decision
- court: court of the decision
- chamber: chamber of the decision
- canton: canton of the decision
- region: region of the decision
### Data Fields
[More Information Needed]
### Data Instances
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [Joel Niklaus](https://niklaus.ai) for adding this dataset.
rcds/swiss_citation_extraction | 2023-08-31T12:22:28.000Z | [
"task_categories:token-classification",
"size_categories:100K<n<1M",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"region:us"
] | rcds | This dataset contains court decision for cit ex task. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 0 | 3 | 2023-06-01T08:32:26 |
---
license: cc-by-sa-4.0
task_categories:
- token-classification
language:
- de
- fr
- it
pretty_name: Swiss Citation Extraction
size_categories:
- 100K<n<1M
---
# Dataset Card for Swiss Citation Extraction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Citation Extraction is a multilingual, diachronic dataset of 131K Swiss Federal Supreme Court (FSCS) cases. It supports citation extraction, a challenging token classification task.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages, three of which (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents |
|------------|------------|----------------------|
| German | **de** | 85K |
| French | **fr** | 38K |
| Italian | **it** | 8K |
## Dataset Structure
### Data Fields
```
decision_id:
considerations:
NER_labels: token-level tags in IOB format.
  CITATION refers to a case citation or a reference to another court decision.
  LAW indicates a reference to a specific law.
  O is used for tokens that fall under neither of the previous two labels.
  In accordance with the IOB format, each tag apart from 'O' carries a 'B-' prefix if it marks the beginning of the span, or an 'I-' prefix if it is inside or at the end of the span.
law_area: (string)
language: (string)
year: (int64)
chamber: (string)
region: (string)
```
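The IOB scheme used for `NER_labels` can be decoded into labeled spans with a few lines; the token sequence below is an invented illustrative example, not a row from the dataset:

```python
def iob_spans(tokens, tags):
    """Collect (label, text) spans from IOB tags such as B-CITATION,
    I-LAW, and O."""
    spans, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((label, " ".join(current)))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:  # 'O' (or a stray 'I-') closes any open span
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:
        spans.append((label, " ".join(current)))
    return spans


tokens = ["vgl.", "BGE", "123", "II", "42", "sowie", "Art.", "8", "ZGB"]
tags = ["O", "B-CITATION", "I-CITATION", "I-CITATION", "I-CITATION",
        "O", "B-LAW", "I-LAW", "I-LAW"]
```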
### Data Instances
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0, which complies with the court's licensing terms (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf).
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [arXiv preprint](https://arxiv.org/abs/2306.09237):
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
ltg/norec_tsa | 2023-08-10T13:59:18.000Z | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:no",
"language:nb",
"language:nn",
"license:cc-by-nc-4.0",
"region:us"
] | ltg | null | null | 0 | 3 | 2023-06-01T10:30:41 | ---
language:
- 'no'
- nb
- nn
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
task_categories:
- token-classification
pretty_name: NoReC TSA
dataset_info:
- config_name: binary
features:
- name: idx
dtype: string
- name: tokens
sequence: string
- name: tsa_tags
sequence: string
splits:
- name: train
num_bytes: 2296476
num_examples: 8634
- name: validation
num_bytes: 411562
num_examples: 1531
- name: test
num_bytes: 346288
num_examples: 1272
download_size: 898904
dataset_size: 3054326
homepage: https://github.com/ltgoslo/norec_tsa
citation: "@InProceedings{OvrMaeBar20,\n author = {Lilja Øvrelid and Petter Mæhlum\
\ and Jeremy Barnes and Erik Velldal},\n title = {A Fine-grained Sentiment Dataset\
\ for {N}orwegian},\n booktitle = {{Proceedings of the 12th Edition of the Language\
\ Resources and Evaluation Conference}},\n year = 2020,\n address = \"Marseille,\
\ France, 2020\"\n}"
- config_name: intensity
features:
- name: idx
dtype: string
- name: tokens
sequence: string
- name: tsa_tags
sequence: string
splits:
- name: train
num_bytes: 2316306
num_examples: 8634
- name: validation
num_bytes: 414972
num_examples: 1531
- name: test
num_bytes: 349228
num_examples: 1272
download_size: 904339
dataset_size: 3080506
homepage: https://github.com/ltgoslo/norec_tsa
citation: "@InProceedings{OvrMaeBar20,\n author = {Lilja Øvrelid and Petter Mæhlum\
\ and Jeremy Barnes and Erik Velldal},\n title = {A Fine-grained Sentiment Dataset\
\ for {N}orwegian},\n booktitle = {{Proceedings of the 12th Edition of the Language\
\ Resources and Evaluation Conference}},\n year = 2020,\n address = \"Marseille,\
\ France, 2020\"\n}"
configs:
- config_name: binary
data_files:
- split: train
path: binary/train-*
- split: validation
path: binary/validation-*
- split: test
path: binary/test-*
- config_name: intensity
data_files:
- split: train
path: intensity/train-*
- split: validation
path: intensity/validation-*
- split: test
path: intensity/test-*
---
# Dataset Card for NoReC TSA
## Dataset Description
<!--- **Homepage:** --->
- **Repository:**
https://github.com/ltgoslo/norec_tsa
- **Paper:**
[A Fine-Grained Sentiment Dataset for Norwegian](https://aclanthology.org/2020.lrec-1.618/)
<!---
- **Leaderboard:**
- **Point of Contact:**
--->
### Dataset Summary
The dataset contains tokenized Norwegian sentences where each token is tagged for sentiment expressed towards that token. The dataset is derived from the manually annotated [NoReC_fine](https://github.com/ltgoslo/norec_fine) with rich annotations for each sentiment expression in the texts.
The texts are a subset of the Norwegian Review Corpus [NoReC](https://github.com/ltgoslo/norec).
### Supported Tasks and Leaderboards
[NorBench](https://github.com/ltgoslo/norbench) provides TSA evaluation scripts using this dataset, and a leaderboard comparing large language models for downstream NLP tasks in Norwegian.
### Languages
Norwegian, predominantly the Bokmål written variant.
| variant | split | sents | docs |
|:-----|:--------|--------:|-------:|
| nb | dev | 1531 | 44 |
| nb | test | 1272 | 47 |
| nb | train | 8556 | 323 |
| nn | train | 78 | 4 |
## Dataset Structure
The dataset comes in two flavours:
- the `binary` configuration yields labels with binary Positive/Negative sentiment polarity
- the `intensity` configuration additionally encodes sentiment intensity: 1 (Slight), 2 (Standard), or 3 (Strong)

The config is mandatory when loading the dataset and can be passed as a second positional parameter, e.g. `tsa_data = load_dataset("ltg/norec_tsa", "binary")`.
The dataset comes with predefined train, dev (validation) and test splits.
### Data Instances
Config "binary" example instance:
```
{'idx': '701363-08-02',
'tokens': ['Vi', 'liker', 'det', '.'],
'tsa_tags': ['O', 'O', 'B-targ-Positive', 'O']}
```
Config "intensity" example instance:
```
{'idx': '701363-08-02',
'tokens': ['Vi', 'liker', 'det', '.'],
'tsa_tags': ['O', 'O', 'B-targ-Positive-2', 'O']}
```
### Data Fields
- idx (str): Unique document-and-sentence identifier from [NoReC_fine](https://github.com/ltgoslo/norec_fine). The 6-digit document identifier can also be used to look up the text and its metadata in [NoReC](https://github.com/ltgoslo/norec).
- tokens (List[str]): List of the tokens in the sentence.
- tsa_tags (List[str]): List of the tags for each token in BIO format. There is no integer representation of these in the dataset.
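Since the `intensity` tags extend the `binary` ones with a numeric suffix, the binary view can be recovered by stripping that suffix. This is an illustrative helper, not an official conversion script:

```python
import re

def intensity_to_binary(tags):
    """Strip the trailing -1/-2/-3 intensity suffix, e.g. B-targ-Positive-2 -> B-targ-Positive."""
    return [re.sub(r"-[123]$", "", t) for t in tags]

print(intensity_to_binary(["O", "O", "B-targ-Positive-2", "O"]))
# ['O', 'O', 'B-targ-Positive', 'O']
```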
### Data Splits
```
DatasetDict({
test: Dataset({
features: ['idx', 'tokens', 'tsa_tags'],
num_rows: 1272
})
train: Dataset({
features: ['idx', 'tokens', 'tsa_tags'],
num_rows: 8634
})
validation: Dataset({
features: ['idx', 'tokens', 'tsa_tags'],
num_rows: 1531
})
})
```
## Dataset Creation
### Curation Rationale
The sentiment expressions and targets are annotated in NoReC_fine according to its [annotation guidelines](https://github.com/ltgoslo/norec_fine/blob/master/annotation_guidelines/guidelines.md)
Since a sentiment target may be the target of several sentiment expressions, these are resolved to a final sentiment polarity (and intensity) using the conversion script in [NoReC_tsa](https://github.com/ltgoslo/norec_tsa). There is no "mixed" sentiment category: when a target receives both positive and negative sentiment, the strongest wins, and in the event of a tie, the last sentiment wins.
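The resolution rule (strongest intensity wins; on a tie, the later expression wins) can be sketched as follows. This function is illustrative only and is not the actual conversion script:

```python
def resolve_polarity(expressions):
    """expressions: (polarity, intensity) pairs targeting the same span, in document
    order. Strongest intensity wins; ties go to the later expression."""
    best = None
    for polarity, intensity in expressions:
        if best is None or intensity >= best[1]:  # >= so a tie prefers the later one
            best = (polarity, intensity)
    return best

print(resolve_polarity([("Positive", 2), ("Negative", 2)]))  # ('Negative', 2) - tie, last wins
print(resolve_polarity([("Positive", 3), ("Negative", 1)]))  # ('Positive', 3) - strongest wins
```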
### Source Data
A subset of the Norwegian Review Corpus with its sources and preprocessing described [here](https://github.com/ltgoslo/norec).
<!---
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
--->
### Discussion of Biases
The professional review texts in NoReC, from which NoReC_tsa is drawn, come from a fixed set of Norwegian publishing channels and a fixed timespan that can be explored in the [NoReC metadata](https://raw.githubusercontent.com/ltgoslo/norec/master/data/metadata.json). Both language usage and the sentiments expressed could have been more diverse with a more diverse set of source texts.
<!---
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
--->
### Licensing Information
The data, being derived from [NoReC](https://github.com/ltgoslo/norec), is distributed under a Creative Commons Attribution-NonCommercial licence (CC BY-NC 4.0), access the full license text here: https://creativecommons.org/licenses/by-nc/4.0/
The licence is motivated by the need to prevent third parties from redistributing the original reviews for commercial purposes. Note that machine-learned models, extracted lexicons, embeddings, and similar resources created on the basis of NoReC are not considered to contain the original data, and so can be freely used for commercial purposes despite the non-commercial condition.
### Citation Information
```
@InProceedings{OvrMaeBar20,
author = {Lilja Øvrelid and Petter Mæhlum and Jeremy Barnes and Erik Velldal},
title = {A Fine-grained Sentiment Dataset for {N}orwegian},
booktitle = {{Proceedings of the 12th Edition of the Language Resources and Evaluation Conference}},
year = 2020,
address = {Marseille, France, 2020}
}
```
<!---
### Contributions
[More Information Needed]
--->
tasksource/zero-shot-label-nli | 2023-06-23T14:48:53.000Z | [
"task_categories:zero-shot-classification",
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:other",
"region:us"
] | tasksource | null | null | 4 | 3 | 2023-06-02T11:33:57 | ---
license: other
task_categories:
- zero-shot-classification
- text-classification
task_ids:
- natural-language-inference
language:
- en
dataset_info:
features:
- name: labels
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 551417533
num_examples: 1090333
- name: validation
num_bytes: 10825569
num_examples: 14419
- name: test
num_bytes: 9738922
num_examples: 14680
download_size: 302498339
dataset_size: 571982024
---
[tasksource](https://github.com/sileod/tasksource) classification tasks recast as natural language inference.
This dataset is intended to improve label understanding in [zero-shot classification HF pipelines](https://huggingface.co/docs/transformers/main/main_classes/pipelines#transformers.ZeroShotClassificationPipeline).
Inputs that are text pairs are separated by a newline (`\n`).
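As an illustration of the recasting idea, a labeled classification example can be turned into an NLI pair by using the input text as the premise and a label as the hypothesis. The template below is a hedged sketch of the general pattern, not the exact one used to build this dataset:

```python
def recast_as_nli(text, gold_label, candidate_label):
    # Hypothetical template: premise is the input text, hypothesis names a candidate label.
    premise = text  # for text-pair tasks, the two texts would be joined with "\n"
    hypothesis = candidate_label
    label = "entailment" if candidate_label == gold_label else "contradiction"
    return {"premise": premise, "hypothesis": hypothesis, "labels": label}

example = recast_as_nli("I have a problem with my iphone!", "urgent", "not urgent")
print(example["labels"])  # contradiction
```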
```python
from transformers import pipeline
classifier = pipeline(model="sileod/deberta-v3-base-tasksource-nli")
classifier(
"I have a problem with my iphone that needs to be resolved asap!!",
candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
)
```
[deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli) now includes `label-nli` in its training mix (a relatively small portion, to keep the model general). Note that NLI models work for label-like zero-shot classification even without task-specific supervision ([Yin et al., 2019](https://aclanthology.org/D19-1404.pdf)).
```
@article{sileo2023tasksource,
title={tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation},
author={Sileo, Damien},
year={2023}
}
```
sam-mosaic/wizard_vicuna_unfiltered_chatml | 2023-07-18T00:28:10.000Z | [
"language:en",
"region:us"
] | sam-mosaic | null | null | 1 | 3 | 2023-06-03T16:17:06 | ---
language: en
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 208337670.38355604
num_examples: 87708
- name: test
num_bytes: 712606.6164439596
num_examples: 300
download_size: 101987390
dataset_size: 209050277.0
---
# Dataset Card for "wizard_vicuna_unfiltered_chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lewtun/sharegpt_prompts_annotated | 2023-06-04T20:12:37.000Z | [
"region:us"
] | lewtun | null | null | 0 | 3 | 2023-06-04T20:10:16 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: index
dtype: int64
- name: annotator
dtype: string
- name: timestamp
dtype: timestamp[ns]
- name: rating
dtype: string
- name: tags
sequence: string
splits:
- name: train
num_bytes: 669141
num_examples: 1096
- name: no_code
num_bytes: 669141
num_examples: 1096
download_size: 415342
dataset_size: 1338282
---
# Dataset Card for "sharegpt_prompts_annotated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
codeparrot/conala-mined-curated | 2023-06-13T15:56:31.000Z | [
"doi:10.57967/hf/0755",
"region:us"
] | codeparrot | null | null | 6 | 3 | 2023-06-05T08:02:12 | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: parent_answer_post_id
dtype: int64
- name: prob
dtype: float64
- name: snippet
dtype: string
- name: intent
dtype: string
- name: rewritten_intent
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 136332874
num_examples: 593891
download_size: 94688053
dataset_size: 136332874
---
# Conala-mined-curated
Conala-mined-curated is a dataset based on the mined subset of the [CoNaLa dataset](https://huggingface.co/datasets/neulab/conala/viewer/mined/train).
CoNaLa is a dataset crawled from Stack Overflow. Part of it is filtered and curated to form a training set and a test set. However, the mined part is not comparably
post-processed. It is a set of 600K examples that we decided to work on.
## Dataset description
The conala datasets have 3 columns of interest. We give their description as provided by the [authors](https://conala-corpus.github.io)
- *intent* : Natural Language intent (i.e., the title of a Stack Overflow question)
- *snippet* : A code snippet that implements the intent. This is the output of systems in the challenge.
- *rewritten_intent* : Crowdsourced revised intents that try to better reflect the full meaning of the code, typically done by incorporating variable names and function arguments that appeared in the code into the intent. This is the input to be used by systems in the CoNaLa challenge.
For instruction fine-tuning, we would like to train a model to map the *rewritten_intent* to the *snippet*. However, the mined subset does not have the
column *rewritten_intent*, and *intent* is too vague to be described as an instruction, so we had to find a way to build the column *rewritten_intent* for the mined subset.
That is exactly what was done in order to build this dataset.
## Method
The most valuable information we have in order to recover the column *rewritten_intent* is in the columns *intent* and *snippet*. Fortunately, we also have the training set and the test set
of CoNaLa, which are labeled. This means we have a view of what a high-quality triplet (*intent*, *rewritten_intent*, *snippet*) looks like. We had the idea to build a Seq2Seq model whose role
would be to reconstruct the *rewritten_intent* based on the concatenation [*intent*, *snippet*].
More precisely, we fine-tuned [google UL2](https://huggingface.co/google/ul2) to solve this task.
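A hedged sketch of the input/target formatting for such a Seq2Seq model follows; the exact concatenation format actually used for UL2 is an assumption here, as is the helper name:

```python
def make_seq2seq_example(intent, snippet, rewritten_intent=None):
    # Hypothetical formatting: the real concatenation fed to UL2 may differ.
    source = f"intent: {intent} snippet: {snippet}"
    return {"source": source, "target": rewritten_intent}

ex = make_seq2seq_example(
    "How do I sort a list in reverse?",
    "sorted(l, reverse=True)",
    "sort list `l` in descending order",
)
print(ex["source"])
```

At inference time on the mined subset, *rewritten_intent* is unknown, so only the source string is built and the model generates the target.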
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("codeparrot/conala-mined-curated")
dataset
DatasetDict({
train: Dataset({
features: ['question_id', 'parent_answer_post_id', 'prob', 'snippet', 'intent', 'rewritten_intent', 'id'],
num_rows: 593891
})
})
```
## Additional resources
- Official site of the [CoNala-corpus](https://conala-corpus.github.io).
- [CoNaLa's card](https://huggingface.co/datasets/neulab/conala).
- [Github repository](https://github.com/ArmelRandy/Conala) of our method.
d0rj/rlhf-reward-datasets-ru | 2023-06-05T14:51:19.000Z | [
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:yitingxie/rlhf-reward-datasets",
"language:ru",
"license:mit",
"human-feedback",
"ChatGPT",
"reward",
"region:us"
] | d0rj | null | null | 1 | 3 | 2023-06-05T14:47:13 | ---
language_creators:
- translated
language:
- ru
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
pretty_name: HH for RLHF (ru)
source_datasets:
- yitingxie/rlhf-reward-datasets
license: mit
tags:
- human-feedback
- ChatGPT
- reward
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 151564655.0
num_examples: 76256
- name: test
num_bytes: 6093563.0
num_examples: 5103
download_size: 78860063
dataset_size: 157658218.0
---
# Dataset Card for "rlhf-reward-datasets-ru"
This is the [yitingxie/rlhf-reward-datasets dataset](https://huggingface.co/datasets/yitingxie/rlhf-reward-datasets) translated into Russian.
dinhanhx/coco-2017-vi | 2023-06-06T13:59:18.000Z | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"language:vi",
"language:en",
"license:unknown",
"coco",
"coco-2017-vi",
"Vietnamese",
"arxiv:2002.00175",
"region:us"
] | dinhanhx | null | null | 0 | 3 | 2023-06-06T11:02:46 | ---
language:
- vi
- en
pretty_name: COCO 2017 image captions in Vietnamese
source-datasets:
- ms coco
tags:
- coco
- coco-2017-vi
- Vietnamese
license: unknown
task_categories:
- image-to-text
task_ids:
- image-captioning
---
# COCO 2017 image captions in Vietnamese
The dataset was first introduced in [dinhanhx/VisualRoBERTa](https://github.com/dinhanhx/VisualRoBERTa/tree/main). I used VinAI tools to translate the [COCO 2017 image captions](https://cocodataset.org/#download) (2017 Train/Val annotations) from English to Vietnamese, then merged the [UIT-ViIC](https://arxiv.org/abs/2002.00175) dataset into it. To load the dataset, one can take a look at [this code in VisualRoBERTa](https://github.com/dinhanhx/VisualRoBERTa/blob/main/src/data.py#L22-L100).
I provide both the English original and the Vietnamese version (including UIT-ViIC).
⚠ Note:
- UIT-ViIC splits originate from `en/captions_train2017.json`. Therefore, I combined all UIT-ViIC splits and merged them into `vi/captions_train2017_trans.json`, producing `captions_train2017_trans_plus.json`.
- `vi/captions_train2017_trans.json` and `vi/captions_val2017_trans.json` are VinAI-translated from the ones in `en/`.
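The split merge described in the note can be sketched for COCO-style caption JSON as follows. This is a simplified illustration (field handling and deduplication are reduced to the essentials), not the actual merge script:

```python
def merge_coco_captions(base, extra):
    """Merge a COCO-style caption dict `extra` into `base`: annotations are
    appended, images are deduplicated by id. Simplified sketch."""
    merged = dict(base)
    known = {img["id"] for img in base["images"]}
    merged["images"] = base["images"] + [i for i in extra["images"] if i["id"] not in known]
    merged["annotations"] = base["annotations"] + extra["annotations"]
    return merged

base = {"images": [{"id": 1}], "annotations": [{"image_id": 1, "caption": "a"}]}
extra = {"images": [{"id": 1}, {"id": 2}], "annotations": [{"image_id": 2, "caption": "b"}]}
print(len(merge_coco_captions(base, extra)["annotations"]))  # 2
```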
[
-0.0272369384765625,
-0.031341552734375,
0.0277557373046875,
0.037689208984375,
-0.061553955078125,
0.0171661376953125,
0.004756927490234375,
-0.048797607421875,
0.0287933349609375,
0.054779052734375,
-0.035186767578125,
-0.0361328125,
-0.0345458984375,
0.02... |
saldra/sakura_japanese_dataset | 2023-06-08T11:31:06.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:ja",
"license:other",
"region:us"
] | saldra | null | null | 8 | 3 | 2023-06-07T05:44:23 | ---
license: other
task_categories:
- question-answering
language:
- ja
pretty_name: sakura_japanese_dataset
size_categories:
- n<1K
---
# Sakura_dataset
A commercially usable, very small, high-quality Japanese dataset.
The categories are:
- commonsense_qa: common-sense questions
- Calc-ape210k: math problems
- japanese-commonsense-openqa: Japanese common-sense questions (written by the author)
The following datasets are used:
- [commonsense_qa](https://huggingface.co/datasets/commonsense_qa)
- [MU-NLPC/Calc-ape210k](https://huggingface.co/datasets/MU-NLPC/Calc-ape210k)
## LICENSE
This dataset is licensed under Database Contents License (DbCL) v1.0
## Update
Last Update : 2023-06-07
## Example Code
```
# Load the model
import os
from peft.utils.config import TaskType
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import peft
import transformers
import datasets
# Basic parameters
model_name = "rinna/japanese-gpt-neox-3.6b"
dataset = "saldra/sakura_japanese_dataset"
is_dataset_local = False
peft_name = "lora-rinna-3.6b-sakura_dataset"
output_dir = "lora-rinna-3.6b-sakura_dataset-results"
# Training parameters
eval_steps = 50 #200
save_steps = 400 #200
logging_steps = 400 #20
max_steps = 400 # 4881 for dolly
# Prepare the dataset
data = datasets.load_dataset(dataset)
CUTOFF_LEN = 512 # maximum context length
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = transformers.AutoModelForCausalLM.from_pretrained(
model_name,
device_map='auto',
load_in_8bit=True,
)
model.enable_input_require_grads()
model.gradient_checkpointing_enable()
config = peft.LoraConfig(
r=8,
lora_alpha=32,
lora_dropout=0.01,
inference_mode=False,
task_type=TaskType.CAUSAL_LM,
)
model = peft.get_peft_model(model, config)
# Tokenization
def tokenize(prompt, tokenizer):
result = tokenizer(
prompt,
truncation=True,
max_length=CUTOFF_LEN,
padding=False,
)
return {
"input_ids": result["input_ids"],
"attention_mask": result["attention_mask"],
}
# Prepare the prompt template
def generate_prompt(data_point):
result = f'### 指示:\n{data_point["instruction"]}\n\n### 回答:\n{data_point["output"]}'
    # For rinna/japanese-gpt-neox-3.6b, newlines must be converted to <NL>
result = result.replace('\n', '<NL>')
return result
VAL_SET_SIZE = 0.1 # fraction of data held out for validation (float)
# Prepare the training and validation data
train_val = data["train"].train_test_split(
test_size=VAL_SET_SIZE, shuffle=True, seed=42
)
train_data = train_val["train"]
train_data = train_data.shuffle().map(lambda x: tokenize(generate_prompt(x), tokenizer))
val_data = train_val["test"]
val_data = val_data.shuffle().map(lambda x: tokenize(generate_prompt(x), tokenizer))
trainer = transformers.Trainer(
model=model,
train_dataset=train_data,
eval_dataset=val_data,
args=transformers.TrainingArguments(
num_train_epochs=3,
learning_rate=3e-4,
logging_steps=logging_steps,
evaluation_strategy="steps",
save_strategy="steps",
max_steps=max_steps,
eval_steps=eval_steps,
save_steps=save_steps,
output_dir=output_dir,
report_to="none",
save_total_limit=3,
push_to_hub=False,
auto_find_batch_size=True
),
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False
trainer.train()
# Save the LoRA model
trainer.model.save_pretrained(peft_name)
print("Done!")
```
eastwind/semeval-2016-absa-reviews-arabic | 2023-06-07T13:09:16.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:ar",
"license:mit",
"region:us"
] | eastwind | null | null | 0 | 3 | 2023-06-07T12:22:40 | ---
license: mit
task_categories:
- text-classification
language:
- ar
pretty_name: SemEval 2016 Aspect Based Sentiment Analysis on Hotel Reviews
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Repository:** https://github.com/msmadi/ABSA-Hotels/tree/master
### Dataset Summary
An aspect-based sentiment analysis dataset of hotel reviews in Arabic.
### Languages
Arabic
### Licensing Information
The original dataset was licensed under MIT, so this one is released under MIT as well.
### Citation Information
Please cite this dataset and the original authors if you use it.
Yova/templama | 2023-06-15T10:28:42.000Z | [
"region:us"
] | Yova | null | null | 0 | 3 | 2023-06-07T14:55:39 | Entry not found
coding-assistant-custom/mini-code-corpus | 2023-06-08T02:05:04.000Z | [
"region:us"
] | coding-assistant-custom | null | null | 1 | 3 | 2023-06-08T02:04:59 | ---
dataset_info:
features:
- name: reponame
dtype: string
- name: filepath
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 475641
num_examples: 139
download_size: 151005
dataset_size: 475641
---
# Dataset Card for "mini-code-corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tasksource/scone | 2023-06-08T08:58:32.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"license:cc0-1.0",
"arxiv:2305.19426",
"region:us"
] | tasksource | null | null | 0 | 3 | 2023-06-08T07:22:53 | ---
license: cc0-1.0
task_ids:
- natural-language-inference
task_categories:
- text-classification
dataset_info:
features:
- name: sentence1_edited
dtype: string
- name: sentence2_edited
dtype: string
- name: gold_label_edited
dtype: string
splits:
- name: train
num_bytes: 694572
num_examples: 5010
- name: test
num_bytes: 149006
num_examples: 1000
download_size: 114079
dataset_size: 843578
---
https://github.com/selenashe/ScoNe
NLI subset, original part (excluding one-scope)
```
@misc{she2023scone,
title={ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning},
author={Jingyuan Selena She and Christopher Potts and Samuel R. Bowman and Atticus Geiger},
year={2023},
eprint={2305.19426},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
tasksource/monli | 2023-06-08T07:30:54.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"region:us"
] | tasksource | null | null | 0 | 3 | 2023-06-08T07:28:30 | ---
task_categories:
- text-classification
language:
- en
task_ids:
- natural-language-inference
---
https://github.com/atticusg/MoNLI
```
@inproceedings{geiger-etal-2020-neural,
address = {Online},
author = {Geiger, Atticus and Richardson, Kyle and Potts, Christopher},
booktitle = {Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP},
doi = {10.18653/v1/2020.blackboxnlp-1.16},
month = nov,
pages = {163--173},
publisher = {Association for Computational Linguistics},
title = {Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation},
url = {https://www.aclweb.org/anthology/2020.blackboxnlp-1.16},
year = {2020}}
```
davanstrien/on_the_books_example | 2023-06-08T13:41:08.000Z | [
"task_categories:text-classification",
"language:en",
"license:cc-by-3.0",
"lam",
"legal",
"region:us"
] | davanstrien | null | null | 0 | 3 | 2023-06-08T13:40:24 | ---
license: cc-by-3.0
task_categories:
- text-classification
language:
- en
tags:
- lam
- legal
pretty_name: On the Books training data
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
rlacombe/ICCS | 2023-06-11T16:54:10.000Z | [
"task_categories:zero-shot-classification",
"task_categories:text-classification",
"task_categories:feature-extraction",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"climate",
"region:us"
] | rlacombe | null | null | 1 | 3 | 2023-06-09T21:36:06 | ---
license: mit
task_categories:
- zero-shot-classification
- text-classification
- feature-extraction
language:
- en
tags:
- climate
pretty_name: ICCS (IPCC Confidence in Climate Statements)
size_categories:
- 1K<n<10K
---
# IPCC Confidence in Climate Statements
_What do LLMs know about climate? Let's find out!_
## ICCS Dataset
We introduce the **ICCS (IPCC Confidence in Climate Statements) dataset**, a novel, curated, expert-labeled natural-language dataset of 8094 statements extracted or paraphrased from the IPCC Assessment Report 6: the [Working Group I report](https://www.ipcc.ch/report/ar6/wg1/), the [Working Group II report](https://www.ipcc.ch/report/ar6/wg2/), and the [Working Group III report](https://www.ipcc.ch/report/ar6/wg3/).
Each statement is labeled with its IPCC report source, its page number in the report PDF, and its confidence level (`low`, `medium`, `high`, or `very high`) as assessed by IPCC climate scientists based on available evidence and agreement among their peers.
## Confidence Labels
The authors of the United Nations Intergovernmental Panel on Climate Change (IPCC) reports have developed a structured framework to communicate the confidence and uncertainty levels of statements regarding our knowledge of climate change ([Mastrandrea, 2010](https://link.springer.com/article/10.1007/s10584-011-0178-6)).
Our dataset leverages this distinctive and consistent approach to labelling uncertainty across topics, disciplines, and report chapters, to help NLP and climate communication researchers evaluate how well LLMs can assess human expert confidence in a set of climate science statements from the IPCC reports.

Source: [IPCC AR6 Working Group I report](https://www.ipcc.ch/report/ar6/wg1/)
## Dataset Construction
To construct the dataset, we retrieved the complete raw text from each of the three IPCC report PDFs available online using the open-source library [pypdf2](https://pypi.org/project/PyPDF2/). We then normalized the whitespace, tokenized the text into sentences using [NLTK](https://www.nltk.org/), and used a regex search to filter for complete sentences ending in a parenthetical confidence label, of the form _sentence (low|medium|high|very high confidence)_. The final ICCS dataset contains 8094 labeled sentences.
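The regex-filtering step described above can be sketched as follows. The exact pattern the authors used is not given in the card, so the pattern and helper function below are illustrative assumptions:

```python
import re

# Hypothetical pattern for "sentence (low|medium|high|very high confidence)";
# this is a sketch, not the authors' actual implementation.
CONFIDENCE_RE = re.compile(
    r"^(?P<statement>[A-Z].*?)\s*"
    r"\((?P<level>low|medium|high|very high) confidence\)\.?$"
)

def extract_labeled_statement(sentence: str):
    """Return (statement, confidence_level) if the sentence ends with a
    parenthetical IPCC confidence label, else None."""
    match = CONFIDENCE_RE.match(sentence.strip())
    if match is None:
        return None
    return match.group("statement"), match.group("level")

example = ("Global surface temperature will continue to increase until at "
           "least mid-century (very high confidence).")
print(extract_labeled_statement(example))
```

Sentences without a trailing confidence label simply return `None` and are filtered out.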
From the full 8094 labeled sentences, we further selected **300 statements to form a smaller and more tractable test dataset**. We performed a random selection of sentences within each report and confidence category, with the following objectives:
- Making the test set distribution representative of the confidence class distribution in the overall train set and within each report;
- Making the breakdown between source reports representative of the number of statements from each report;
- Making sure the test set contains at least 5 sentences from each class and from each source, to ensure our results are statistically robust.
Then, we manually reviewed and cleaned each sentence in the test set to provide for a fairer assessment of model capacity.
- We removed 26 extraneous references to figures, call-outs, boxes, footnotes, or subscript typos (`CO 2`);
- We split 19 compound statements with conflicting confidence sub-labels, and removed 6 extraneous mid-sentence labels of the same category as the end-of-sentence label;
- We added light context to 23 sentences, and replaced 5 sentences by others when they were meaningless outside of a longer paragraph;
- We removed qualifiers at the beginning of 29 sentences to avoid biasing classification (e.g. 'But...', 'In summary...', 'However...').
**The remaining 7794 sentences not allocated to the test split form our train split.**
Of note: while the IPCC reports use a five-level confidence scale, almost no `very low confidence` statement survives the peer review process into the final reports, so no statement of the form _sentence (very low confidence)_ was retrievable. We therefore built our dataset with only statements labeled as `low`, `medium`, `high`, and `very high` confidence.
## Code Download
The code to reproduce dataset collection and our LLM benchmarking experiments is [released on GitHub](https://github.com/rlacombe/Climate-LLMs).
## Paper
We use this dataset to evaluate how recent LLMs fare at classifying the scientific confidence associated with each statement in a statistically representative, carefully constructed test split of the dataset.
We show that `gpt3.5-turbo` and `gpt4` assess the correct confidence level with reasonable accuracy even in the zero-shot setting; but that, along with other language models we tested, they consistently overstate the certainty level associated with low and medium confidence labels. Models generally perform better on reports before their knowledge cutoff, and demonstrate intuitive classifications on a baseline of non-climate statements. However, we caution it is still not fully clear why these models perform well, and whether they may also pick up on linguistic cues within the climate statements and not just prior exposure to climate knowledge and/or IPCC reports.
Our results have implications for climate communications and the use of generative language models in knowledge retrieval systems. We hope the ICCS dataset provides the NLP and climate sciences communities with a valuable tool with which to evaluate and improve model performance in this critical domain of human knowledge.
Pre-print upcoming. | 5,681 | [
...] |
tasksource/apt | 2023-08-10T13:42:21.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:multi-input-text-classification",
"language:en",
"license:unknown",
"region:us"
] | tasksource | null | null | 0 | 3 | 2023-06-10T19:01:04 | ---
task_categories:
- text-classification
language:
- en
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring
- multi-input-text-classification
license: unknown
---
https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt
```
@inproceedings{nighojkar-licato-2021-improving,
title = "Improving Paraphrase Detection with the Adversarial Paraphrasing Task",
author = "Nighojkar, Animesh and
Licato, John",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.552",
doi = "10.18653/v1/2021.acl-long.552",
pages = "7106--7116",
}
``` | 921 | [
...] |
shellypeng/cartoon-captioned-datasets-salesforce-blip | 2023-06-13T06:35:39.000Z | [
"code",
"region:us"
] | shellypeng | null | null | 0 | 3 | 2023-06-11T02:16:56 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2357028047.718
num_examples: 1907
download_size: 1774680464
dataset_size: 2357028047.718
tags:
- code
---
# Dataset Card for "cartoon-captioned-datasets-salesforce-blip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 447 | [
...] |
AlekseyScorpi/docs_on_several_languages | 2023-09-16T07:01:24.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"code",
"region:us"
] | AlekseyScorpi | null | null | 0 | 3 | 2023-06-11T13:50:31 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': az
'1': by
'2': cn
'3': en
'4': es
'5': fn
'6': gr
'7': jp
'8': ko
'9': kz
'10': la
'11': li
'12': mo
'13': 'no'
'14': pl
'15': ru
'16': ua
splits:
- name: train
num_bytes: 1893804579.79
num_examples: 1987
- name: test
num_bytes: 374568135
num_examples: 339
download_size: 2423302965
dataset_size: 2268372714.79
task_categories:
- text-classification
tags:
- code
size_categories:
- 1K<n<10K
---
# Dataset Card for "docs_on_several_languages"
This dataset is a collection of different images in different languages.
The set includes the following languages: Azerbaijani, Belarusian, Chinese, English, Estonian, Finnish, Georgian, Japanese, Korean, Kazakh, Latvian, Lithuanian, Mongolian, Norwegian, Polish, Russian, Ukrainian.
Each language has a corresponding class label, and each class is allocated at least 100 images across the dataset. This dataset was originally used for classifying the language of a document from its image, but I hope it can help you in other machine learning tasks.
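The class-label codes in the metadata appear to align with the language list in alphabetical order (note several codes are nonstandard, e.g. `es` for Estonian). A sketch of that mapping as a Python dict; the pairing is inferred from this card, not stated explicitly:

```python
# Assumed mapping from the dataset's class-label codes to language names,
# inferred by aligning the two alphabetically ordered lists in this card.
CODE_TO_LANGUAGE = {
    "az": "Azerbaijani", "by": "Belarusian", "cn": "Chinese",
    "en": "English",     "es": "Estonian",   "fn": "Finnish",
    "gr": "Georgian",    "jp": "Japanese",   "ko": "Korean",
    "kz": "Kazakh",      "la": "Latvian",    "li": "Lithuanian",
    "mo": "Mongolian",   "no": "Norwegian",  "pl": "Polish",
    "ru": "Russian",     "ua": "Ukrainian",
}

print(len(CODE_TO_LANGUAGE))  # 17 classes, matching the metadata
```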
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,474 | [
...] |
truehealth/medqa | 2023-06-12T11:22:24.000Z | [
"region:us"
] | truehealth | null | null | 1 | 3 | 2023-06-12T11:18:45 | Entry not found | 15 | [
...] |
arbml/Ashaar_diacritized | 2023-06-13T04:05:52.000Z | [
"region:us"
] | arbml | null | null | 0 | 3 | 2023-06-13T04:05:39 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1307555.018811609
num_examples: 23481
- name: test
num_bytes: 72669.7883203079
num_examples: 1305
- name: valid
num_bytes: 72669.7883203079
num_examples: 1305
download_size: 6698907
dataset_size: 1452894.5954522246
---
# Dataset Card for "Ashaar_diacritized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 523 | [
...] |
shhossain/webnovels | 2023-06-15T15:35:51.000Z | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"task_categories:feature-extraction",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | shhossain | null | null | 1 | 3 | 2023-06-13T05:06:28 | ---
license: mit
task_categories:
- text-classification
- zero-shot-classification
- feature-extraction
language:
- en
pretty_name: 'Novelupdates Dataset'
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: novel_id
dtype: int64
- name: url
dtype: string
- name: title
dtype: string
- name: associated_names
sequence: string
- name: img_url
dtype: string
- name: showtype
dtype: string
- name: genres
sequence: string
- name: tags
sequence: string
- name: description
dtype: string
- name: related_series
struct:
- name: related_series
list:
- name: title
dtype: string
- name: url
dtype: string
- name: total
dtype: int64
- name: recommendations
struct:
- name: recomendations
list:
- name: recommended_user_count
dtype: int64
- name: title
dtype: string
- name: url
dtype: string
- name: total
dtype: int64
- name: recommendation_lists
struct:
- name: list
list:
- name: title
dtype: string
- name: url
dtype: string
- name: total
dtype: int64
- name: rating
dtype: string
- name: language
dtype: string
- name: authors
sequence: string
- name: artists
sequence: string
- name: year
dtype: string
- name: status_coo
dtype: string
- name: licensed
dtype: string
- name: translated
dtype: string
- name: publishers
sequence: string
- name: en_pubs
sequence: string
- name: release_frequency
dtype: string
- name: weekly_rank
dtype: string
- name: monthly_rank
dtype: string
- name: all_time_rank
dtype: string
- name: monthly_rank_reading_list
dtype: string
- name: all_time_rank_reading_list
dtype: string
- name: total_reading_list_rank
dtype: string
- name: chapters
struct:
- name: chapters
list:
- name: title
dtype: string
- name: url
dtype: string
- name: total
dtype: int64
splits:
- name: train
num_bytes: 58948539.85115204
num_examples: 11770
- name: test
num_bytes: 14739639.148847958
num_examples: 2943
download_size: 22367283
dataset_size: 73688179.0
---
# Dataset Card for Novelupdates Webnovels
### Dataset Summary
This dataset contains information about webnovels from Novelupdates, a popular webnovel platform. It includes details such as novel ID, URL, title, associated names, cover image URL, show type, genres, tags, description, related series, recommendations, recommendation lists, rating, language, authors, artists, year, status, licensing information, translation status, publishers, release frequency, rankings, total reading list rank, and chapters.
### Supported Tasks and Leaderboards
The dataset can be used for various tasks such as text classification, zero-shot classification, and feature extraction. It currently does not have an established leaderboard.
### Languages
The dataset is primarily in English.
## Dataset Structure
### Data Instances
The dataset contains 14,713 data instances.
### Data Fields
The dataset includes the following fields:
- novel_id: integer
- url: string
- title: string
- associated_names: list of strings
- img_url: string
- showtype: string
- genres: list of strings
- tags: list of strings
- description: string
- related_series: struct
- related_series: list of structs
- title: string
- url: string
- total: integer
- recommendations: struct
- recommendations: list of structs
- recommended_user_count: integer
- title: string
- url: string
- total: integer
- recommendation_lists: struct
- list: list of structs
- title: string
- url: string
- total: integer
- rating: string
- language: string
- authors: list of strings
- artists: list of strings
- year: string
- status_coo: string
- licensed: string
- translated: string
- publishers: list of strings
- en_pubs: list of strings
- release_frequency: string
- weekly_rank: string
- monthly_rank: string
- all_time_rank: string
- monthly_rank_reading_list: string
- all_time_rank_reading_list: string
- total_reading_list_rank: string
- chapters: struct
- chapters: list of structs
- title: string
- url: string
- total: integer
### Data Splits
The dataset includes two splits:
- Train: 11.8K examples
- Test: 2.94K examples
## Dataset Creation
### Curation Rationale
The dataset was curated to provide a comprehensive collection of webnovel information from Novelupdates for various text analysis tasks.
### Source Data
#### Initial Data Collection and Normalization
The initial data was collected from the Novelupdates website and normalized for consistency and structure.
#### Who are the source language producers?
The source language producers are the authors and publishers of the webnovels.
### Annotations
#### Annotation process
The dataset does not contain explicit annotations. It consists of the information available on the Novelupdates website.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
The dataset does not include any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 5,666 | [
...] |
robinhad/databricks-dolly-15k-uk | 2023-06-13T05:43:34.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:uk",
"license:cc-by-sa-3.0",
"arxiv:2203.02155",
"region:us"
] | robinhad | null | null | 4 | 3 | 2023-06-13T05:36:45 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- uk
size_categories:
- 10K<n<100K
---
# Summary
`databricks-dolly-15k-uk` is an open source dataset based on the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) instruction-following dataset, machine-translated using the [facebook/m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) model.
Tasks covered include brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
Expect this dataset to contain grammatical errors and the usual pitfalls of machine translation.
<details>
<summary>Original Summary</summary>
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Ukrainian
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT.
Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including
the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using
information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly
instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors.
They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context`
field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
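(An illustrative aside, not part of the original summary: the recommended removal of bracketed citation markers from the `context` field can be done with a one-line regex, sketched below.)

```python
import re

def strip_citations(context: str) -> str:
    # Remove bracketed Wikipedia citation markers such as "[42]".
    return re.sub(r"\[\d+\]", "", context)

print(strip_citations("Paris is the capital of France.[12] It lies on the Seine.[3]"))
# -> "Paris is the capital of France. It lies on the Seine."
```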
# Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts,
this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper.
For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a
corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to
restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might
provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from
these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source,
human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including
academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization)
contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the
target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical
of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of
rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors.
</details> | 8,303 | [
...] |
ashnrk/cifar10_lt_r10_text | 2023-06-14T06:10:37.000Z | [
"region:us"
] | ashnrk | null | null | 0 | 3 | 2023-06-14T06:10:34 | ---
dataset_info:
features:
- name: img
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
- name: text_label
dtype: string
splits:
- name: train
num_bytes: 9133039.5
num_examples: 4084
download_size: 9126904
dataset_size: 9133039.5
---
# Dataset Card for "cifar10_lt_r10_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 673 | [
...] |
yyu/nyt-attrprompt | 2023-09-13T20:55:46.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"arxiv:2306.15895",
"region:us"
] | yyu | null | null | 0 | 3 | 2023-06-14T07:04:17 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: d
size_categories:
- 10K<n<100K
---
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
Check out the paper https://arxiv.org/abs/2306.15895 for details.
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt.
Please check our original paper for details. Moreover, we provide datasets generated by other LLM-based methods as follows:
- `regen.jsonl`: The training data generated by [ReGen](https://github.com/yueyu1030/ReGen).
- `regen_llm_augmented.jsonl`: The training data generated by ReGen, with the subtopics generated by the LLM.
- `progen.jsonl`: The training data generated by [ProGen](https://github.com/hkunlp/progen).
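The files above use the JSON Lines format, one JSON object per line. A minimal sketch of reading such a file; the field names below are made-up placeholders, not the actual schema:

```python
import io
import json

# Stand-in for open("train.jsonl"); the keys here are hypothetical.
sample = io.StringIO(
    '{"text": "Stocks rallied on Monday.", "label": 1}\n'
    '{"text": "A new exhibit opened downtown.", "label": 7}\n'
)

# Parse one example per non-empty line.
examples = [json.loads(line) for line in sample if line.strip()]
print(len(examples))  # number of parsed examples
```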
Please cite the original paper if you use this dataset for your study. Thanks!
```
@inproceedings{meng2019weakly,
title={Weakly-supervised hierarchical text classification},
author={Meng, Yu and Shen, Jiaming and Zhang, Chao and Han, Jiawei},
booktitle={Proceedings of the AAAI conference on artificial intelligence},
pages={6826--6833},
year={2019}
}
@article{yu2023large,
title={Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias},
author={Yu, Yue and Zhuang, Yuchen and Zhang, Jieyu and Meng, Yu and Ratner, Alexander and Krishna, Ranjay and Shen, Jiaming and Zhang, Chao},
journal={arXiv preprint arXiv:2306.15895},
year={2023}
}
``` | 1,785 | [
...] |
irodkin/celeba_with_llava_captions | 2023-07-12T15:14:57.000Z | [
"language:en",
"region:us"
] | irodkin | null | null | 0 | 3 | 2023-06-14T13:44:54 | ---
language: en
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: conditioning_image
dtype: image
splits:
- name: train
num_bytes: 576196360.392
num_examples: 36646
download_size: 257039500
dataset_size: 576196360.392
---
# Dataset Card for "celeba_with_llava_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 475 | [
...] |
sdmattpotter/pandassdcctest | 2023-06-15T17:04:24.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"politics",
"local government",
"region:us"
] | sdmattpotter | null | null | 0 | 3 | 2023-06-14T23:44:47 | ---
dataset_info:
features:
- name: ITEMNO.
dtype: string
- name: O
dtype: string
- name: '00000'
dtype: float64
- name: Motion/Second
dtype: string
- name: VOTE
dtype: string
- name: Recorder
dtype: string
- name: link
dtype: string
- name: action
dtype: string
- name: descript
dtype: string
- name: kind
dtype: string
- name: DateTimeDate
dtype: timestamp[ns]
- name: embeds
sequence: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 8933567
num_examples: 704
download_size: 6645047
dataset_size: 8933567
license: mit
task_categories:
- text-classification
language:
- en
tags:
- politics
- local government
pretty_name: sdcc
size_categories:
- 100K<n<1M
---
# Dataset Card for "pandassdcctest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 957 | [
...] |
BrianWan221/trial | 2023-06-15T20:54:27.000Z | [
"region:us"
] | BrianWan221 | null | null | 0 | 3 | 2023-06-15T17:33:55 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': HMI
'1': euv
splits:
- name: train
num_bytes: 1341052823.0
num_examples: 81
download_size: 1317503216
dataset_size: 1341052823.0
---
# Dataset Card for "trial"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 457 | [
...] |
MrbBakh/Sentiment140 | 2023-06-15T18:30:41.000Z | [
"region:us"
] | MrbBakh | null | null | 0 | 3 | 2023-06-15T18:30:20 | Entry not found | 15 | [
...] |
ai-habitat/ReplicaCAD_dataset | 2023-07-12T01:04:41.000Z | [
"license:cc-by-4.0",
"arxiv:2106.14405",
"region:us"
] | ai-habitat | null | null | 0 | 3 | 2023-06-16T16:13:53 | ---
license: cc-by-4.0
viewer: false
---
# What is ReplicaCAD?
[**Visit the ReplicaCAD Homepage**](https://aihabitat.org/datasets/replica_cad/)
The ReplicaCAD dataset is an artist recreation of the scanned “FRL apartment” variations from the Replica dataset.
This dataset is intended for use in the Habitat simulator for embodied in-home interaction tasks such as object re-arrangement.
All materials are licensed under the [Creative Commons Attribution 4.0 International (CC BY 4.0) Public License](https://creativecommons.org/licenses/by/4.0/).
## Dataset Contents:
We provide two dataset downloads with different properties: one suited to interactive simulation and the other for photorealistic visualization.
Note: Both downloadable datasets contain 84 of the 105 variations described in the paper and shown in the video with the remaining 21 scenes (1 macro variation and associated 20 micro variations) withheld as a test set for challenge evaluation.
### ReplicaCAD Interactive (this repository):
[132MB]
Intended for use with a PBR shader. Contains 1 empty scene and 6 re-creations of the scanned “FRL apartment” variations staged with both large furniture and small objects and ready for dynamic simulation in Habitat-sim. Also included are 84 (of 105) artist authored re-arrangements of large furniture (fully static placements except articulations) organized into 5 macro variations (as different tenants may organize the same apartment) each with an additional 20 micro variations (with a few pieces of furniture moved/swapped).
- 90+ 3D object assets with convex collision geometry and physical properties (mass, friction, restitution) as well as receptacle metadata for use generating object clutter (e.g. for rearrangement tasks).
- 6 stage (i.e., static background) assets emptied of all but architectural features (1 each for FRL apartment and the 5 macro variations).
- 6+ URDF assets defining articulated furniture and door properties including receptacle metadata for generating object clutter.
- 1 SceneDataset configuration file which aggregates all config and asset paths for one-line import in Habitat.
- .navmesh files (in navmeshes/ directory) for every scene computed for an agent with 0.3m radius (e.g. appropriate for a Fetch robot base) and additional .navmesh files (in navmeshes_default/ directory) computed with Habitat default agent parameters for optional use.
- 84 + 6 SceneDataset configuration files defining object metadata and scene layouts for easy use in the Habitat simulator referencing the Fetch tuned NavMeshes.
### ReplicaCAD with baked lighting:
[Get ReplicaCAD with baked lighting here](https://huggingface.co/datasets/ai-habitat/ReplicaCAD_baked_lighting) [525MB]
Contains the same 84 (of 105) artist authored re-arrangements of large furniture described in ReplicaCAD Interactive with synthetic global illumination baked into the textures for more photo-realistic visualization. All articulated furniture is included with baked lighting textures, but all other furniture is static.
---
Citing ReplicaCAD
---
Using ReplicaCAD in your research? Please cite the following paper: [arxiv](https://arxiv.org/abs/2106.14405)
```
@inproceedings{szot2021habitat,
title = {Habitat 2.0: Training Home Assistants to Rearrange their Habitat},
author = {Andrew Szot and Alex Clegg and Eric Undersander and Erik Wijmans and Yili Zhao and John Turner and Noah Maestre and Mustafa Mukadam and Devendra Chaplot and Oleksandr Maksymets and Aaron Gokaslan and Vladimir Vondrus and Sameer Dharur and Franziska Meier and Wojciech Galuba and Angel Chang and Zsolt Kira and Vladlen Koltun and Jitendra Malik and Manolis Savva and Dhruv Batra},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2021}
}
``` | 3,829 | [embedding vector truncated] |
dominguesm/wikipedia-ptbr-20230601 | 2023-07-13T12:31:13.000Z | [
"language:pt",
"region:us"
] | dominguesm | null | null | 3 | 3 | 2023-06-17T18:45:12 | ---
language: pt
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2424207600
num_examples: 993101
- name: test
num_bytes: 269529120
num_examples: 110345
download_size: 1626930291
dataset_size: 2693736720
---
# Dataset Card for "wikipedia-ptbr-20230601"
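From the split statistics in the `dataset_info` block above, a quick back-of-the-envelope sketch of the average serialized article size (plain stdlib; the numbers are copied from the YAML):

```python
# Split statistics copied from the dataset_info block above.
train_bytes, train_examples = 2424207600, 993101
test_bytes, test_examples = 269529120, 110345

# Average serialized size of one article in the train split.
avg_train = train_bytes / train_examples
print(round(avg_train))  # ~2441 bytes per article
```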
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 551 | [embedding vector truncated] |
renumics/beans-outlier | 2023-06-30T20:09:45.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:mit",
"region:us"
] | renumics | null | null | 0 | 3 | 2023-06-17T19:58:02 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
pretty_name: Beans
dataset_info:
features:
- name: image_file_path
dtype: string
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
'0': angular_leaf_spot
'1': bean_rust
'2': healthy
- name: embedding_foundation
sequence: float32
- name: embedding_ft
sequence: float32
- name: outlier_score_ft
dtype: float64
- name: outlier_score_foundation
dtype: float64
- name: nn_image
dtype: image
splits:
- name: train
num_bytes: 293531811.754
num_examples: 1034
download_size: 0
dataset_size: 293531811.754
---
# Dataset Card for "beans-outlier"
📚 This dataset is an enhanced version of the [ibean project of the AIR lab](https://github.com/AI-Lab-Makerere/ibean/).
The workflow is described in the medium article: [Changes of Embeddings during Fine-Tuning of Transformers](https://medium.com/@markus.stoll/changes-of-embeddings-during-fine-tuning-c22aa1615921).
## Explore the Dataset
The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) allows you to explore this dataset. You can find a Hugging Face Space running Spotlight with this dataset here: <https://huggingface.co/spaces/renumics/beans-outlier>

Or you can explore it locally:
```python
!pip install renumics-spotlight datasets
from renumics import spotlight
import datasets
ds = datasets.load_dataset("renumics/beans-outlier", split="train")
df = ds.to_pandas()
df["label_str"] = df["labels"].apply(lambda x: ds.features["labels"].int2str(x))
dtypes = {
"nn_image": spotlight.Image,
"image": spotlight.Image,
"embedding_ft": spotlight.Embedding,
"embedding_foundation": spotlight.Embedding,
}
spotlight.show(
df,
dtype=dtypes,
layout="https://spotlight.renumics.com/resources/layout_pre_post_ft.json",
)
``` | 2,242 | [embedding vector truncated] |
dmayhem93/agieval-gaokao-english | 2023-06-18T17:19:13.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | 0 | 3 | 2023-06-18T12:47:58 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 688986
num_examples: 306
download_size: 200843
dataset_size: 688986
license: mit
---
# Dataset Card for "agieval-gaokao-english"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
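A minimal sketch of turning one record into a multiple-choice prompt, following the `query`/`choices`/`gold` schema in the `dataset_info` above (the record contents below are invented, not from the dataset):

```python
# Hypothetical record following the query/choices/gold schema above.
record = {
    "query": "Q: Which option best completes the passage?",
    "choices": ["(A) first", "(B) second", "(C) third", "(D) fourth"],
    "gold": [1],  # index of the correct choice
}

prompt = record["query"] + "\n" + "\n".join(record["choices"]) + "\nAnswer:"
gold_answer = record["choices"][record["gold"][0]]
print(gold_answer)  # (B) second
```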
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1,838 | [embedding vector truncated] |
dmayhem93/agieval-logiqa-zh | 2023-06-18T17:30:03.000Z | [
"license:cc-by-nc-sa-4.0",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | 0 | 3 | 2023-06-18T12:49:17 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 694747
num_examples: 651
download_size: 387024
dataset_size: 694747
license: cc-by-nc-sa-4.0
---
# Dataset Card for "agieval-logiqa-zh"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
Raw dataset: https://github.com/lgw863/LogiQA-dataset
[Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{Liu2020LogiQAAC,
title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning},
author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang},
booktitle={International Joint Conference on Artificial Intelligence},
year={2020}
} | 1,268 | [embedding vector truncated] |
zz990906/bird | 2023-06-19T13:13:12.000Z | [
"region:us"
] | zz990906 | null | null | 0 | 3 | 2023-06-19T13:11:38 | Entry not found | 15 | [embedding vector truncated] |
jondurbin/airoboros-gpt4-1.4 | 2023-06-29T08:24:56.000Z | [
"license:other",
"region:us"
] | jondurbin | null | null | 19 | 3 | 2023-06-20T21:33:51 | ---
license: other
---
A continuation (including many fixes) of [gpt4-1.3](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.3), with:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from [rosettacode.org](https://rosettacode.org/) [dataset](https://huggingface.co/datasets/jondurbin/rosettacode-10) thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
_*Note: I did not filter by token length for this dataset, some are well over 2048 so use carefully.*_
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here, and can an open source model really compete in any meaningful way with gpt-4 quality?
- I am bound by the ToS, but anyone else using the data is not, as far as I can tell
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise restrictively licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license on here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this to train a commercial model, but I will leave that up to you.
I personally don't care how you use this data - it is published to allow others to replicate results, but wouldn't mind some attribution if you do use it. | 1,801 | [embedding vector truncated] |
jondurbin/rosettacode-10 | 2023-06-21T07:37:59.000Z | [
"license:gfdl",
"region:us"
] | jondurbin | null | null | 2 | 3 | 2023-06-21T07:33:25 | ---
license: gfdl
---
Instruction/response formatted rosettacode.org tasks/solutions for:
- c++
- c
- c#
- go
- java
- javascript
- kotlin
- lua
- python
- ruby | 162 | [embedding vector truncated] |
IIC/livingner3 | 2023-06-21T15:31:48.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"biomedical",
"clinical",
"spanish",
"region:us"
] | IIC | null | null | 0 | 3 | 2023-06-21T14:53:31 | ---
language:
- es
tags:
- biomedical
- clinical
- spanish
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-label-classification
license:
- cc-by-4.0
pretty_name: LivingNER3
train-eval-index:
- task: text-classification
task_id: multi_label_classification
splits:
train_split: train
eval_split: test
metrics:
- type: f1
name: f1
---
# LivingNER
This is a third party reupload of the [LivingNER](https://temu.bsc.es/livingner/) task 3 dataset.
It only contains task 3 for the Spanish language. It does not include the multilingual data or the background data.
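For reference, the `f1` metric named in the card's `train-eval-index` can be computed for multi-label predictions roughly as below. This is a minimal micro-averaged sketch; the label sets are invented, not taken from LivingNER.

```python
def micro_f1(golds, preds):
    # Micro-averaged F1 over multi-label predictions given as sets of labels.
    tp = sum(len(g & p) for g, p in zip(golds, preds))
    fp = sum(len(p - g) for g, p in zip(golds, preds))
    fn = sum(len(g - p) for g, p in zip(golds, preds))
    return 2 * tp / (2 * tp + fp + fn)

golds = [{"HUMAN", "SPECIES"}, {"SPECIES"}]
preds = [{"HUMAN"}, {"SPECIES"}]
print(round(micro_f1(golds, preds), 3))  # 0.8
```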
This dataset is part of a benchmark in the paper [TODO](TODO).
### Citation Information
```bibtex
TODO
```
### Citation Information of the original dataset
```bibtex
@article{amiranda2022nlp,
title={Mention detection, normalization \& classification of species, pathogens, humans and food in clinical documents: Overview of LivingNER shared task and resources},
author={Miranda-Escalada, Antonio and Farr{\'e}-Maduell, Eul{\`a}lia and Lima-L{\'o}pez, Salvador and Estrada, Darryl and Gasc{\'o}, Luis and Krallinger, Martin},
journal = {Procesamiento del Lenguaje Natural},
year={2022}
}
```
 | 1,239 | [embedding vector truncated] |
vietgpt/vungoi_question_type1 | 2023-06-22T14:06:07.000Z | [
"region:us"
] | vietgpt | null | null | 0 | 3 | 2023-06-22T01:40:20 | ---
dataset_info:
features:
- name: metadata
struct:
- name: chapter
dtype: string
- name: difficult_degree
dtype: int64
- name: grade
dtype: string
- name: id
dtype: string
- name: idx
dtype: int64
- name: subject
dtype: string
- name: question
dtype: string
- name: options
list:
- name: answer
dtype: string
- name: key
dtype: string
- name: answer
struct:
- name: answer
dtype: string
- name: key
dtype: string
- name: solution
dtype: string
- name: quality
struct:
- name: has_image
dtype: bool
- name: missing_question
dtype: bool
- name: missing_solution
dtype: bool
splits:
- name: train
num_bytes: 140854723
num_examples: 112042
download_size: 88486050
dataset_size: 140854723
---
# Dataset Card for "vungoi_question_type1"
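A minimal sketch of filtering records via the boolean `quality` flags declared in the `dataset_info` above (the records themselves are invented for illustration):

```python
# Hypothetical records carrying the quality struct described above.
rows = [
    {"question": "q1",
     "quality": {"has_image": False, "missing_question": False, "missing_solution": False}},
    {"question": "q2",
     "quality": {"has_image": True, "missing_question": False, "missing_solution": True}},
]

# Keep only records with no quality issue flagged.
clean = [r for r in rows if not any(r["quality"].values())]
print([r["question"] for r in clean])  # ['q1']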
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,049 | [embedding vector truncated] |
Romjiik/Russian_bank_reviews | 2023-06-22T21:29:37.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:ru",
"finance",
"region:us"
] | Romjiik | null | null | 1 | 3 | 2023-06-22T21:15:12 | ---
task_categories:
- text-classification
language:
- ru
tags:
- finance
pretty_name: bank reviews
size_categories:
- 10K<n<100K
---
# Dataset Card for bank reviews dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset is collected from the [banki.ru](https://www.banki.ru/services/responses/list/?is_countable=on) website.
It contains customer reviews of various banks. In total, the dataset contains 12399 reviews.
The dataset is suitable for sentiment classification.
The dataset contains these fields: bank name, username, review title, review text, review time, number of views,
number of comments, review rating set by the user, as well as ratings for special categories.
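For sentiment classification, one common approach is to derive a label from the user rating. The thresholds and field names below are assumptions for illustration, not part of the dataset:

```python
def rating_to_sentiment(rating: int) -> str:
    # Assumed mapping from a 1-5 star rating to a coarse sentiment class.
    if rating <= 2:
        return "negative"
    if rating == 3:
        return "neutral"
    return "positive"

review = {"bank": "ExampleBank", "title": "...", "text": "...", "rating": 5}
print(rating_to_sentiment(review["rating"]))  # positive
```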
### Languages
Russian
 | 811 | [embedding vector truncated] |
kejian/arxiv-physics-debug-v0 | 2023-06-22T23:15:47.000Z | [
"region:us"
] | kejian | null | null | 0 | 3 | 2023-06-22T23:15:46 | Entry not found | 15 | [embedding vector truncated] |
safufu/autotrain-data-based-in-fact | 2023-06-24T03:47:14.000Z | [
"task_categories:text-classification",
"language:zh",
"region:us"
] | safufu | null | null | 0 | 3 | 2023-06-23T07:31:05 | ---
task_categories:
- text-classification
language:
- zh
---
# AutoTrain Dataset for project: based-in-fact
## Dataset Description
This dataset has been automatically processed by AutoTrain for project based-in-fact.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u4e0a\u4e2a\u5927\u5b66\u771f\u7684\u662f\u4ec0\u4e48\u4eba\u90fd\u70b8\u51fa\u6765\u4e86",
"target": 0
},
{
"text": "\u5982\u679c\u4e00\u4e2aHIV\u611f\u67d3\u8005\u5bf9\u4e8e\u6297\u9006\u8f6c\u5f55\u75c5\u6bd2\u836f\u7269\u5341\u5206\u8010\u53d7\uff0c\u90a3\u4e48\u4ed6\u7684\u6cbb\u7597\u4f1a\u53d8\u5f97\u5341\u5206\u590d\u6742\uff0c\u75c5\u60c5\u5c06\u6076\u5316",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['emotion', 'fact'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 30932 |
| valid | 3000 | | 1,254 | [embedding vector truncated] |
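The sample records above store the Chinese text with JSON `\uXXXX` escapes. A small sketch of decoding one (truncated to the first characters of the card's first sample) and mapping its `target` to the ClassLabel names:

```python
import json

labels = ["emotion", "fact"]  # ClassLabel names from the dataset fields above
raw = '{"text": "\\u4e0a\\u4e2a\\u5927\\u5b66", "target": 0}'  # truncated sample
record = json.loads(raw)  # json.loads decodes the \uXXXX escapes
print(labels[record["target"]])  # emotion
```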
DEplain/DEplain-web-sent | 2023-06-23T14:43:38.000Z | [
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|DEplain-web-doc",
"language:de",
... | DEplain | null | null | 0 | 3 | 2023-06-23T14:30:47 | ---
annotations_creators:
- expert-generated
language:
- de
language_creators:
- expert-generated
license:
- other
multilinguality:
- translation
- monolingual
pretty_name: DEplain-web-sent
size_categories:
- 1K<n<10K
source_datasets:
- extended|DEplain-web-doc
tags:
- sentence simplification
- web-text
- plain language
- easy-to-read language
task_categories:
- text2text-generation
task_ids:
- text-simplification
---
# DEplain-web-sent: A corpus for German Sentence Simplification
DEplain-web-sent is a subcorpus of DEplain [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) for evaluation of sentence simplification.
The corpus consists of 1846 sentence pairs of 147 parallel documents crawled from the web in standard German and plain German (or easy-to-read German). All documents are either published under an open license, or the copyright holders gave us permission to share the data.
Human annotators sentence-wise aligned the 147 documents of the test set to build a corpus for sentence simplification. For the document-level version of this corpus, please see [https://huggingface.co/datasets/DEplain/DEplain-web-doc](https://huggingface.co/datasets/DEplain/DEplain-web-doc).
Due to the small size of the sentence pairs, we only provide a test set for evaluation of text simplification models.
If you are interested in a larger corpus, please check our paper and the provided web crawler and alignment methods to extend the corpus. You can find this data here: [https://github.com/rstodden/DEPlain/](https://github.com/rstodden/DEPlain/tree/main/E__Sentence-level_Corpus/DEplain-web-sent/auto/open).
If you use the automatically aligned data, please use it cautiously, as the alignment quality might be error-prone.
# Dataset Card for DEplain-web-sent
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [DEplain-web GitHub repository](https://github.com/rstodden/DEPlain)
- **Paper:** ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939)
- **Point of Contact:** [Regina Stodden](regina.stodden@hhu.de)
### Dataset Summary
[DEplain-web](https://github.com/rstodden/DEPlain) [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) is a dataset for the evaluation of sentence and document simplification in German. All texts of this dataset are scraped from the web. All documents are licensed under an open license. The simple-complex sentence pairs are manually aligned.
This dataset only contains a test set. For additional training and development data, please scrape more data from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification) and align the sentences of the documents automatically using, for example, [MASSalign](https://github.com/ghpaetzold/massalign) by [Paetzold et al. (2017)](https://www.aclweb.org/anthology/I17-3001/).
### Supported Tasks and Leaderboards
The dataset supports the evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
### Languages
The texts in this dataset are written in German (de-de). The texts are in German plain language variants, e.g., plain language (Einfache Sprache) or easy-to-read language (Leichte Sprache).
### Domains
The texts are from 6 different domains: fictional texts (literature and fairy tales), bible texts, health-related texts, texts for language learners, texts for accessibility, and public administration texts.
## Dataset Structure
### Data Access
- The dataset is licensed with different open licenses dependent on the subcorpora.
### Data Instances
- `document-simplification` configuration: an instance consists of an original document and one reference simplification.
- `sentence-simplification` configuration: an instance consists of an original sentence and one manually aligned reference simplification. Please see [https://huggingface.co/datasets/DEplain/DEplain-web-sent](https://huggingface.co/datasets/DEplain/DEplain-web-sent).
- `sentence-wise alignment` configuration: an instance consists of original and simplified documents and manually aligned sentence pairs. In contrast to the sentence-simplification configuration, this configuration also contains sentence pairs in which the original and the simplified sentences are exactly the same. Please see [https://github.com/rstodden/DEPlain](https://github.com/rstodden/DEPlain/tree/main/C__Alignment_Algorithms)
### Data Fields
| data field | data field description |
|-------------------------------------------------|-------------------------------------------------------------------------------------------------------|
| `original` | an original text from the source dataset |
| `simplification` | a simplified text from the source dataset |
| `pair_id` | document pair id |
| `complex_document_id ` (on doc-level) | id of complex document (-1) |
| `simple_document_id ` (on doc-level) | id of simple document (-0) |
| `original_id ` (on sent-level) | id of sentence(s) of the original text |
| `simplification_id ` (on sent-level) | id of sentence(s) of the simplified text |
| `domain ` | text domain of the document pair |
| `corpus ` | subcorpus name |
| `simple_url ` | origin URL of the simplified document |
| `complex_url ` | origin URL of the simplified document |
| `simple_level ` or `language_level_simple ` | required CEFR language level to understand the simplified document |
| `complex_level ` or `language_level_original ` | required CEFR language level to understand the original document |
| `simple_location_html ` | location on hard disk where the HTML file of the simple document is stored |
| `complex_location_html ` | location on hard disk where the HTML file of the original document is stored |
| `simple_location_txt ` | location on hard disk where the content extracted from the HTML file of the simple document is stored |
| `complex_location_txt ` | location on hard disk where the content extracted from the HTML file of the simple document is stored |
| `alignment_location ` | location on hard disk where the alignment is stored |
| `simple_author ` | author (or copyright owner) of the simplified document |
| `complex_author ` | author (or copyright owner) of the original document |
| `simple_title ` | title of the simplified document |
| `complex_title ` | title of the original document |
| `license ` | license of the data |
| `last_access ` or `access_date` | data origin data or data when the HTML files were downloaded |
| `rater` | id of the rater who annotated the sentence pair |
| `alignment` | type of alignment, e.g., 1:1, 1:n, n:1 or n:m |
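As an illustration of the `alignment` field described above, a tiny sketch tallying alignment types over a handful of invented pairs:

```python
from collections import Counter

# Hypothetical aligned pairs carrying the `alignment` field described above.
pairs = [{"alignment": "1:1"}, {"alignment": "1:n"},
         {"alignment": "1:1"}, {"alignment": "n:m"}]
counts = Counter(p["alignment"] for p in pairs)
print(counts["1:1"])  # 2
```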
### Data Splits
DEplain-web contains a training set, a development set and a test set.
The dataset was split based on the license of the data. All manually aligned sentence pairs with an open license are part of the test set. The document-level test set likewise contains only the manually aligned documents. For the document-level train and dev sets, documents which are not manually aligned or not publicly available are used. For the sentence level, alignment pairs can be produced by automatic alignment (see [Stodden et al., 2023](https://arxiv.org/abs/2305.18939)).
Document-level:
| | Train | Dev | Test | Total |
|-------------------------|-------|-----|------|-------|
| DEplain-web-manual-open | - | - | 147 | 147 |
| DEplain-web-auto-open | 199 | 50 | - | 279 |
| DEplain-web-auto-closed | 288 | 72 | - | 360 |
| in total | 487 | 122 | 147 | 756 |
Sentence-level:
| | Train | Dev | Test | Total |
|-------------------------|-------|-----|------|-------|
| DEplain-web-manual-open | - | - | 1846 | 1846 |
| DEplain-web-auto-open | 514 | 138 | - | 652 |
| DEplain-web-auto-closed | 767 | 175 | - | 942 |
| in total                | 1281  | 313 | 1846 | 3440  |
| **subcorpus** | **simple** | **complex** | **domain** | **description** | **# doc.** |
|----------------------------------|------------------|------------------|------------------|-------------------------------------------------------------------------------|------------------|
| **EinfacheBücher** | Plain German | Standard German / Old German | fiction | Books in plain German | 15 |
| **EinfacheBücherPassanten** | Plain German | Standard German / Old German | fiction | Books in plain German | 4 |
| **ApothekenUmschau** | Plain German | Standard German | health | Health magazine in which diseases are explained in plain German | 71 |
| **BZFE** | Plain German | Standard German | health | Information of the German Federal Agency for Food on good nutrition | 18 |
| **Alumniportal** | Plain German | Plain German | language learner | Texts related to Germany and German traditions written for language learners. | 137 |
| **Lebenshilfe** | Easy-to-read German | Standard German | accessibility | | 49 |
| **Bibel** | Easy-to-read German | Standard German | bible | Bible texts in easy-to-read German | 221 |
| **NDR-Märchen** | Easy-to-read German | Standard German / Old German | fiction | Fairytales in easy-to-read German | 10 |
| **EinfachTeilhaben** | Easy-to-read German | Standard German | accessibility | | 67 |
| **StadtHamburg** | Easy-to-read German | Standard German | public authority | Information of and regarding the German city Hamburg | 79 |
| **StadtKöln** | Easy-to-read German | Standard German | public authority | Information of and regarding the German city Cologne | 85 |
: Documents per Domain in DEplain-web.
| domain | avg. | std. | interpretation | # sents | # docs |
|------------------|---------------|---------------|-------------------------|-------------------|------------------|
| bible | 0.7011 | 0.31 | moderate | 6903 | 3 |
| fiction | 0.6131 | 0.39 | moderate | 23289 | 3 |
| health | 0.5147 | 0.28 | weak | 13736 | 6 |
| language learner | 0.9149 | 0.17 | almost perfect | 18493 | 65 |
| all | 0.8505 | 0.23 | strong | 87645 | 87 |
: Inter-Annotator-Agreement per Domain in DEplain-web-manual.
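The interpretation labels in the table follow a common agreement scale for inter-annotator agreement scores (presumably Cohen's kappa). A minimal sketch of that mapping, assuming McHugh-style bands (that these are the exact bands used by the authors is an assumption, but they reproduce every label in the table above):

```python
def interpret_kappa(kappa):
    """Map an agreement score to an interpretation label (McHugh-style bands)."""
    if kappa > 0.90:
        return "almost perfect"
    if kappa >= 0.80:
        return "strong"
    if kappa >= 0.60:
        return "moderate"
    if kappa >= 0.40:
        return "weak"
    if kappa >= 0.21:
        return "minimal"
    return "none"

# Reproduce the interpretation column of the table above:
for domain, kappa in [("bible", 0.7011), ("fiction", 0.6131),
                      ("health", 0.5147), ("language learner", 0.9149),
                      ("all", 0.8505)]:
    print(domain, "->", interpret_kappa(kappa))
```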
| operation | documents | percentage |
|-----------|-------------|------------|
| rephrase  | 863         | 11.73      |
| deletion | 3050 | 41.47 |
| addition | 1572 | 21.37 |
| identical | 887 | 12.06 |
| fusion | 110 | 1.5 |
| merge | 77 | 1.05 |
| split | 796 | 10.82 |
| in total | 7355 | 100 |
: Information regarding Simplification Operations in DEplain-web-manual.
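As a sanity check, the percentage column can be re-derived from the raw counts; a short sketch (counts copied from the table above):

```python
# Simplification-operation counts from the DEplain-web-manual table.
counts = {"rephrase": 863, "deletion": 3050, "addition": 1572,
          "identical": 887, "fusion": 110, "merge": 77, "split": 796}

total = sum(counts.values())
shares = {op: round(100 * n / total, 2) for op, n in counts.items()}

print(total)               # 7355
print(shares["deletion"])  # 41.47
```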
## Dataset Creation
### Curation Rationale
Current German text simplification datasets are limited in size or only automatically aligned.
We provide a manually aligned corpus to boost text simplification research in German.
### Source Data
#### Initial Data Collection and Normalization
The parallel documents were scraped from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification).
The texts of the documents were manually simplified by professional translators.
The data was split into sentences using a German model of SpaCy.
Two German native speakers have manually aligned the sentence pairs by using the text simplification annotation tool [TS-ANNO](https://github.com/rstodden/TS_annotation_tool) by [Stodden & Kallmeyer (2022)](https://aclanthology.org/2022.acl-demo.14/).
#### Who are the source language producers?
The texts of the documents were manually simplified by professional translators. For an extensive list of the scraped URLs, see Table 10 in [Stodden et al. (2023)](https://arxiv.org/abs/2305.18939).
### Annotations
#### Annotation process
The instructions given to the annotators are available [here](https://github.com/rstodden/TS_annotation_tool/tree/master/annotation_schema).
#### Who are the annotators?
The annotators are two German native speakers, who are trained in linguistics. Both were at least compensated with the minimum wage of their country of residence.
They are not part of any target group of text simplification.
### Personal and Sensitive Information
No sensitive data.
## Considerations for Using the Data
### Social Impact of Dataset
Many people cannot understand texts due to their complexity. With automatic text simplification methods, these texts can be simplified for them. Our new training data can help in training German text simplification models.
### Discussion of Biases
No biases are known.
### Other Known Limitations
The dataset is provided under different open licenses depending on the license of each website the data was scraped from. Please check the dataset license for additional information.
## Additional Information
### Dataset Curators
DEplain-web was developed by researchers at Heinrich-Heine-University Düsseldorf, Germany. This research is part of the PhD program "Online Participation", supported by the North Rhine-Westphalian (German) funding scheme "Forschungskolleg".
### Licensing Information
The corpus includes documents under the following licenses: CC BY-SA 3.0, CC BY 4.0, and CC BY-NC-ND 4.0. The corpus also includes documents under a "save_use_share" license; for these documents, the data providers permitted us to share the data for research purposes.
### Citation Information
```
@inproceedings{stodden-etal-2023-deplain,
    title = "{DE}plain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification",
author = "Stodden, Regina and
Momen, Omar and
Kallmeyer, Laura",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
    note = "preprint: https://arxiv.org/abs/2305.18939",
}
```
This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r) and [Yacine Jernite](https://github.com/yjernite). | 19,051 | [
ChanceFocus/flare-fiqasa | 2023-08-18T16:24:08.000Z | [
"region:us"
] | ChanceFocus | null | null | 1 | 3 | 2023-06-25T12:34:35 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 248828
num_examples: 750
- name: valid
num_bytes: 61667
num_examples: 188
- name: test
num_bytes: 77672
num_examples: 235
download_size: 0
dataset_size: 388167
---
# Dataset Card for "flare-fiqasa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 633 | [
eduagarcia/cc_news_pt | 2023-06-25T17:42:37.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text2text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"size_categories:1B<n<10B",
"language:pt",
"license:unknown",
... | eduagarcia | null | null | 1 | 3 | 2023-06-25T16:56:08 | ---
pretty_name: CC-News-PT
annotations_creators:
- no-annotation
language_creators:
- found
language:
- pt
license:
- unknown
size_categories:
- 1B<n<10B
task_categories:
- text-generation
- fill-mask
- text2text-generation
task_ids:
- language-modeling
- masked-language-modeling
---
### Dataset Summary
CC-News-PT is a curation of news articles from CommonCrawl News in the Portuguese language.
CommonCrawl News is a dataset containing news articles from news sites all over the world.
The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/.
This version of the dataset is the Portuguese subset from [CloverSearch/cc-news-mutlilingual](https://huggingface.co/datasets/CloverSearch/cc-news-mutlilingual).
### Data Fields
- `title`: a `string` feature.
- `text`: a `string` feature.
- `authors`: a `string` feature.
- `domain`: a `string` feature.
- `date`: a `string` feature.
- `description`: a `string` feature.
- `url`: a `string` feature.
- `image_url`: a `string` feature.
- `date_download`: a `string` feature.
### How to use this dataset
```python
from datasets import load_dataset
dataset = load_dataset("eduagarcia/cc_news_pt", split="train")
```
### Cite
```
@misc{Acerola2023,
author = {Garcia, E.A.S.},
title = {Acerola Corpus: Towards Better Portuguese Language Models},
year = {2023},
doi = {10.57967/hf/0814}
}
``` | 1,388 | [
ChanceFocus/flare-sm-cikm | 2023-06-25T18:16:45.000Z | [
"region:us"
] | ChanceFocus | null | null | 1 | 3 | 2023-06-25T17:56:12 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 26082681
num_examples: 3396
- name: valid
num_bytes: 3231915
num_examples: 431
- name: test
num_bytes: 8123670
num_examples: 1143
download_size: 19175558
dataset_size: 37438266
---
# Dataset Card for "flare-sm-cikm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 651 | [
nRuaif/tinystories-gpt4 | 2023-06-26T07:01:26.000Z | [
"region:us"
] | nRuaif | null | null | 0 | 3 | 2023-06-26T06:44:30 | Entry not found | 15 | [
globis-university/aozorabunko-clean | 2023-10-27T13:22:32.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | globis-university | null | null | 4 | 3 | 2023-06-26T13:31:28 | ---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- ja
size_categories:
- 10K<n<100K
---
# Overview
This dataset provides a convenient and user-friendly format of data from [Aozora Bunko (青空文庫)](https://www.aozora.gr.jp/), a website that compiles public-domain books in Japan, ideal for Machine Learning applications.
[For Japanese] A Japanese-language overview is available on Qiita: https://qiita.com/akeyhero/items/b53eae1c0bc4d54e321f
# Methodology
The code to reproduce this dataset is made available on GitHub: [globis-org/aozorabunko-extractor](https://github.com/globis-org/aozorabunko-extractor).
## 1. Data collection
We first downloaded the [CSV file that lists all works](https://www.aozora.gr.jp/index_pages/person_all.html). The information extracted from this CSV is incorporated into the `meta` field.
Next, we filtered out any books not categorized as public domain.
We retrieved the main text of each book corresponding to every row in the CSV and incorporated it into the `text` field in UTF-8.
## 2. Deduplication
We removed entries where the `図書カードURL` (Library card URL) in this CSV did not match with the `作品ID` (Work ID) and `人物ID` (Person ID).
In addition, entries with text identical to previously encountered text were discarded.
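A minimal sketch of the text-level deduplication described above (the field name `text` matches this dataset, but the actual implementation in the extractor repository may differ):

```python
def deduplicate(entries):
    """Drop entries whose `text` is identical to a previously seen one,
    keeping the first occurrence and preserving order."""
    seen = set()
    unique = []
    for entry in entries:
        if entry["text"] in seen:
            continue
        seen.add(entry["text"])
        unique.append(entry)
    return unique

books = [{"text": "a"}, {"text": "a"}, {"text": "b"}]
print(len(deduplicate(books)))  # 2
```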
## 3. Cleaning
The data in the `text` field was then cleaned in the following sequence:
1. Convert new lines to `\n`
2. Remove headers
3. Remove footnotes and add them to the `footnote` field
4. Convert inserted notes into regular parenthetical text
5. Remove ruby (phonetic guides)
6. Convert specific characters, such as external characters and iteration marks, into standard Unicode characters
7. Remove any remaining markup
8. Remove leading and trailing new lines and horizontal rules
# Tips
If you prefer to employ only modern Japanese, you can filter entries with: `row["meta"]["文字遣い種別"] == "新字新仮名"`.
# Example
```py
>>> from datasets import load_dataset
>>> ds = load_dataset('globis-university/aozorabunko-clean')
>>> ds
DatasetDict({
train: Dataset({
features: ['text', 'footnote', 'meta'],
num_rows: 16951
})
})
>>> ds = ds.filter(lambda row: row['meta']['文字遣い種別'] == '新字新仮名') # only modern Japanese
>>> ds
DatasetDict({
train: Dataset({
features: ['text', 'footnote', 'meta'],
num_rows: 10246
})
})
>>> book = ds['train'][0] # one of the works
>>> book['meta']['作品名']
'ウェストミンスター寺院'
>>> text = book['text'] # main content
>>> len(text)
10639
>>> print(text[:100])
深いおどろきにうたれて、
名高いウェストミンスターに
真鍮や石の記念碑となって
すべての王侯貴族が集まっているのをみれば、
今はさげすみも、ほこりも、見栄もない。
善にかえった貴人の姿、
華美と俗世の
```
# License
CC BY 4.0 | 2,637 | [
hssd/hssd-scenes | 2023-06-26T15:31:31.000Z | [
"language:en",
"license:cc-by-nc-4.0",
"3D scenes",
"Embodied AI",
"region:us"
] | hssd | null | null | 2 | 3 | 2023-06-26T15:16:30 | ---
language:
- en
pretty_name: HSSD
tags:
- 3D scenes
- Embodied AI
license: cc-by-nc-4.0
extra_gated_heading: "Acknowledge license to accept the repository"
extra_gated_prompt: "You agree to use this dataset under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/) terms"
---
HSSD: Habitat Synthetic Scenes Dataset
==================================
The [Habitat Synthetic Scenes Dataset (HSSD)](https://3dlg-hcvc.github.io/hssd/) is a human-authored 3D scene dataset that more closely mirrors real scenes than prior datasets.
Our dataset represents real interiors and contains a diverse set of 211 scenes and more than 18000 models of real-world objects.
<img src="https://i.imgur.com/XEkLxNs.png" width=50%>
| 740 | [
ahishamm/PH2_db_sharpened | 2023-06-26T18:42:54.000Z | [
"region:us"
] | ahishamm | null | null | 0 | 3 | 2023-06-26T18:42:43 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': benign
'1': malignant
splits:
- name: train
num_bytes: 198028776.0
num_examples: 200
- name: test
num_bytes: 39610475.0
num_examples: 40
download_size: 237654095
dataset_size: 237639251.0
---
# Dataset Card for "PH2_db_sharpened"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 537 | [
IsDeeCee/StoryMaker | 2023-06-26T22:49:18.000Z | [
"region:us"
] | IsDeeCee | null | null | 1 | 3 | 2023-06-26T22:47:34 | Entry not found | 15 | [
ecnu-icalk/educhat-sft-002-data-osm | 2023-07-01T10:11:46.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | ecnu-icalk | null | null | 13 | 3 | 2023-06-27T07:48:28 | ---
license: cc-by-nc-4.0
---
Each data entry consists of a list holding the dialogue and a system_prompt corresponding to that entry. The dialogue turns are stored in the list in Q, A order.
The data comes from open-source datasets and was deduplicated using the [CleanTool](https://github.com/icalk-nlp/EduChat/tree/main/clean_tool) data cleaning tool.
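A sketch of the entry layout described above; the field names and content here are illustrative assumptions, not the dataset's documented schema:

```python
# Hypothetical entry: a conversation list in Q, A order plus the
# system_prompt attached to it. Field names are assumptions.
entry = {
    "system_prompt": "You are EduChat, an educational dialogue assistant.",
    "conversation": [
        "What is photosynthesis?",                            # Q
        "Photosynthesis is how plants convert light energy.", # A
    ],
}

# Because turns alternate Q, A, Q, A, ..., even indices are questions
# and odd indices are answers.
questions = entry["conversation"][0::2]
answers = entry["conversation"][1::2]
print(len(questions), len(answers))
```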
Whab/deepfake | 2023-06-27T08:13:05.000Z | [
"region:us"
] | Whab | null | null | 0 | 3 | 2023-06-27T08:11:57 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Fake
'1': Real
splits:
- name: train
num_bytes: 1553838685.12
num_examples: 179430
download_size: 1677949725
dataset_size: 1553838685.12
---
# Dataset Card for "deepfake"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 468 | [
notrichardren/misconceptions_tf | 2023-06-28T12:58:13.000Z | [
"region:us"
] | notrichardren | null | null | 0 | 3 | 2023-06-27T20:10:51 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: Topic
dtype: string
- name: Question
dtype: string
- name: Correct
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 181025
num_examples: 1703
download_size: 83862
dataset_size: 181025
---
# Dataset Card for "misconceptions_tf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 515 | [
eric-math123/instruct_addition | 2023-06-27T23:06:31.000Z | [
"region:us"
] | eric-math123 | null | null | 0 | 3 | 2023-06-27T23:03:32 | Entry not found | 15 | [
PhysHunter/github-datasets-issues | 2023-06-28T17:48:07.000Z | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:document-retrieval",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"region:us"
] | PhysHunter | null | null | 0 | 3 | 2023-06-28T17:17:55 | ---
annotations_creators: []
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: HuggingFace Datasets GitHub Issues
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
- document-retrieval
---
# Dataset Summary
HuggingFace Datasets GitHub Issues is a dataset consisting of issues and pull requests associated with the Hugging Face Datasets repository on GitHub.
archanatikayatray/ASRS-ChatGPT | 2023-09-10T04:23:49.000Z | [
"task_categories:zero-shot-classification",
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"ChatGPT",
"doi:10.57967/hf/0830",
"region:us"
] | archanatikayatray | null | null | 2 | 3 | 2023-06-29T13:08:26 | ---
license: apache-2.0
task_categories:
- zero-shot-classification
- text-generation
- summarization
- question-answering
language:
- en
tags:
- ChatGPT
size_categories:
- 1K<n<10K
---
# Dataset Card for ASRS-ChatGPT
## Dataset Description
- **Paper:** Examining the Potential of Generative Language Models for Aviation Safety Analysis: Insights from ASRS Case Study
- **Point of Contact:** archanatikayatray@gmail.com
### Dataset Summary
The dataset contains a total of 9984 incident records and 9 columns. Some of the columns contain ground truth values whereas others contain information generated by ChatGPT based on the incident _**Narratives**_.
The creation of this dataset is aimed at providing researchers with columns generated using the ChatGPT API, which is not freely available.
## Dataset Structure
The column names present in the dataset and their descriptions are provided below:
|Column Name|Description|Generated by|
| :----: | :----: | :----: |
| ACN | Unique identifier for incident reports | - |
| Narrative | Incident narrative | Reporter |
| synopsis_groundtruth | Synopsis of the incident | Safety Analyst |
| (GPT-3.5-turbo) Synopsis | Synopsis generated by ChatGPT based on the narrative | ChatGPT |
| human_factors_groundtruth | Human factor issues that contributed to the incident | Safety Analyst |
| (GPT-3.5-turbo) Human Factor issue | Human factor issue that contributed to the incident, identified by ChatGPT based on the incident narrative | ChatGPT |
| (GPT-3.5-turbo) Rationale - Human Factor issue | Rationale behind the human factor issue identified by ChatGPT | ChatGPT |
| (GPT-3.5-turbo) Incident attribution | Incident attribution identified by ChatGPT based on the incident narrative | ChatGPT |
| (GPT-3.5-turbo) Rationale - Incident attribution | Rationale behind the incident attribution by ChatGPT | ChatGPT |
## Dataset Creation
### Source Data
The initial dataset was obtained from the Aviation Safety Reporting System (ASRS) database and comprises incident reports that encompass the time period from January 2009 to July 2022.
This was followed by retaining only the records where the _**Primary Problem**_ that led to the incident was _**Human Factors**_.
### Importing dataset into Python environment
Use the following code chunk to import the dataset into Python environment as a DataFrame.
```
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("archanatikayatray/ASRS-ChatGPT")
#Converting the dataset into a pandas DataFrame
dataset = pd.DataFrame(dataset["train"])
dataset = dataset.astype({'ACN':'string'})
#Viewing the last 10 rows of the annotated dataset
dataset.tail(10)
```
### Limitations
Certain columns within this dataset include information generated by ChatGPT and therefore may not be entirely accurate. Consequently, it is advised to exercise caution when using the generated data for decision-making purposes.
### Citation Information
```
@Article{TikayatRay-ASRS,
AUTHOR = {Tikayat Ray, Archana and Bhat, Anirudh Prabhakara and White, Ryan T. and Nguyen, Van Minh and Pinon Fischer, Olivia J. and Mavris, Dimitri N.},
TITLE = {Examining the Potential of Generative Language Models for Aviation Safety Analysis: Case Study and Insights Using the Aviation Safety Reporting System (ASRS)},
JOURNAL = {Aerospace},
VOLUME = {10},
YEAR = {2023},
NUMBER = {9},
ARTICLE-NUMBER = {770},
URL = {https://www.mdpi.com/2226-4310/10/9/770},
ISSN = {2226-4310},
DOI = {10.3390/aerospace10090770}
}
``` | 3,561 | [
JacquesVlaming/Questions_Answers | 2023-06-30T11:08:49.000Z | [
"region:us"
] | JacquesVlaming | null | null | 0 | 3 | 2023-06-30T11:08:39 | ---
dataset_info:
features:
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 921084
num_examples: 976
- name: validation
num_bytes: 111135
num_examples: 108
download_size: 221671
dataset_size: 1032219
---
# Dataset Card for "Questions_Answers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 496 | [
nishantup/langchain-docs | 2023-07-01T06:33:20.000Z | [
"region:us"
] | nishantup | null | null | 0 | 3 | 2023-07-01T06:32:09 | Entry not found | 15 | [
Jalilov/document-segment | 2023-07-01T20:31:44.000Z | [
"region:us"
] | Jalilov | null | null | 0 | 3 | 2023-07-01T20:23:23 | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 105189330.0
num_examples: 100
download_size: 0
dataset_size: 105189330.0
---
# Dataset Card for "document-segment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 399 | [
YeungNLP/WizardLM_evol_instruct_V2_143k | 2023-07-02T06:50:48.000Z | [
"region:us"
] | YeungNLP | null | null | 8 | 3 | 2023-07-02T06:17:08 | Entry not found | 15 | [
Veucci/lyric-to-3genre | 2023-07-04T14:10:50.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-4.0",
"music",
"region:us"
] | Veucci | null | null | 1 | 3 | 2023-07-02T13:02:31 | ---
license: cc-by-nc-4.0
size_categories:
- 1K<n<10K
task_categories:
- text-classification
language:
- en
tags:
- music
---
# Song Lyrics Dataset
## Description
This dataset contains a collection of song lyrics from various artists and genres in English. It is intended to be used for research, analysis, and other non-commercial purposes.
## Dataset Details
The dataset is organized in a tabular format with the following columns:
- `Genre` (int): Genre of the lyrics
- `Lyrics` (str): The lyrics of the song.
- Pop: 979 rows
- Rock: 995 rows
- Hip-Hop: 1040 rows
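A quick look at the class distribution implied by the row counts listed above (the mapping between integer genre ids and genre names is not documented by the card, so it is not assumed here):

```python
# Row counts per genre, copied from the list above.
rows = {"Pop": 979, "Rock": 995, "Hip-Hop": 1040}

total = sum(rows.values())
print(total)  # 3014

# Rough class balance as fractions of the dataset:
balance = {genre: round(n / total, 3) for genre, n in rows.items()}
print(balance)
```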
## Usage
Feel free to use this dataset for non-commercial purposes such as academic research, natural language processing tasks, sentiment analysis, or personal projects. You are allowed to analyze, modify, and derive insights from the dataset.
If you use this dataset in your work, we kindly request that you provide attribution by citing this repository or linking back to it.
## License
This dataset is released under the Creative Commons Attribution-NonCommercial license. This means that you are not allowed to use the dataset for commercial purposes. For detailed information about the license, please refer to the [LICENSE](./LICENSE) file.
## Contact
If you have any questions, suggestions, or concerns regarding this dataset, please feel free to reach out to email at [efe.ozkan732@gmail.com](mailto:efe.ozkan732@gmail.com).
Happy exploring and analyzing the world of song lyrics!
| 1,473 | [
Symato/c4_vi-filtered_200GB | 2023-07-03T11:53:47.000Z | [
"region:us"
] | Symato | null | null | 0 | 3 | 2023-07-03T08:35:42 | Entry not found | 15 | [
AhmedBou/clinical_terms_synonyms | 2023-07-21T15:18:20.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"medical",
"region:us"
] | AhmedBou | null | null | 0 | 3 | 2023-07-03T09:57:10 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
---
A dataset consisting of 359 clinical trial terms, each accompanied by a list of synonyms. | 215 | [
Senem/Nostalgic_Sentiment_Analysis_of_YouTube_Comments_Data | 2023-10-03T12:49:45.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:afl-3.0",
"youtube comments",
"nostalgia",
"nlp",
"music",
"sentiment analysis",
"region:us"
] | Senem | null | null | 1 | 3 | 2023-07-03T18:56:02 | ---
language:
- en
license: afl-3.0
task_categories:
- text-classification
tags:
- youtube comments
- nostalgia
- nlp
- music
- sentiment analysis
size_categories:
- 1K<n<10K
paper:
- Comparison of Neural Network Models for Nostalgic Sentiment Analysis of YouTube Comments
---
# Dataset Summary
+ The dataset is a collection of YouTube comments captured using the YouTube Data API.
+ The dataset consists of 1500 nostalgic and non-nostalgic comments in English.
# Languages
The language of the data is English.
# Citation
If you find this dataset useful for your study, please cite the paper as follows:
```bibtex
@article{postalcioglu2020comparison,
title={Comparison of Neural Network Models for Nostalgic Sentiment Analysis of YouTube Comments},
author={Postalcioglu, Seda and Aktas, Senem},
journal={Hittite Journal of Science and Engineering},
volume={7},
number={3},
pages={215--221},
year={2020},
publisher={Hitit University}
}
``` | 978 | [
bias-amplified-splits/qqp | 2023-07-04T11:47:36.000Z | [
"task_categories:text-classification",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.18917",
"arxiv:1804.07461",
"region:us"
] | bias-amplified-splits | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | @inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
} | 0 | 3 | 2023-07-03T21:05:01 | ---
license: cc-by-4.0
dataset_info:
- config_name: minority_examples
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_duplicate
'1': duplicate
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 42391456
num_examples: 297735
- name: train.anti_biased
num_bytes: 8509364
num_examples: 66111
- name: validation.biased
num_bytes: 4698206
num_examples: 32968
- name: validation.anti_biased
num_bytes: 955548
num_examples: 7462
download_size: 70726976
dataset_size: 56554574
- config_name: partial_input
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_duplicate
'1': duplicate
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 42788212
num_examples: 297735
- name: train.anti_biased
num_bytes: 8112608
num_examples: 66111
- name: validation.biased
num_bytes: 4712327
num_examples: 33084
- name: validation.anti_biased
num_bytes: 941427
num_examples: 7346
download_size: 70726976
dataset_size: 56554574
task_categories:
- text-classification
language:
- en
pretty_name: Quora Questions Pairs
---
# Dataset Card for Bias-amplified Splits for QQP
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [GLUE](https://arxiv.org/abs/1804.07461)
### Dataset Summary
Bias-amplified splits is a novel evaluation framework for assessing model robustness: it amplifies dataset biases in the training data and challenges models to generalize beyond them. The framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.
Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.
Here we apply our framework to Quora Question Pairs (QQP), a dataset of question pairs in which the task is to determine whether the two questions are paraphrases of each other (i.e., have the same meaning).
Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.
#### Evaluation Results (DeBERTa-large)
##### For splits based on minority examples:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 93.0 | 77.6 |
| Biased training split | 87.0 | 36.8 |
##### For splits based on partial-input model:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 93.0 | 81.3 |
| Biased training split | 90.3 | 63.9 |
#### Loading the Data
```python
from datasets import load_dataset
# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/qqp", "minority_examples")
# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['validation.anti_biased']
```
## Dataset Structure
### Data Instances
Data instances are taken directly from QQP (GLUE version), and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```json
{
"idx": 56,
"question1": "How do I buy used car in India?",
"question2": "Which used car should I buy in India?",
"label": 0
}
```
### Data Fields
- `idx`: unique identifier for the example within its original data splits (e.g., validation set)
- `question1`: a question asked on Quora
- `question2`: a question asked on Quora
- `label`: one of `0` and `1` (`not duplicate` and `duplicate`)
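With the `datasets` library, these names are recoverable from the split's `ClassLabel` feature (e.g. `dataset["train.biased"].features["label"].int2str(0)`). A dependency-free sketch of the same mapping, using the names declared in the YAML header above:

```python
# Label names as declared in this dataset's ClassLabel feature.
LABEL_NAMES = ["not_duplicate", "duplicate"]

def int2str(label_id: int) -> str:
    """Map a label id (0 or 1) to its name."""
    return LABEL_NAMES[label_id]

def str2int(name: str) -> int:
    """Map a label name back to its id."""
    return LABEL_NAMES.index(name)
```

For the instance shown above, `int2str(0)` returns `"not_duplicate"`.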
### Data Splits
Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.
Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
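As a rough illustration of the minority-examples idea (cluster learned representations and flag examples that fall in small clusters), here is a self-contained sketch over toy 2-D points. The actual method operates on model representations and differs in its details, so treat this as a conceptual sketch only:

```python
def dist2(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means with deterministic farthest-point initialization."""
    centers = [points[0]]
    while len(centers) < k:
        # Next center: the point farthest from all current centers.
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return assign

def minority_flags(points, k=2, minority_frac=0.2):
    """Flag examples whose cluster holds less than `minority_frac` of the data."""
    assign = kmeans(points, k)
    sizes = {c: assign.count(c) for c in assign}
    return [sizes[a] < minority_frac * len(points) for a in assign]
```

Examples flagged `True` would go to the anti-biased split, the rest to the biased split.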
#### Minority Examples
| Dataset Split | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased | 297735 |
| Train - anti-biased | 66111 |
| Validation - biased | 32968 |
| Validation - anti-biased | 7462 |
#### Partial-input Baselines
| Dataset Split | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased | 297735 |
| Train - anti-biased | 66111 |
| Validation - biased | 33084 |
| Validation - anti-biased | 7346 |
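The partial-input splits follow a simple recipe: train a model that sees only part of the input (e.g., only `question2`), then route the examples it classifies correctly to the biased split and the rest to the anti-biased split. A minimal sketch of that routing step, with a deliberately toy stand-in for the trained partial-input model (illustration only, not the paper's actual classifier):

```python
def partial_input_split(examples, predict_from_partial):
    """Route each example by whether a partial-input model labels it correctly.

    `predict_from_partial` sees only `question2`; examples it gets right are
    treated as biased (solvable from spurious cues), the rest as anti-biased.
    """
    biased, anti_biased = [], []
    for ex in examples:
        if predict_from_partial(ex["question2"]) == ex["label"]:
            biased.append(ex)
        else:
            anti_biased.append(ex)
    return biased, anti_biased

def toy_partial_model(question2):
    """Toy stand-in classifier: predict "duplicate" iff the question ends with '?'."""
    return 1 if question2.endswith("?") else 0
```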
## Dataset Creation
### Curation Rationale
NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset, turning it into a test of model robustness.
### Annotations
#### Annotation process
No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.
## Considerations for Using the Data
### Social Impact of Dataset
Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.
### Discussion of Biases
We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.
## Additional Information
### Dataset Curators
Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).
QQP data was released by Quora and is distributed as part of the GLUE benchmark.
### Citation Information
```bibtex
@misc{reif2023fighting,
  title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
  author = "Yuval Reif and Roy Schwartz",
  month = may,
  year = "2023",
  url = "https://arxiv.org/abs/2305.18917",
}
```
Source dataset:
```bibtex
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
```
llm-book/aio_from_tohoku | 2023-10-25T15:31:29.000Z | [
"region:us"
] | llm-book | null | null | 0 | 3 | 2023-07-04T04:52:32 |
---
dataset_info:
features:
- name: qid
dtype: string
- name: competition
dtype: string
- name: timestamp
dtype: string
- name: section
dtype: string
- name: number
dtype: string
- name: original_question
dtype: string
- name: original_answer
dtype: string
- name: original_additional_info
dtype: string
- name: question
dtype: string
- name: answers
list: string
splits:
- name: train
num_bytes: 9464003
num_examples: 22335
- name: validation
num_bytes: 409779
num_examples: 1000
download_size: 2267163
dataset_size: 9873782
---
# Dataset Card for llm-book/aio
This is the QA dataset from the「AI王」("AI King") competition, used in the book 『大規模言語モデル入門』 (*Introduction to Large Language Models*).
It is built from the dataset published in the GitHub repository [cl-tohoku/quiz-datasets](https://github.com/cl-tohoku/quiz-datasets).
## Licence
The copyright of some quiz questions included in this dataset belongs to the [abc/EQIDEN Executive Committee](https://abc-dive.com/portal/), and permission has been obtained to use those questions in the book.
Some quiz questions included in this dataset were produced on commission by [株式会社キュービック](http://www.qbik.co.jp/) and [株式会社カプリティオ](https://capriccio.tokyo/), and are provided under the [Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/deed.ja) license.
The Wikipedia content attached to this dataset as passages is distributed under the [Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/deed.ja) license and the [GNU Free Documentation License (GFDL)](https://www.gnu.org/licenses/fdl.html).
For details on the licensing of the quiz questions, see [cl-tohoku/quiz-datasets](https://github.com/cl-tohoku/quiz-datasets).
ranWang/un_pdf_random_preprocessed | 2023-07-05T04:02:00.000Z | [
"region:us"
] | ranWang | null | null | 0 | 3 | 2023-07-05T03:14:36 |
---
dataset_info:
features:
- name: zh
dtype: string
- name: en
dtype: string
- name: fr
dtype: string
- name: es
dtype: string
- name: ru
dtype: string
- name: record
dtype: string
splits:
- name: train
num_bytes: 4169741628
num_examples: 15293
download_size: 1988954290
dataset_size: 4169741628
---
# Dataset Card for "un_pdf_random_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AlexWortega/flan_translated_300k | 2023-07-28T20:33:06.000Z | [
"task_categories:question-answering",
"language:ru",
"license:mit",
"region:us"
] | AlexWortega | null | null | 1 | 3 | 2023-07-05T11:09:45 |
---
dataset_info:
features:
- name: inputs
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 332929649
num_examples: 373184
download_size: 144707378
dataset_size: 332929649
license: mit
task_categories:
- question-answering
language:
- ru
---
# Dataset Card for "flan_translated_300k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Citation
```bibtex
@misc{AlexWortega/flan_translated_300k,
  author = {Pavel Ilin, Ksenia Zolian, Ilya Kuleshov, Egor Kokush, Aleksandr Nikolich},
  title = {Russian Flan translated},
  url = {https://huggingface.co/datasets/AlexWortega/flan_translated_300k},
  year = 2023
}
```
cfierro/mutability_classifier_data | 2023-09-08T15:17:36.000Z | [
"region:us"
] | cfierro | null | null | 0 | 3 | 2023-07-05T12:40:40 |
---
dataset_info:
features:
- name: relation
dtype: string
- name: template
dtype: string
- name: subject
dtype: string
- name: answers
sequence: string
- name: is_mutable
dtype: int64
- name: sub_uri
sequence: string
splits:
- name: train
num_bytes: 457254
num_examples: 3940
- name: validation
num_bytes: 359168
num_examples: 2925
- name: test
num_bytes: 552292
num_examples: 4187
download_size: 466173
dataset_size: 1368714
---
# Dataset Card for "mutability_classifier_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)