id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
Asad321/irfan-junejo-tweerts | 2023-10-08T13:51:35.000Z | [
"region:us"
] | Asad321 | null | null | 0 | 40 | 2023-10-08T13:51:28 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 42301
num_examples: 126
download_size: 14643
dataset_size: 42301
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "irfan-junejo-tweerts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 441 | [
[
-0.032135009765625,
-0.007587432861328125,
-0.0106964111328125,
0.037750244140625,
-0.0217742919921875,
0.0083770751953125,
0.00887298583984375,
-0.01139068603515625,
0.07135009765625,
0.0264739990234375,
-0.067626953125,
-0.06317138671875,
-0.057220458984375,
... |
minh21/COVID-QA-Chunk-64-testset-biencoder-data-65_25_10-v2 | 2023-10-09T11:18:22.000Z | [
"region:us"
] | minh21 | null | null | 0 | 40 | 2023-10-09T03:47:51 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 13595044
num_examples: 203
download_size: 0
dataset_size: 13595044
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-Chunk-64-testset-biencoder-data-65_25_10-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 665 | [
[
-0.03033447265625,
-0.0146331787109375,
-0.0024204254150390625,
0.021514892578125,
-0.0286102294921875,
-0.00511932373046875,
0.036102294921875,
-0.01629638671875,
0.04547119140625,
0.010650634765625,
-0.045257568359375,
-0.036346435546875,
-0.037445068359375,
... |
Rewcifer/radio-llama2-resp_tag | 2023-10-12T02:40:26.000Z | [
"region:us"
] | Rewcifer | null | null | 0 | 40 | 2023-10-12T02:40:23 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5416871
num_examples: 1000
download_size: 1250589
dataset_size: 5416871
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "radio-llama2-resp_tag"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 449 | [
[
-0.046875,
0.013397216796875,
0.019287109375,
0.0211029052734375,
-0.024688720703125,
0.00998687744140625,
0.0123138427734375,
-0.026123046875,
0.06878662109375,
0.0188446044921875,
-0.06036376953125,
-0.044586181640625,
-0.038818359375,
-0.00965118408203125... |
shossain/govreport-qa-no-pad-32768 | 2023-10-13T02:45:56.000Z | [
"region:us"
] | shossain | null | null | 0 | 40 | 2023-10-13T02:45:14 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 931486852.8093396
num_examples: 7706
download_size: 286389043
dataset_size: 931486852.8093396
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-no-pad-32768"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 563 | [
[
-0.0408935546875,
-0.0178070068359375,
0.02685546875,
0.02044677734375,
-0.0191802978515625,
0.0007166862487792969,
0.038970947265625,
0.00567626953125,
0.0654296875,
0.043975830078125,
-0.045989990234375,
-0.05889892578125,
-0.0301361083984375,
-0.012573242... |
Elsy-Kimaka/MVC_Swahili | 2023-10-14T15:15:40.000Z | [
"region:us"
] | Elsy-Kimaka | null | null | 0 | 40 | 2023-10-14T10:22:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
promptora11/train1 | 2023-10-18T07:16:10.000Z | [
"region:us"
] | promptora11 | null | null | 0 | 40 | 2023-10-18T07:16:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 372714
num_examples: 191
download_size: 241786
dataset_size: 372714
---
# Dataset Card for "train1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 510 | [
[
-0.044586181640625,
-0.0031986236572265625,
0.005672454833984375,
0.025238037109375,
-0.0093841552734375,
-0.01050567626953125,
0.0243377685546875,
-0.004730224609375,
0.060089111328125,
0.0243072509765625,
-0.070556640625,
-0.04144287109375,
-0.043609619140625,... |
Eitanli/recipes | 2023-10-24T12:40:20.000Z | [
"region:us"
] | Eitanli | null | null | 0 | 40 | 2023-10-24T12:40:15 | ---
dataset_info:
features:
- name: recipe
dtype: string
splits:
- name: train
num_bytes: 105767040
num_examples: 74465
download_size: 53711472
dataset_size: 105767040
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "recipes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 443 | [
[
-0.0290679931640625,
-0.0177459716796875,
0.020355224609375,
0.0200347900390625,
0.00850677490234375,
-0.0035877227783203125,
0.01904296875,
-0.004840850830078125,
0.0706787109375,
0.0526123046875,
-0.06341552734375,
-0.0582275390625,
-0.046600341796875,
-0.... |
atmallen/qm_bob_mixture_1.0e_0.5p_finetuning | 2023-10-27T05:48:11.000Z | [
"region:us"
] | atmallen | null | null | 0 | 40 | 2023-10-27T05:48:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: statement
dtype: string
- name: choices
sequence: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: true_label
dtype: bool
splits:
- name: train
num_bytes: 18841148
num_examples: 200000
- name: validation
num_bytes: 1903144
num_examples: 20000
- name: test
num_bytes: 1902969
num_examples: 20000
download_size: 6338810
dataset_size: 22647261
---
# Dataset Card for "qm_bob_mixture_1.0e_0.5p_finetuning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 872 | [
[
-0.051727294921875,
-0.0253448486328125,
0.0053558349609375,
0.034759521484375,
-0.0298614501953125,
0.00963592529296875,
0.0220947265625,
0.0032634735107421875,
0.06121826171875,
0.046478271484375,
-0.050628662109375,
-0.05401611328125,
-0.03167724609375,
-... |
MaNaN-3/twitter_sentiment_analysis | 2023-10-29T07:35:22.000Z | [
"region:us"
] | MaNaN-3 | null | null | 0 | 40 | 2023-10-29T06:32:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Adi-0-0-Gupta/Cauliflower | 2023-10-31T08:20:12.000Z | [
"region:us"
] | Adi-0-0-Gupta | null | null | 0 | 40 | 2023-10-31T08:19:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: image
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
- name: label
dtype: int64
splits:
- name: train
num_bytes: 281232787
num_examples: 2819
- name: valid
num_bytes: 9368108
num_examples: 95
download_size: 288189220
dataset_size: 290600895
---
# Dataset Card for "Cauliflower"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 648 | [
[
-0.0258941650390625,
-0.03985595703125,
0.008148193359375,
0.018798828125,
0.00843048095703125,
-0.00980377197265625,
0.0025272369384765625,
-0.0199737548828125,
0.06475830078125,
0.0286407470703125,
-0.055206298828125,
-0.05084228515625,
-0.056182861328125,
... |
yuvalkirstain/task_prediction_train2 | 2023-10-31T18:48:49.000Z | [
"region:us"
] | yuvalkirstain | null | null | 0 | 40 | 2023-10-31T18:48:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: path
dtype: string
- name: text
dtype: string
- name: task_name
dtype: string
splits:
- name: train
num_bytes: 659890949
num_examples: 5663600
- name: validation
num_bytes: 7823929
num_examples: 60002
download_size: 148156628
dataset_size: 667714878
---
# Dataset Card for "task_prediction_train2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 648 | [
[
-0.01557159423828125,
-0.002960205078125,
0.0131072998046875,
0.02117919921875,
-0.0038623809814453125,
-0.01398468017578125,
0.01169586181640625,
-0.021484375,
0.034027099609375,
0.0245819091796875,
-0.06158447265625,
-0.03106689453125,
-0.0579833984375,
-0... |
flax-sentence-embeddings/stackexchange_xml | 2021-07-26T01:38:48.000Z | [
"region:us"
] | flax-sentence-embeddings | null | null | 1 | 39 | 2022-03-02T23:29:22 | This is a dump of the files from
https://archive.org/details/stackexchange
downloaded via torrent on 2021-07-01.
Publication date: 2021-06-07
Usage: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA)
Topics: Stack Exchange Data Dump
Contributor: Stack Exchange Community
Please see the license information at:
https://archive.org/details/stackexchange
The dataset has been split into the following subsets for cleaner formatting.
- https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_math_jsonl
- https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl
- https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl
- https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_titlebody_best_and_down_voted_answer_jsonl | 842 | [
[
-0.035614013671875,
-0.042999267578125,
0.015655517578125,
0.042205810546875,
0.0006093978881835938,
0.0001481771469116211,
0.0191802978515625,
-0.017181396484375,
0.031463623046875,
0.0633544921875,
-0.0295867919921875,
-0.0272369384765625,
-0.03765869140625,
... |
hfface/poopi | 2021-03-20T20:20:31.000Z | [
"region:us"
] | hfface | null | null | 0 | 39 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01496124267578125,
0.057159423828125,
0.02880859375,
-0.0350341796875,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.0379638... |
huggingFaceUser02/air21_grp13_inference_results | 2021-06-15T19:15:17.000Z | [
"region:us"
] | huggingFaceUser02 | null | null | 0 | 39 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
huggingFaceUser02/air21_grp13_tokenized_results | 2021-06-10T11:00:20.000Z | [
"region:us"
] | huggingFaceUser02 | null | null | 0 | 39 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jamescalam/climate-fever-similarity | 2021-12-18T09:24:24.000Z | [
"region:us"
] | jamescalam | null | null | 0 | 39 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jimregan/foinse | 2021-10-06T20:42:52.000Z | [
"region:us"
] | jimregan | Foinse was an Irish-language magazine site.
This script uses a list of articles retrieved from the
Wayback Machine to build a corpus. | null | 0 | 39 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
nateraw/image-folder | 2021-07-12T03:53:03.000Z | [
"region:us"
] | nateraw | null | null | 0 | 39 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
projecte-aina/ancora-ca-ner | 2023-09-13T12:44:29.000Z | [
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-4.0",
"arxiv:2107.07903",
"region:us"
] | projecte-aina | AnCora Catalan NER.
This is a dataset for Named Entity Recognition (NER) from the AnCora corpus, adapted for
Machine Learning and Language Model evaluation purposes.
Since multiwords (including Named Entities) in the original AnCora corpus are aggregated as
a single lexical item using underscores (e.g. "Ajuntament_de_Barcelona"),
we split them to align with a word-per-line format, and added conventional Begin-Inside-Outside (IOB)
tags to mark and classify Named Entities.
We did not filter out the different categories of NEs from AnCora (weak and strong).
We did 6 minor edits by hand.
The AnCora corpus is used under the [CC-by](https://creativecommons.org/licenses/by/4.0/) licence.
This dataset was developed by BSC TeMU as part of the AINA project, and to enrich the Catalan Language Understanding Benchmark (CLUB). | 0 | 39 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: ancora-ca-ner
size_categories:
- unknown
source_datasets: []
task_categories: []
task_ids: []
---
# Dataset Card for AnCora-Ca-NER
## Dataset Description
- **Website:** https://zenodo.org/record/5036651
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Paper:** [AnCora: Multilevel Annotated Corpora for Catalan and Spanish](http://www.lrec-conf.org/proceedings/lrec2008/pdf/35_paper.pdf)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
This is a dataset for Named Entity Recognition (NER) in Catalan. It adapts <a href="http://clic.ub.edu/corpus/">AnCora corpus</a> for Machine Learning and Language Model evaluation purposes.
[AnCora corpus](http://clic.ub.edu/corpus/) is used under [CC-by](https://creativecommons.org/licenses/by/4.0/) licence.
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Named Entity Recognition, Language Modelling
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
Three two-column files, one for each split.
<pre>
Fundació B-ORG
Privada I-ORG
Fira I-ORG
de I-ORG
Manresa I-ORG
ha O
fet O
un O
balanç O
de O
l' O
activitat O
del O
Palau B-LOC
Firal I-LOC
</pre>
### Data Fields
Every file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one.
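For illustration, a minimal sketch of reading such a two-column IOB file (the file path is hypothetical; it assumes whitespace-separated columns and blank lines between sentences):
```python
def read_iob(path):
    """Read a two-column file: token and IOB tag per line, blank lines between sentences."""
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:                      # blank line = sentence boundary
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
                continue
            token, tag = line.split()[:2]
            tokens.append(token)
            tags.append(tag)
    if tokens:                                # flush the last sentence
        sentences.append((tokens, tags))
    return sentences
```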
### Data Splits
We took the original train, dev and test splits from the [UD version of the corpus](https://huggingface.co/datasets/universal_dependencies)
- train: 10,630 examples
- validation: 1,429 examples
- test: 1,528 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
[AnCora](http://clic.ub.edu/corpus/) consists of a Catalan corpus (AnCora-CA) and a Spanish corpus (AnCora-ES), each of them of 500,000 tokens (some multi-word). The corpora are annotated for linguistic phenomena at different levels.
The AnCora corpus is mainly based on newswire texts. For more information, refer to Taulé, M., M.A. Martí, M. Recasens (2009): <a href="http://www.lrec-conf.org/proceedings/lrec2008/pdf/35_paper.pdf">"AnCora: Multilevel Annotated Corpora for Catalan and Spanish"</a>, Proceedings of the 6th International Conference on Language Resources and Evaluation.
#### Who are the source language producers?
Catalan [AnCora corpus](http://clic.ub.edu/corpus/) is compiled from articles from the following news outlets: <a href="https://www.efe.com">EFE</a>, <a href="https://www.acn.cat">ACN</a>, <a href="https://www.elperiodico.cat/ca/">El Periodico</a>.
### Annotations
#### Annotation process
We adapted the NER labels from [AnCora corpus](http://clic.ub.edu/corpus/) to a token-per-line, multi-column format.
#### Who are the annotators?
Original annotators from [AnCora corpus](http://clic.ub.edu/corpus/).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
### Licensing information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">Attribution 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4529299)
### Contributions
[N/A] | 5,439 | [
[
-0.029998779296875,
-0.038360595703125,
0.005901336669921875,
0.04229736328125,
-0.00885009765625,
0.025543212890625,
-0.026519775390625,
-0.041778564453125,
0.0283355712890625,
0.0256805419921875,
-0.020172119140625,
-0.0625,
-0.035552978515625,
0.024291992... | |
westphal-jan/mnli_entailment | 2022-04-19T15:13:12.000Z | [
"region:us"
] | westphal-jan | null | null | 0 | 39 | 2022-04-16T14:06:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
allenai/objaverse | 2023-03-31T11:05:57.000Z | [
"language:en",
"license:odc-by",
"arxiv:2212.08051",
"region:us"
] | allenai | null | null | 248 | 39 | 2022-12-12T19:06:33 | ---
license: odc-by
language:
- en
viewer: false
---
# Objaverse
Objaverse is a Massive Dataset with 800K+ Annotated 3D Objects.
More documentation is coming soon. In the meantime, please see our [paper](https://arxiv.org/abs/2212.08051) and [website](https://objaverse.allenai.org/) for additional details.
# License
The use of the dataset as a whole is licensed under the [ODC-By v1.0](https://opendatacommons.org/licenses/by/1-0/) license. Individual objects in Objaverse are all licensed as creative commons distributable objects, and may be under the following licenses:
- [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) - 721K objects
- [CC-BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) - 25K objects
- [CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) - 52K objects
- [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) - 16K objects
- [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/) - 3.5K objects
The metadata will provide the license for each object.
# Citation
To cite Objaverse, please use the following BibTeX entry:
```bibtex
@article{objaverse,
title={Objaverse: A Universe of Annotated 3D Objects},
author={Matt Deitke and Dustin Schwenk and Jordi Salvador and Luca Weihs and
Oscar Michel and Eli VanderBilt and Ludwig Schmidt and
Kiana Ehsani and Aniruddha Kembhavi and Ali Farhadi},
journal={arXiv preprint arXiv:2212.08051},
year={2022}
}
``` | 1,469 | [
[
-0.055023193359375,
-0.06158447265625,
0.042938232421875,
0.0028533935546875,
-0.0077667236328125,
-0.0268402099609375,
0.0085906982421875,
-0.055450439453125,
0.01004791259765625,
0.07452392578125,
-0.0294189453125,
-0.03460693359375,
-0.036224365234375,
0.... |
dbarbedillo/SMS_Spam_Multilingual_Collection_Dataset | 2023-01-13T03:07:17.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"language:zh",
"language:es",
"language:hi",
"language:fr",
"language:de",
"language:ar",
"language:bn",
"language:ru",
"language:pt",
"language:id",
"language:ur",
"language:ja",
"language:pa",
"langua... | dbarbedillo | null | null | 6 | 39 | 2023-01-13T02:13:03 | ---
license: gpl
task_categories:
- text-classification
language:
- en
- zh
- es
- hi
- fr
- de
- ar
- bn
- ru
- pt
- id
- ur
- ja
- pa
- jv
- tr
- ko
- mr
- uk
- sv
- 'no'
size_categories:
- 1K<n<10K
---
# SMS Spam Multilingual Collection Dataset
A collection of multilingual SMS messages tagged as spam or legitimate.
## Context
The SMS Spam Collection is a set of tagged SMS messages that have been collected for SMS spam research. It originally contained one set of 5,574 SMS messages in English, tagged as either ham (legitimate) or spam, which was later machine-translated into Hindi, German and French.
The text has been further translated into Spanish, Chinese, Arabic, Bengali, Russian, Portuguese, Indonesian, Urdu, Japanese, Punjabi, Javanese, Turkish, Korean, Marathi, Ukrainian, Swedish, and Norwegian using M2M100_418M, a multilingual encoder-decoder (seq-to-seq) model trained for many-to-many multilingual translation, created by Facebook AI.
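As an illustration, a minimal sketch of this translation step with the `facebook/m2m100_418M` checkpoint in `transformers` (the example sentence is hypothetical):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "en"                              # source language: English
sms = "You have won a prize! Call now to claim it."    # hypothetical SMS text
encoded = tokenizer(sms, return_tensors="pt")
# Force the decoder to generate Spanish
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("es"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```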
## Content
The augmented dataset contains multilingual text and corresponding labels:
- ham: non-spam text
- spam: spam text
## Acknowledgments
The original English text was taken from https://www.kaggle.com/uciml/sms-spam-collection-dataset
Hindi, German and French were taken from https://www.kaggle.com/datasets/rajnathpatel/multilingual-spam-data | 1,324 | [
[
-0.034942626953125,
-0.058685302734375,
-0.00629425048828125,
0.035980224609375,
-0.02203369140625,
0.0018644332885742188,
-0.02496337890625,
-0.043670654296875,
0.019195556640625,
0.066650390625,
-0.04248046875,
-0.046722412109375,
-0.056976318359375,
0.039... |
Cohere/miracl-zh-queries-22-12 | 2023-02-06T11:55:33.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:zh",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | 21 | 39 | 2023-01-31T13:38:51 | ---
annotations_creators:
- expert-generated
language:
- zh
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (zh) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
For each document we compute the embedding of `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity.
Compare the query embeddings with the document embeddings, either through a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-zh-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim) for the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document is among the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
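For reference, a minimal sketch of how hit@k can be computed from a ranked result list (the helper and its inputs are illustrative, not part of this dataset):
```python
def hit_at_k(ranked_doc_ids, relevant_doc_ids, k=3):
    """True if at least one annotated-relevant document is in the top-k results."""
    return any(doc_id in relevant_doc_ids for doc_id in ranked_doc_ids[:k])

# Example: the relevant document "d7" is ranked second, so hit@3 is True
print(hit_at_k(["d1", "d7", "d9", "d2"], {"d7"}, k=3))
```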
| 6,103 | [
[
-0.04547119140625,
-0.056427001953125,
0.0259857177734375,
0.01380157470703125,
-0.006866455078125,
-0.006267547607421875,
-0.022308349609375,
-0.03460693359375,
0.039398193359375,
0.0152130126953125,
-0.04345703125,
-0.07489013671875,
-0.0474853515625,
0.02... |
LabelStudio/IMDB_Sample_100 | 2023-02-22T20:34:17.000Z | [
"size_categories:n<1K",
"Tutorial",
"region:us"
] | LabelStudio | null | null | 2 | 39 | 2023-02-17T20:32:25 | ---
tags:
- Tutorial
size_categories:
- n<1K
---
## Zero to One: Label Studio Tutorial Dataset
This dataset is used in the [Label Studio Zero to One Tutorial](https://hubs.ly/Q01CNlyy0). It was originally provided by [Andrew Maas](https://ai.stanford.edu/~amaas/) ([ref](https://ai.stanford.edu/~amaas/papers/wvSent_acl2011.bib)) and is an open, well-known dataset. The original dataset has over 100,000 reviews.
### Parsing down 100,000 reviews to 100 reviews
To parse this dataset down to 100 reviews, [Chris Hoge](https://huggingface.co/hogepodge) and I ([Erin Mikail Staples](https://huggingface.co/erinmikail)) took the following steps.
We started by [writing a script](https://s3.amazonaws.com/labelstud.io/datasets/IMDB_collect.py) that walked the directory structure to capture the data and metadata as rows of data. The data was written in randomized batches with rows corresponding to:
- 0 - 25,000: Labeled training data, with positive and negative sentiment mixed.
- 25,001 - 75,000: Unlabeled training data.
- 75,001 - 100,000: Labeled testing data, with positive and negative sentiment mixed.
These batches were also written out as separate files for convenience. Finally, the first 100 rows of each batch were written out as separate files to support faster loading for a streamlined learning experience.
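A minimal sketch of that batch layout (the file and column names here are hypothetical; the actual script linked above is authoritative):
```python
import pandas as pd

# Hypothetical combined file with one review per row, in the batch order above
df = pd.read_csv("imdb_combined.csv")

labeled_train = df.iloc[:25_000]          # labeled training data (mixed sentiment)
unlabeled_train = df.iloc[25_000:75_000]  # unlabeled training data
labeled_test = df.iloc[75_000:]           # labeled testing data (mixed sentiment)

# 100-row sample of each batch for faster loading
for name, batch in [("train", labeled_train), ("unlabeled", unlabeled_train), ("test", labeled_test)]:
    batch.head(100).to_csv(f"imdb_{name}_100.csv", index=False)
```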
Our thanks to Andrew Maas for having provided this free data set from their research.
## Did you try your hand at this tutorial?
We'd love to hear you share your results and how it worked out for you!
Did you build something else with the data?
Let us know! Join us in the [Label Studio Slack Community](https://hubs.ly/Q01CNprb0) or drop us an [email](mailto:community@labelstud.io).
## Enjoy what we're working on?
Drop us a star on [GitHub](https://hubs.ly/Q01CNp4W0)!
| 1,829 | [
[
-0.067626953125,
-0.033477783203125,
0.031158447265625,
0.00603485107421875,
-0.026702880859375,
-0.007656097412109375,
0.005706787109375,
0.00826263427734375,
0.0556640625,
0.0556640625,
-0.0638427734375,
-0.0701904296875,
-0.04083251953125,
0.0064735412597... |
ruanchaves/faquad-nli | 2023-04-13T18:26:38.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:extended|wikipedia",
"language:pt",
"license:cc-by-4.0",
"region:us"
] | ruanchaves | 2 | 39 | 2023-03-13T14:08:59 | ---
pretty_name: FaQuAD-NLI
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
# paperswithcode_id: faquad
train-eval-index:
- config: plain_text
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: validation
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: squad
name: SQuAD
---
# Dataset Card for FaQuAD-NLI
## Dataset Description
- **Homepage:** https://github.com/liafacom/faquad
- **Repository:** https://github.com/liafacom/faquad
- **Paper:** https://ieeexplore.ieee.org/document/8923668/
<!-- - **Leaderboard:** -->
- **Point of Contact:** Eraldo R. Fernandes <eraldoluis@gmail.com>
### Dataset Summary
FaQuAD is a Portuguese reading comprehension dataset that follows the format of the Stanford Question Answering Dataset (SQuAD). It is a pioneering Portuguese reading comprehension dataset in this challenging format. The dataset aims to address the problem of abundant questions sent by academics whose answers are found in available institutional documents in the Brazilian higher education system. It consists of 900 questions about 249 reading passages taken from 18 official documents of a computer science college from a Brazilian federal university and 21 Wikipedia articles related to the Brazilian higher education system.
FaQuAD-NLI is a modified version of the [FaQuAD dataset](https://huggingface.co/datasets/eraldoluis/faquad) that repurposes the question answering task as a textual entailment task between a question and its possible answers.
### Supported Tasks and Leaderboards
- `question_answering`: The dataset can be used to train a model for question-answering tasks in the domain of Brazilian higher education institutions.
- `textual_entailment`: FaQuAD-NLI can be used to train a model for textual entailment tasks, where answers in Q&A pairs are classified as either suitable or unsuitable.
### Languages
This dataset is in Brazilian Portuguese.
## Dataset Structure
### Data Fields
- `document_index`: an integer representing the index of the document.
- `document_title`: a string containing the title of the document.
- `paragraph_index`: an integer representing the index of the paragraph within the document.
- `question`: a string containing the question related to the paragraph.
- `answer`: a string containing the answer related to the question.
- `label`: an integer (0 or 1) representing if the answer is suitable (1) or unsuitable (0) for the question.
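Given these fields, a minimal loading sketch (split names follow the Data Splits section below):
```python
from datasets import load_dataset

# Load FaQuAD-NLI from the Hub
dataset = load_dataset("ruanchaves/faquad-nli")

row = dataset["train"][0]
print(row["question"])
print(row["answer"])
print("suitable" if row["label"] == 1 else "unsuitable")
```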
### Data Splits
The dataset is split into three subsets: train, validation, and test.
The splits were made carefully to avoid question and answer pairs belonging to the same document appearing in more than one split.
| | Train | Validation | Test |
|------------|-------|------------|------|
| Instances | 3128 | 731 | 650 |
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | 3,248 | [
[
-0.0335693359375,
-0.07440185546875,
0.00525665283203125,
0.0269317626953125,
0.007061004638671875,
-0.0133056640625,
0.01629638671875,
0.00041222572326660156,
0.01073455810546875,
0.04461669921875,
-0.051910400390625,
-0.041748046875,
-0.01995849609375,
0.0... | ||
katarinagresova/Genomic_Benchmarks_human_nontata_promoters | 2023-03-13T19:33:47.000Z | [
"region:us"
] | katarinagresova | null | null | 0 | 39 | 2023-03-13T19:06:27 | ---
dataset_info:
features:
- name: seq
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 7126511
num_examples: 27097
- name: test
num_bytes: 2375942
num_examples: 9034
download_size: 0
dataset_size: 9502453
---
# Dataset Card for "Genomic_Benchmarks_human_nontata_promoters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 472 | [
[
-0.044281005859375,
0.0020389556884765625,
-0.00601959228515625,
0.00902557373046875,
-0.005313873291015625,
0.00859832763671875,
0.00792694091796875,
0.0029201507568359375,
0.05645751953125,
0.015167236328125,
-0.047271728515625,
-0.060455322265625,
-0.03323364... |
slvnwhrl/tenkgnad-clustering-s2s | 2023-04-24T12:04:03.000Z | [
"size_categories:10K<n<100K",
"language:de",
"license:cc-by-nc-sa-4.0",
"embeddings",
"clustering",
"benchmark",
"arxiv:2210.07316",
"region:us"
] | slvnwhrl | null | null | 0 | 39 | 2023-04-21T14:03:22 | ---
license: cc-by-nc-sa-4.0
language:
- de
tags:
- embeddings
- clustering
- benchmark
size_categories:
- 10K<n<100K
---
This dataset can be used as a benchmark for clustering word embeddings for <b>German</b>.
The dataset contains news article titles and is based on the [One Million Posts Corpus](https://ofai.github.io/million-post-corpus/) and [10kGNAD](https://github.com/tblock/10kGNAD). It contains 10'267 unique samples, 10 splits with 1'436 to 9'962 samples each, and 9 unique classes. Splits are built similarly to MTEB's [TwentyNewsgroupsClustering](https://huggingface.co/datasets/mteb/twentynewsgroups-clustering) ([Paper](https://arxiv.org/abs/2210.07316)).
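A minimal sketch of an MTEB-style clustering evaluation on one split, assuming an `embed` function that maps titles to sentence embeddings (the function is a placeholder, not part of this dataset):
```python
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score

def evaluate_split(titles, labels, embed, n_classes=9):
    """Cluster embedded titles and score the clustering against the true classes."""
    embeddings = embed(titles)  # (n_samples, dim) array from any sentence-embedding model
    preds = KMeans(n_clusters=n_classes, n_init=10).fit_predict(embeddings)
    return v_measure_score(labels, preds)
```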
Have a look at [German Text Embedding Clustering Benchmark](https://github.com/ClimSocAna/tecb-de) for more info, datasets and evaluation results. | 833 | [
[
-0.03485107421875,
-0.05859375,
0.040130615234375,
0.03936767578125,
-0.0445556640625,
0.007167816162109375,
-0.019927978515625,
-0.007053375244140625,
0.0282135009765625,
0.0130767822265625,
-0.0219879150390625,
-0.08245849609375,
-0.061004638671875,
0.0078... |
slvnwhrl/tenkgnad-clustering-p2p | 2023-04-24T12:06:18.000Z | [
"size_categories:10K<n<100K",
"language:de",
"license:cc-by-nc-sa-4.0",
"embeddings",
"clustering",
"benchmark",
"arxiv:2210.07316",
"region:us"
] | slvnwhrl | null | null | 0 | 39 | 2023-04-21T14:07:23 | ---
license: cc-by-nc-sa-4.0
language:
- de
tags:
- embeddings
- clustering
- benchmark
size_categories:
- 10K<n<100K
---
This dataset can be used as a benchmark for clustering word embeddings for <b>German</b>.
The dataset contains news article titles and is based on the [One Million Posts Corpus](https://ofai.github.io/million-post-corpus/) and [10kGNAD](https://github.com/tblock/10kGNAD). It contains 10'275 unique samples, 10 splits with 1'436 to 9'962 samples each, and 9 unique classes. Splits are built similarly to MTEB's [TwentyNewsgroupsClustering](https://huggingface.co/datasets/mteb/twentynewsgroups-clustering) ([Paper](https://arxiv.org/abs/2210.07316)).
Have a look at [German Text Embedding Clustering Benchmark](https://github.com/ClimSocAna/tecb-de) for more info, datasets and evaluation results. | 833 | [
[
-0.03466796875,
-0.05853271484375,
0.040374755859375,
0.03936767578125,
-0.044219970703125,
0.007450103759765625,
-0.0201416015625,
-0.00707244873046875,
0.027923583984375,
0.012786865234375,
-0.0218048095703125,
-0.0823974609375,
-0.06097412109375,
0.008041... |
Celestinian/minimal-wikipedia-corpus-raw | 2023-04-29T17:05:03.000Z | [
"license:mit",
"region:us"
] | Celestinian | null | null | 1 | 39 | 2023-04-27T21:22:23 | ---
license: mit
datasetsviewer:
not_supported: true
---
A dataset of Wikipedia's most popular articles, an extensive collection of unprocessed text data covering a diverse range of topics including history, science, critical thinking, mathematics, and more.
This dataset aims to facilitate the pretraining of large language models by providing a vast corpus of informative content, making it a useful resource for researchers and developers.
Its unprocessed format and diverse range of topics make it well suited for pretraining custom models that can understand and generate natural language text. | 649 | [
[
-0.055938720703125,
-0.0474853515625,
0.0101776123046875,
-0.00597381591796875,
-0.023681640625,
-0.007171630859375,
-0.03521728515625,
-0.003582000732421875,
0.0263824462890625,
0.039276123046875,
-0.038360595703125,
-0.03240966796875,
-0.0257415771484375,
... |
ChilleD/LastLetterConcat | 2023-05-11T13:43:26.000Z | [
"region:us"
] | ChilleD | null | null | 0 | 39 | 2023-05-11T13:42:51 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
jxu124/cc_sbu_align | 2023-05-20T18:52:57.000Z | [
"region:us"
] | jxu124 | null | null | 0 | 39 | 2023-05-20T18:37:49 | ---
dataset_info:
features:
- name: caption
dtype: string
- name: global_image_id
dtype: string
- name: image_path
dtype: string
- name: anns_id
dtype: string
splits:
- name: train
num_bytes: 1561212
num_examples: 3439
download_size: 721956
dataset_size: 1561212
---
# Dataset Card for "cc_sbu_align"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 475 | [
[
-0.032135009765625,
-0.0239715576171875,
0.01030731201171875,
0.00397491455078125,
-0.0166778564453125,
0.0010156631469726562,
0.0158843994140625,
0.00384521484375,
0.052520751953125,
0.027587890625,
-0.0723876953125,
-0.047210693359375,
-0.0254364013671875,
... |
xbgoose/ravdess | 2023-05-21T22:35:11.000Z | [
"region:us"
] | xbgoose | null | null | 0 | 39 | 2023-05-21T22:33:45 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: modality
dtype: string
- name: vocal_channel
dtype: string
- name: emotion
dtype: string
- name: emotional_intensity
dtype: string
- name: statement
dtype: string
- name: repetition
dtype: string
- name: actor
dtype: int64
- name: gender
dtype: string
splits:
- name: train
num_bytes: 595474115.04
num_examples: 1440
download_size: 324920159
dataset_size: 595474115.04
---
# Dataset Card for "ravdess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 669 | [
[
-0.03466796875,
-0.0236968994140625,
0.004180908203125,
0.0196685791015625,
-0.00983428955078125,
0.0026149749755859375,
0.0257568359375,
-0.0136260986328125,
0.06121826171875,
0.039825439453125,
-0.061859130859375,
-0.05511474609375,
-0.034454345703125,
-0.... |
Abzu/dolly_hhrlhf_wizard | 2023-05-26T08:59:42.000Z | [
"region:us"
] | Abzu | null | null | 0 | 39 | 2023-05-25T09:15:25 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 108006083.60236111
num_examples: 84468
- name: test
num_bytes: 12001528.397638885
num_examples: 9386
download_size: 67011577
dataset_size: 120007612.0
---
# Dataset Card for "dolly_hhrlhf_wizard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 489 | [
[
-0.0285491943359375,
-0.0275726318359375,
-0.0134124755859375,
0.01540374755859375,
0.00812530517578125,
-0.00351715087890625,
0.0345458984375,
-0.000009655952453613281,
0.05828857421875,
0.053375244140625,
-0.05828857421875,
-0.047027587890625,
-0.0440673828125... |
vgaraujov/wmt13 | 2023-07-04T08:17:47.000Z | [
"region:us"
] | vgaraujov | null | @InProceedings{bojar-EtAl:2013:WMT,
author = {Bojar, Ond\v{r}ej and Buck, Christian and Callison-Burch, Chris and Federmann, Christian and Haddow, Barry and Koehn, Philipp and Monz, Christof and Post, Matt and Soricut, Radu and Specia, Lucia},
title = {Findings of the 2013 {Workshop on Statistical Machine Translation}},
booktitle = {Proceedings of the Eighth Workshop on Statistical Machine Translation},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {1--44},
url = {http://www.aclweb.org/anthology/W13-2201}
} | 0 | 39 | 2023-05-31T14:10:00 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
sam-mosaic/chat-v2 | 2023-07-18T00:23:25.000Z | [
"language:en",
"region:us"
] | sam-mosaic | null | null | 2 | 39 | 2023-06-04T10:09:03 | ---
language: en
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1053541716.4621352
num_examples: 306305
- name: test
num_bytes: 20265459.694286585
num_examples: 5339
download_size: 505718158
dataset_size: 1073807176.1564217
---
# Dataset Card for "chat_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 534 | [
[
-0.02313232421875,
-0.0299835205078125,
0.0013399124145507812,
0.0249481201171875,
-0.01479339599609375,
-0.006786346435546875,
0.0187225341796875,
-0.0251617431640625,
0.05682373046875,
0.038330078125,
-0.06256103515625,
-0.0391845703125,
-0.04290771484375,
... |
clarin-knext/trec-covid-pl-qrels | 2023-06-07T08:11:44.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | 0 | 39 | 2023-06-06T22:38:14 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
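A minimal loading sketch (the split name is an assumption and may differ for this qrels repository):
```python
from datasets import load_dataset

# Relevance judgments (qrels) for the Polish TREC-COVID task
qrels = load_dataset("clarin-knext/trec-covid-pl-qrels", split="test")
print(qrels[0])
```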
Contact: konrad.wojtasik@pwr.edu.pl | 201 | [
[
-0.0153961181640625,
-0.0628662109375,
0.03546142578125,
0.0164031982421875,
-0.0221710205078125,
-0.0103607177734375,
-0.01160430908203125,
-0.034515380859375,
-0.0013275146484375,
0.0286102294921875,
-0.03826904296875,
-0.048126220703125,
-0.0290069580078125,
... |
KaiLv/UDR_Python | 2023-06-21T12:45:54.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 39 | 2023-06-21T12:44:42 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 153748508
num_examples: 250818
- name: validation
num_bytes: 8561595
num_examples: 13841
- name: test
num_bytes: 9299006
num_examples: 14840
- name: debug
num_bytes: 61463442
num_examples: 100000
download_size: 107210496
dataset_size: 233072551
---
# Dataset Card for "UDR_Python"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 703 | [
[
-0.031005859375,
-0.017364501953125,
-0.0012197494506835938,
0.017974853515625,
-0.00249481201171875,
-0.00029277801513671875,
0.018707275390625,
-0.0022678375244140625,
0.03631591796875,
0.022430419921875,
-0.040557861328125,
-0.040191650390625,
-0.030822753906... |
numind/C4-generic-sentiment_analysis-v1 | 2023-06-30T14:32:43.000Z | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"task_ids:sentiment-analysis",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
] | numind | null | null | 3 | 39 | 2023-06-28T08:06:02 | ---
license: mit
task_categories:
- text-classification
- zero-shot-classification
task_ids:
- sentiment-analysis
language:
- en
size_categories:
- 100K<n<1M
---
250k sentences (40k after balancing the classes) from the C4 dataset (a cleaned version of Common Crawl) with sentiment annotations (Positive, Negative, Neutral), automatically annotated with GPT-3.5. Can be used to train a generic (domain-agnostic) sentiment analysis model.
Labels:
- 0: Positive
- 1: Negative
- 2: Neutral
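A minimal loading sketch; the column names (`text`, `label`) are assumptions, not verified against the schema:
```python
from datasets import load_dataset

label_names = {0: "Positive", 1: "Negative", 2: "Neutral"}
ds = load_dataset("numind/C4-generic-sentiment_analysis-v1", split="train")

row = ds[0]
print(row["text"], "->", label_names[row["label"]])
```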
| 468 | [
[
-0.051605224609375,
-0.0166168212890625,
0.0200042724609375,
0.0133819580078125,
-0.03265380859375,
-0.0008788108825683594,
0.00472259521484375,
-0.0400390625,
0.0201873779296875,
0.0291595458984375,
-0.04290771484375,
-0.05938720703125,
-0.032989501953125,
... |
alan-23/HealthAssistant115 | 2023-07-05T11:36:11.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"medical",
"biology",
"region:us"
] | alan-23 | null | null | 4 | 39 | 2023-07-05T11:31:24 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- medical
- biology
pretty_name: Health Assistant
size_categories:
- n<1K
---
This dataset was created as an experiment in fine-tuning an LLM. The questions were created by ChatGPT in answer to "Give me questions a person can ask about for healthy life".
The answers were created with the OpenAI API. | 355 | [
[
-0.0243377685546875,
-0.075439453125,
0.0194244384765625,
-0.01110076904296875,
-0.00907135009765625,
-0.029754638671875,
0.006877899169921875,
-0.03277587890625,
0.0310211181640625,
0.062225341796875,
-0.076904296875,
-0.00875091552734375,
-0.006381988525390625... |
sngsfydy/aptos_train | 2023-07-19T19:52:54.000Z | [
"region:us"
] | sngsfydy | null | null | 0 | 39 | 2023-07-19T18:43:34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
splits:
- name: train
num_bytes: 7204351279.337521
num_examples: 2929
download_size: 7192333107
dataset_size: 7204351279.337521
---
# Dataset Card for "aptos_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 534 | [
[
-0.0421142578125,
0.00005042552947998047,
0.0101470947265625,
0.00896453857421875,
-0.033050537109375,
-0.005069732666015625,
0.0289764404296875,
-0.0081939697265625,
0.069580078125,
0.03564453125,
-0.04339599609375,
-0.0496826171875,
-0.050628662109375,
0.0... |
satpalsr/indicCorpv2 | 2023-07-31T08:34:06.000Z | [
"task_categories:text-generation",
"language:as",
"language:brx",
"language:bn",
"language:doi",
"language:en",
"language:gom",
"language:gu",
"language:hi",
"language:kha",
"language:kn",
"language:ks",
"language:mai",
"language:ml",
"language:mni",
"language:mr",
"language:ne",
"... | satpalsr | IndicCORPV2 is the largest collection of texts for Indic langauges consisting of 20.9 Billion tokens of which 14.4B tokens correspond to 23 Indic languages and 6.B tokens of Indian English content curated from Indian websites. | @article{Doddapaneni2022towards,
title={Towards Leaving No Indic Language Behind: Building Monolingual Corpora, Benchmark and Models for Indic Languages},
author={Sumanth Doddapaneni and Rahul Aralikatte and Gowtham Ramesh and Shreyansh Goyal and Mitesh M. Khapra and Anoop Kunchukuttan and Pratyush Kumar},
journal={ArXiv},
year={2022},
volume={abs/2212.05409}
} | 1 | 39 | 2023-07-31T06:50:51 | ---
license: cc0-1.0
language:
- as
- brx
- bn
- doi
- en
- gom
- gu
- hi
- kha
- kn
- ks
- mai
- ml
- mni
- mr
- ne
- or
- pa
- sa
- sat
- sd
- ta
- te
- ur
task_categories:
- text-generation
---
# IndicCorpv2
[Github](https://github.com/AI4Bharat/IndicBERT) | [Paper](https://arxiv.org/abs/2212.05409)
Languages: 24
The largest collection of texts for Indic languages consisting of 20.9 billion tokens of which 14.4B tokens correspond to 23 Indic languages and 6.5B tokens of Indian English content curated from Indian websites.
<img src="https://huggingface.co/datasets/satpalsr/indicCorpv2/resolve/main/languages.png">
## Citation
```
@article{Doddapaneni2022towards,
title={Towards Leaving No Indic Language Behind: Building Monolingual Corpora, Benchmark and Models for Indic Languages},
author={Sumanth Doddapaneni and Rahul Aralikatte and Gowtham Ramesh and Shreyansh Goyal and Mitesh M. Khapra and Anoop Kunchukuttan and Pratyush Kumar},
journal={ArXiv},
year={2022},
volume={abs/2212.05409}
}
``` | 1,024 | [
[
-0.0278472900390625,
-0.0240631103515625,
-0.00916290283203125,
0.037506103515625,
-0.0245361328125,
0.04327392578125,
-0.047210693359375,
-0.040496826171875,
0.0232086181640625,
0.0257568359375,
-0.024688720703125,
-0.0364990234375,
-0.031768798828125,
0.03... |
bigheiniuJ/Natural-Instruction | 2023-08-02T04:52:07.000Z | [
"region:us"
] | bigheiniuJ | null | null | 0 | 39 | 2023-08-01T02:47:28 | ---
dataset_info:
features:
- name: id
dtype: string
- name: task_name
dtype: string
- name: definition
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: pos_examples
list:
- name: explanation
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 163080536
num_examples: 75448
- name: test
num_bytes: 23754694
num_examples: 11810
download_size: 0
dataset_size: 186835230
---
# Dataset Card for "Natural-Instruction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 723 | [
[
-0.038909912109375,
-0.050323486328125,
0.0032901763916015625,
0.01104736328125,
-0.0111541748046875,
-0.002399444580078125,
-0.005527496337890625,
-0.01456451416015625,
0.043792724609375,
0.042633056640625,
-0.06689453125,
-0.0537109375,
-0.021514892578125,
... |
Antreas/TALI | 2023-08-20T15:50:08.000Z | [
"task_categories:zero-shot-classification",
"size_categories:1M<n<10M",
"license:cc-by-4.0",
"video",
"audio",
"text",
"image",
"tetramodal",
"multimodal",
"youtube",
"wikipedia",
"region:us"
] | Antreas | null | null | 1 | 39 | 2023-08-16T22:59:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: image_url
dtype: string
- name: item_idx
dtype: int64
- name: wit_features
struct:
- name: attribution_passes_lang_id
sequence: bool
- name: caption_alt_text_description
sequence: string
- name: caption_reference_description
sequence: string
- name: caption_title_and_reference_description
sequence: string
- name: context_page_description
sequence: string
- name: context_section_description
sequence: string
- name: hierarchical_section_title
sequence: string
- name: is_main_image
sequence: bool
- name: language
sequence: string
- name: page_changed_recently
sequence: bool
- name: page_title
sequence: string
- name: page_url
sequence: string
- name: section_title
sequence: string
- name: wit_idx
dtype: int64
- name: youtube_title_text
dtype: string
- name: youtube_description_text
dtype: string
- name: youtube_video_content
dtype: binary
- name: youtube_video_starting_time
dtype: string
- name: youtube_subtitle_text
dtype: string
- name: youtube_video_size
dtype: int64
- name: youtube_video_file_path
dtype: string
splits:
- name: train
num_bytes: 1902638101655.625
num_examples: 1052915
- name: val
num_bytes: 104485442867.25
num_examples: 57958
- name: test
num_bytes: 111107332347.375
num_examples: 61389
download_size: 2058391040534
dataset_size: 2118230876870.25
license: cc-by-4.0
task_categories:
- zero-shot-classification
tags:
- video
- audio
- text
- image
- tetramodal
- multimodal
- youtube
- wikipedia
pretty_name: TALI
size_categories:
- 1M<n<10M
---
# Dataset Card for "TALI"
## Table of Contents
1. Dataset Description
   1. Abstract
   2. Brief Description
2. Dataset Information
   1. Modalities
   2. Dataset Variants
   3. Dataset Statistics
   4. Data Fields
   5. Data Splits
3. Dataset Creation
4. Dataset Use
5. Additional Information
## Dataset Description
### Abstract
TALI is a large-scale, tetramodal dataset designed to facilitate a shift from unimodal and duomodal to tetramodal research in deep learning. It aligns text, video, images, and audio, providing a rich resource for innovative self-supervised learning tasks and multimodal research. TALI enables exploration of how different modalities and data/model scaling affect downstream performance, with the aim of inspiring diverse research ideas and enhancing understanding of model capabilities and robustness in deep learning.
### Brief Description
TALI (Temporally and semantically Aligned Audio, Language and Images) is a dataset that uses the Wikipedia Image Text (WIT) captions and article titles to search Youtube for videos that match the captions. It then downloads the video, audio, and subtitles from these videos. The result is a rich multimodal dataset that has multiple caption types related to both the WiT Images, and the Youtube videos. This enables learning to take place between either temporally or semantically aligned text, images, audio and video.
## Dataset Information
### Modalities
The TALI dataset consists of the following modalities:
1. Image:
1. Wikipedia caption image
2. Randomly sampled image from youtube video
2. Text
1. Wikipedia Caption Text
2. Wikipedia Title Text
3. Wikipedia Main Body Text
4. YouTube Subtitle Text
5. YouTube Description Text
6. YouTube Title Text
3. Audio
1. YouTube Content Audio
4. Video
1. YouTube Content Video
### Dataset Variants
The TALI dataset comes in three variants that differ in the training set size:
- TALI-small: Contains about 1.3 million 30-second video clips, aligned with 120K WiT entries.
- TALI-base: Contains about 6.5 million 30-second video clips, aligned with 120K WiT entries.
- TALI-big: Contains about 13 million 30-second video clips, aligned with 120K WiT entries.
The validation and test sets remain consistent across all three variants at about 80K videos aligned to 8K Wikipedia entries (10 subclips for each Wikipedia entry) each.
### Dataset Statistics
TBA
## Dataset Creation
The TALI dataset was created by starting from the WiT dataset and using either the context_page_description or page_title as a source-query to search YouTube for videos that were Creative Commons opted-in and not age-restricted. The top 100 result titles were returned and compared with the source-query using the CLIP text embeddings of the largest CLIP model available. The top-1 title's video based on the CLIP ranking was chosen and downloaded. The video was broken into 30-second segments, and the top-10 segments for each video were chosen based on the distance between the CLIP image embedding of the first image of each segment and the video's title text. The image, audio, and subtitle frames were extracted from these segments. At sampling time, one of these 10 segments is randomly selected, and a 10-second segment is chosen out of the 30-second clip. The result is 200 video frames (spread throughout the 10-second segment) and 160000 audio frames (10 seconds).
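The CLIP ranking step can be illustrated with a minimal sketch (this is not the authors' pipeline code); it assumes the `transformers` CLIP implementation, `source_query` and `result_titles` are hypothetical variables, and the checkpoint name stands in for "the largest CLIP model available":
```python
# Minimal sketch of the CLIP-based title ranking described above.
import torch
from transformers import CLIPModel, CLIPTokenizer

MODEL_ID = "openai/clip-vit-large-patch14"  # stand-in checkpoint (assumption)
model = CLIPModel.from_pretrained(MODEL_ID)
tokenizer = CLIPTokenizer.from_pretrained(MODEL_ID)

def rank_titles(source_query, result_titles):
    texts = [source_query] + result_titles
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_text_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalize embeddings
    sims = emb[1:] @ emb[0]                     # cosine similarity to the query
    return result_titles[int(sims.argmax())]    # top-1 title by CLIP ranking
```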
## Dataset Use
TALI is designed for use in a wide range of multimodal research tasks (a minimal loading sketch follows the list below), including but not limited to:
- Multimodal understanding and reasoning
- Self-supervised learning
- Multimodal alignment and translation
- Multimodal summarization
- Multimodal question answering
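The sketch below shows one way to load TALI, assuming the standard `datasets` API; streaming is advisable given the roughly 2 TB download size listed in the metadata:
```python
from datasets import load_dataset

# Stream to avoid downloading the full ~2 TB of video data up front.
tali = load_dataset("Antreas/TALI", streaming=True)
sample = next(iter(tali["train"]))
print(sample["wit_features"]["page_title"], sample["youtube_title_text"])
```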
## Dataset Curators: Antreas Antoniou
Citation Information: TBA
Contributions: Thanks to all contributors including data curators, annotators, and software developers.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 5,926 | [
[
-0.044647216796875,
-0.048614501953125,
0.00649261474609375,
-0.0011081695556640625,
-0.0165863037109375,
0.003681182861328125,
-0.0285186767578125,
-0.013092041015625,
0.0198516845703125,
-0.01297760009765625,
-0.0596923828125,
-0.04937744140625,
-0.05654907226... |
corbt/unlabeled-recipes | 2023-08-23T23:43:57.000Z | [
"region:us"
] | corbt | null | null | 0 | 39 | 2023-08-23T23:43:56 | ---
dataset_info:
features:
- name: recipe
dtype: string
splits:
- name: train
num_bytes: 2793853
num_examples: 5000
download_size: 1465640
dataset_size: 2793853
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "unlabeled-recipes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 447 | [
[
-0.0252227783203125,
-0.0130462646484375,
0.01230621337890625,
0.0159912109375,
-0.00507354736328125,
0.011016845703125,
0.0011796951293945312,
-0.004535675048828125,
0.06561279296875,
0.06707763671875,
-0.058563232421875,
-0.062042236328125,
-0.0462646484375,
... |
codefuse-ai/Evol-instruction-66k | 2023-10-23T08:02:12.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | codefuse-ai | null | null | 48 | 39 | 2023-09-07T07:48:34 | ---
license: cc-by-nc-sa-4.0
viewer: false
---
# Dataset Card for CodeFuse-Evol-instruction-66k
<div align='center'>

[[中文]](#chinese) [[English]](#english)
</div>
<a id="english"></a>
## Dataset Description
Evol-instruction-66k data is based on the method mentioned in the paper "WizardCoder: Empowering Code Large Language Models with Evol-Instruct". It enhances the fine-tuning effect of pre-trained code large models by adding complex code instructions.
This data is processed based on an open-source dataset, which can be found at [Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1). The processing includes low-quality filtering, filtering similar data through HumanEval evaluation, etc. After filtering from the original 80k data, we obtain 66k high-quality training and fine-tuning data.
## Creation
<strong>Filtering low-quality data</strong><br>
(1) Filter data with instruction length less than 10 words or greater than 1000 words;<br>
(2) Filter data with output length less than 50 words or greater than 2000 words;<br>
(3) Filter data with no markdown format or multiple markdown formats in the output;<br>
(4) Filter data with less than 3 lines or more than 100 lines of code in the markdown format;<br>
(5) Filter data with a description preceding the markdown format containing more than 200 words (a sketch of rules (1)-(5) follows below).<br>
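A minimal sketch of these five rules, assuming whitespace-separated word counts and standard markdown fences (the exact tokenization used by the authors is not specified):
```python
import re

def passes_quality_filters(instruction: str, output: str) -> bool:
    # (1) instruction length between 10 and 1000 words
    if not 10 <= len(instruction.split()) <= 1000:
        return False
    # (2) output length between 50 and 2000 words
    if not 50 <= len(output.split()) <= 2000:
        return False
    # (3) exactly one fenced markdown code block in the output
    blocks = re.findall(r"```[^\n]*\n(.*?)```", output, flags=re.DOTALL)
    if len(blocks) != 1:
        return False
    # (4) between 3 and 100 lines of code inside the block
    if not 3 <= len(blocks[0].strip().splitlines()) <= 100:
        return False
    # (5) at most 200 words of description before the code block
    return len(output.split("```", 1)[0].split()) <= 200
```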
<strong>Filtering data similar to HumanEval</strong><br>
(1) Filter data containing any function name from HumanEval;<br>
(2) Use NLTK to remove stop words and punctuation from the HumanEval docstrings to obtain core words such as "sort array prime", then filter out data containing more than 40% of these core words (a sketch follows below).<br>
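And a sketch of the core-word overlap test, assuming NLTK tokenization and a hypothetical precomputed `humaneval_core_words` set; the 40% criterion is interpreted here as the fraction of a sample's content words that appear in that set:
```python
import string

import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)
nltk.download("punkt", quiet=True)
STOP = set(stopwords.words("english")) | set(string.punctuation)

def too_similar_to_humaneval(sample: str, humaneval_core_words: set,
                             threshold: float = 0.4) -> bool:
    # Keep only content words, then measure overlap with the core-word set.
    words = [w for w in nltk.word_tokenize(sample.lower()) if w not in STOP]
    if not words:
        return False
    overlap = sum(w in humaneval_core_words for w in words)
    return overlap / len(words) > threshold
```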
## Structure
````
Instruction:
Execute a multistage computational task consisting of organizing the provided list of numerals in an ascending sequence and then determining the middle, or median, figure from the resultant ordered sequence.
data = [12, 30, 4, 18, 24]
Output:
Here's a Python solution:
```python
data = [12, 30, 4, 18, 24]
# organizing the list in ascending sequence
data.sort()
# Finding the middle figure from the sorted sequence
n = len(data)
if n % 2 == 0:
median = (data[n//2 - 1] + data[n//2]) / 2
else:
median = data[n//2]
print(f\"Sorted Data: {data}\")
print(f\"Median: {median}\")
```
First, we sort the list in ascending order using the `sort()` function. Then, we find the median. If the length of the list is even, the median is the average of the two middle numbers. If the length is odd, the median is the middle number. In this specific case, because the list length is odd (5), the middle value (i.e., the median) is the 3rd value in the sorted list.
````
<a id="chinese"></a>
## Introduction
The Evol-instruction-66k data is built following the method described in the paper "WizardCoder: Empowering Code Large Language Models with Evol-Instruct", enhancing the fine-tuning effect of pre-trained code large models by adding complex code instructions.
The data is derived from the open-source dataset [Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) through a series of processing steps, including low-quality filtering and filtering of data similar to the HumanEval benchmark; filtering the original 80k samples yields 66k high-quality training and fine-tuning samples.
## Data Production Process
<strong>Filtering low-quality data</strong><br>
(1) Filter data whose instruction is shorter than 10 words or longer than 1000 words;<br>
(2) Filter data whose output is shorter than 50 words or longer than 2000 words;<br>
(3) Filter data whose output contains no markdown code block or multiple markdown code blocks;<br>
(4) Filter data whose markdown code block contains fewer than 3 or more than 100 lines of code;<br>
(5) Filter data where the description preceding the markdown code block exceeds 200 words.<br>
<strong>Filtering data similar to HumanEval</strong><br>
(1) Filter data containing any function name from HumanEval;<br>
(2) Use NLTK to remove stop words and punctuation from the HumanEval docstrings to obtain core words such as "sort array prime",<br>
then filter out data containing more than 40% of these HumanEval core words.<br>
## Data Structure
````
Instruction:
Execute a multistage computational task consisting of organizing the provided list of numerals in an ascending sequence and then determining the middle, or median, figure from the resultant ordered sequence.
data = [12, 30, 4, 18, 24]
Output:
Here's a Python solution:
```python
data = [12, 30, 4, 18, 24]
# organizing the list in ascending sequence
data.sort()
# Finding the middle figure from the sorted sequence
n = len(data)
if n % 2 == 0:
median = (data[n//2 - 1] + data[n//2]) / 2
else:
median = data[n//2]
print(f\"Sorted Data: {data}\")
print(f\"Median: {median}\")
```
First, we sort the list in ascending order using the `sort()` function. Then, we find the median. If the length of the list is even, the median is the average of the two middle numbers. If the length is odd, the median is the middle number. In this specific case, because the list length is odd (5), the middle value (i.e., the median) is the 3rd value in the sorted list.
```` | 4,767 | [
[
-0.04205322265625,
-0.05035400390625,
0.006229400634765625,
0.008544921875,
-0.03240966796875,
-0.022918701171875,
-0.0220794677734375,
-0.0216217041015625,
0.00994873046875,
0.03594970703125,
-0.027496337890625,
-0.059234619140625,
-0.046478271484375,
0.032... |
sordonia/wiki_mmlu_from_valid_all | 2023-09-13T18:25:50.000Z | [
"region:us"
] | sordonia | null | null | 0 | 39 | 2023-09-13T18:25:09 | ---
dataset_info:
features:
- name: subject
dtype: string
- name: docno
dtype: int64
- name: score
dtype: float64
- name: dfq
dtype: int64
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1394179124
num_examples: 136591
download_size: 767951516
dataset_size: 1394179124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wiki_mmlu_from_valid_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 694 | [
[
-0.0276641845703125,
-0.034332275390625,
0.0192108154296875,
0.00592803955078125,
-0.01303863525390625,
-0.017059326171875,
0.00792694091796875,
-0.0004551410675048828,
0.05889892578125,
0.02740478515625,
-0.05975341796875,
-0.0482177734375,
-0.039825439453125,
... |
jherng/rsna-2023-abdominal-trauma-detection | 2023-10-10T06:56:40.000Z | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:1K<n<10K",
"license:mit",
"region:us"
] | jherng | This dataset is the preprocessed version of the dataset from RSNA 2023 Abdominal Trauma Detection Kaggle Competition.
It is tailored for segmentation and classification tasks. It contains 3 different configs as described below:
- segmentation: 206 instances where each instance includes a CT scan in NIfTI format, a segmentation mask in NIfTI format, and its relevant metadata (e.g., patient_id, series_id, incomplete_organ, aortic_hu, pixel_representation, bits_allocated, bits_stored)
- classification: 4711 instances where each instance includes a CT scan in NIfTI format, target labels (e.g., extravasation, bowel, kidney, liver, spleen, any_injury), and its relevant metadata (e.g., patient_id, series_id, incomplete_organ, aortic_hu, pixel_representation, bits_allocated, bits_stored)
- classification-with-mask: 206 instances where each instance includes a CT scan in NIfTI format, a segmentation mask in NIfTI format, target labels (e.g., extravasation, bowel, kidney, liver, spleen, any_injury), and its relevant metadata (e.g., patient_id, series_id, incomplete_organ, aortic_hu, pixel_representation, bits_allocated, bits_stored)
All CT scans and segmentation masks have already been resampled to a voxel spacing of (2.0, 2.0, 3.0), hence the reduced file size. | @InProceedings{huggingface:dataset,
title = {RSNA 2023 Abdominal Trauma Detection Dataset},
author={Hong Jia Herng},
year={2023}
}
@misc{rsna-2023-abdominal-trauma-detection,
author = {Errol Colak, Hui-Ming Lin, Robyn Ball, Melissa Davis, Adam Flanders, Sabeena Jalal, Kirti Magudia, Brett Marinelli, Savvas Nicolaou, Luciano Prevedello, Jeff Rudie, George Shih, Maryam Vazirabad, John Mongan},
title = {RSNA 2023 Abdominal Trauma Detection},
publisher = {Kaggle},
year = {2023},
url = {https://kaggle.com/competitions/rsna-2023-abdominal-trauma-detection}
} | 0 | 39 | 2023-09-19T10:10:47 | ---
license: mit
dataset_info:
- config_name: classification
features:
- name: img_path
dtype: string
- name: bowel
dtype:
class_label:
names:
"0": healthy
"1": injury
- name: extravasation
dtype:
class_label:
names:
"0": healthy
"1": injury
- name: kidney
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: liver
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: spleen
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: any_injury
dtype: bool
- name: metadata
struct:
- name: series_id
dtype: int32
- name: patient_id
dtype: int32
- name: incomplete_organ
dtype: bool
- name: aortic_hu
dtype: float32
- name: pixel_representation
dtype: int32
- name: bits_allocated
dtype: int32
- name: bits_stored
dtype: int32
splits:
- name: train
num_bytes: 802231
num_examples: 4239
- name: test
num_bytes: 89326
num_examples: 472
download_size: 96729254048
dataset_size: 891557
- config_name: classification-with-mask
features:
- name: img_path
dtype: string
- name: seg_path
dtype: string
- name: bowel
dtype:
class_label:
names:
"0": healthy
"1": injury
- name: extravasation
dtype:
class_label:
names:
"0": healthy
"1": injury
- name: kidney
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: liver
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: spleen
dtype:
class_label:
names:
"0": healthy
"1": low
"2": high
- name: any_injury
dtype: bool
- name: metadata
struct:
- name: series_id
dtype: int32
- name: patient_id
dtype: int32
- name: incomplete_organ
dtype: bool
- name: aortic_hu
dtype: float32
- name: pixel_representation
dtype: int32
- name: bits_allocated
dtype: int32
- name: bits_stored
dtype: int32
splits:
- name: train
num_bytes: 58138
num_examples: 185
- name: test
num_bytes: 6600
num_examples: 21
download_size: 4196738529
dataset_size: 64738
- config_name: segmentation
features:
- name: img_path
dtype: string
- name: seg_path
dtype: string
- name: metadata
struct:
- name: series_id
dtype: int32
- name: patient_id
dtype: int32
- name: incomplete_organ
dtype: bool
- name: aortic_hu
dtype: float32
- name: pixel_representation
dtype: int32
- name: bits_allocated
dtype: int32
- name: bits_stored
dtype: int32
splits:
- name: train
num_bytes: 50714
num_examples: 185
- name: test
num_bytes: 5757
num_examples: 21
download_size: 4196631843
dataset_size: 56471
task_categories:
- image-classification
- image-segmentation
pretty_name: RSNA 2023 Abdominal Trauma Detection (Preprocessed)
size_categories:
- 1K<n<10K
---
# Dataset Card for RSNA 2023 Abdominal Trauma Detection (Preprocessed)
## Dataset Description
- **Homepage:** [https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection](https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection)
- **Source:** [https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data](https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data)
### Dataset Summary
This dataset is the preprocessed version of the dataset from [RSNA 2023 Abdominal Trauma Detection Kaggle Competition](https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data).
It is tailored for segmentation and classification tasks. It contains 3 different configs as described below:
- **classification**:
- 4711 instances where each instance includes a CT scan in NIfTI format, target labels, and its relevant metadata.
- **segmentation**:
- 206 instances where each instance includes a CT scan in NIfTI format, a segmentation mask in NIfTI format, and its relevant metadata.
- **classification-with-mask**:
- 206 instances where each instance includes a CT scan in NIfTI format, a segmentation mask in NIfTI format, target labels, and its relevant metadata.
All CT scans and segmentation masks have already been resampled to a voxel spacing of (2.0, 2.0, 3.0), hence the reduced file size.
### Usage
```python
from datasets import load_dataset
# Classification dataset
rsna_cls_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", streaming=True) # "classification" is the default configuration
rsna_cls_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification", streaming=True) # download dataset on-demand and in-memory (no caching)
rsna_cls_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification", streaming=False) # download dataset and cache locally (~90.09 GiB)
rsna_cls_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification", streaming=True, test_size=0.05, random_state=42) # specify split size for train-test split
# Classification dataset with segmentation masks
rsna_clsmask_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification-with-mask", streaming=True) # download dataset on-demand and in-memory (no caching)
rsna_clsmask_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification-with-mask", streaming=False) # download dataset and cache locally (~3.91 GiB)
rsna_clsmask_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification-with-mask", streaming=False, test_size=0.05, random_state=42) # specify split size for train-test split
# Segmentation dataset
rsna_seg_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "segmentation", streaming=True) # download dataset on-demand and in-memory (no caching)
rsna_seg_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "segmentation", streaming=False) # download dataset and cache locally (~3.91 GiB)
rsna_seg_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "segmentation", streaming=True, test_size=0.1, random_state=42) # specify split size for train-test split
# Get the dataset splits
train_rsna_cls_ds = rsna_cls_ds["train"]; test_rsna_cls_ds = rsna_cls_ds["test"]
train_rsna_clsmask_ds = rsna_clsmask_ds["train"]; test_rsna_clsmask_ds = rsna_clsmask_ds["test"]
train_rsna_seg_ds = rsna_seg_ds["train"]; test_rsna_seg_ds = rsna_seg_ds["test"]
# Tip: Download speed up with multiprocessing
rsna_cls_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", streaming=False, num_proc=8) # num_proc: num of cpu core used for loading the dataset
```
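Each `img_path` / `seg_path` field is a URL to a gzipped NIfTI file. The sketch below shows one way to read a single volume; it assumes the `nibabel` library (>= 3.2 for `from_bytes`) and reuses `rsna_cls_ds` from the snippet above. It is an illustration, not part of the dataset's own loading script:
```python
# Minimal sketch: fetch one CT volume by URL and load it with nibabel.
import gzip
import urllib.request

import nibabel as nib

example = next(iter(rsna_cls_ds["train"]))     # one record, streaming or not
with urllib.request.urlopen(example["img_path"]) as resp:
    nii_bytes = gzip.decompress(resp.read())   # files are gzipped NIfTI (.nii.gz)
volume = nib.Nifti1Image.from_bytes(nii_bytes)
array = volume.get_fdata()                     # resampled to voxel spacing (2.0, 2.0, 3.0)
print(array.shape, example["bowel"], example["any_injury"])
```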
## Dataset Structure
### Data Instances
#### Configuration 1: classification
- **Size of downloaded dataset files:** 90.09 GiB
An example of an instance in the 'classification' configuration looks as follows:
```json
{
"img_path": "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/train_images/25899/21872.nii.gz",
"bowel": 0,
"extravasation": 0,
"kidney": 0,
"liver": 0,
"spleen": 0,
"any_injury": false,
"metadata": {
"series_id": 21872,
"patient_id": 25899,
"incomplete_organ": false,
"aortic_hu": 113.0,
"pixel_representation": 0,
"bits_allocated": 16,
"bits_stored": 12
}
}
```
#### Configuration 2: segmentation
- **Size of downloaded dataset files:** 3.91 GiB
An example of an instance in the 'segmentation' configuration looks as follows:
```json
{
"img_path": "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/train_images/4791/4622.nii.gz",
"seg_path": "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/segmentations/4622.nii.gz",
"metadata": {
"series_id": 4622,
"patient_id": 4791,
"incomplete_organ": false,
"aortic_hu": 223.0,
"pixel_representation": 1,
"bits_allocated": 16,
"bits_stored": 16
}
}
```
#### Configuration 3: classification-with-mask
- **Size of downloaded dataset files:** 3.91 GiB
An example of an instance in the 'classification-with-mask' configuration looks as follows:
```json
{
"img_path": "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/train_images/4791/4622.nii.gz",
"seg_path": "https://huggingface.co/datasets/jherng/rsna-2023-abdominal-trauma-detection/resolve/main/segmentations/4622.nii.gz",
"bowel": 0,
"extravasation": 0,
"kidney": 0,
"liver": 1,
"spleen": 1,
"any_injury": true,
"metadata": {
"series_id": 4622,
"patient_id": 4791,
"incomplete_organ": false,
"aortic_hu": 223.0,
"pixel_representation": 1,
"bits_allocated": 16,
"bits_stored": 16
}
}
```
### Data Fields
The data fields for all configurations are as follows:
- `img_path`: a `string` feature representing the path to the CT scan in NIfTI format.
- `seg_path`: a `string` feature representing the path to the segmentation mask in NIfTI format (only for 'segmentation' and 'classification-with-mask' configurations).
- `bowel`, `extravasation`, `kidney`, `liver`, `spleen`: Class label features indicating the condition of respective organs.
- `any_injury`: a `bool` feature indicating the presence of any injury.
- `metadata`: a dictionary feature containing metadata information with the following fields:
- `series_id`: an `int32` feature.
- `patient_id`: an `int32` feature.
- `incomplete_organ`: a `bool` feature.
- `aortic_hu`: a `float32` feature.
- `pixel_representation`: an `int32` feature.
- `bits_allocated`: an `int32` feature.
- `bits_stored`: an `int32` feature.
### Data Splits
Default split:
- 0.9:0.1 with random_state = 42
| Configuration Name | Train (n_samples) | Test (n_samples) |
| ------------------------ | ----------------: | ---------------: |
| classification | 4239 | 472 |
| segmentation | 185 | 21 |
| classification-with-mask | 185 | 21 |
Modify the split proportion:
```python
rsna_ds = load_dataset("jherng/rsna-2023-abdominal-trauma-detection", "classification", test_size=0.05, random_state=42)
```
## Additional Information
### Citation Information
- Preprocessed dataset:
```
@InProceedings{huggingface:dataset,
title = {RSNA 2023 Abdominal Trauma Detection Dataset (Preprocessed)},
author={Hong Jia Herng},
year={2023}
}
```
- Original dataset:
```
@misc{rsna-2023-abdominal-trauma-detection,
author = {Errol Colak, Hui-Ming Lin, Robyn Ball, Melissa Davis, Adam Flanders, Sabeena Jalal, Kirti Magudia, Brett Marinelli, Savvas Nicolaou, Luciano Prevedello, Jeff Rudie, George Shih, Maryam Vazirabad, John Mongan},
title = {RSNA 2023 Abdominal Trauma Detection},
publisher = {Kaggle},
year = {2023},
url = {https://kaggle.com/competitions/rsna-2023-abdominal-trauma-detection}
}
```
| 12,061 | [
[
-0.041595458984375,
-0.0091400146484375,
0.023345947265625,
0.00827789306640625,
-0.05767822265625,
-0.003131866455078125,
0.0127716064453125,
-0.036163330078125,
0.041259765625,
0.0242919921875,
-0.041534423828125,
-0.047271728515625,
-0.0400390625,
0.03120... |
zhangshuoming/c_x86_exebench_json | 2023-10-11T06:55:07.000Z | [
"region:us"
] | zhangshuoming | null | null | 0 | 39 | 2023-10-10T10:19:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1909559448
num_examples: 868385
download_size: 515282168
dataset_size: 1909559448
---
# Dataset Card for "c_x86_exebench_json"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 457 | [
[
-0.047637939453125,
-0.017181396484375,
0.0155487060546875,
0.018341064453125,
-0.0209197998046875,
0.022735595703125,
0.00653076171875,
-0.0229644775390625,
0.0577392578125,
0.0382080078125,
-0.0460205078125,
-0.061859130859375,
-0.04217529296875,
-0.021667... |
csolheim/risk_sig_train_multilabel_FIN_25k | 2023-10-11T08:17:08.000Z | [
"region:us"
] | csolheim | null | null | 0 | 39 | 2023-10-11T07:59:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
DDSC/angry-tweets-binary | 2023-10-11T08:25:19.000Z | [
"region:us"
] | DDSC | null | null | 0 | 39 | 2023-10-11T08:25:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 269093.3579427623
num_examples: 1559
- name: test
num_bytes: 120444.7564469914
num_examples: 684
download_size: 273118
dataset_size: 389538.1143897537
---
# Dataset Card for "angry-tweets-binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 608 | [
[
-0.0237579345703125,
-0.021881103515625,
0.01131439208984375,
0.040252685546875,
-0.031463623046875,
0.0248870849609375,
0.005542755126953125,
-0.0160064697265625,
0.068359375,
0.01373291015625,
-0.0594482421875,
-0.054718017578125,
-0.06195068359375,
-0.019... |
TheAIchemist13/kannada_asr_dataset | 2023-10-13T06:22:25.000Z | [
"region:us"
] | TheAIchemist13 | null | null | 0 | 39 | 2023-10-11T10:10:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcriptions
dtype: string
splits:
- name: train
num_bytes: 20136755.0
num_examples: 81
- name: test
num_bytes: 20136755.0
num_examples: 81
download_size: 38875566
dataset_size: 40273510.0
---
# Dataset Card for "kannada_asr_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 631 | [
[
-0.02734375,
-0.004756927490234375,
-0.0020313262939453125,
0.027130126953125,
-0.020782470703125,
0.005626678466796875,
0.020599365234375,
0.0025768280029296875,
0.058349609375,
0.02783203125,
-0.052734375,
-0.039398193359375,
-0.05084228515625,
-0.01421356... |
mirfan899/hindi-ner-small | 2023-10-18T09:29:19.000Z | [
"region:us"
] | mirfan899 | null | null | 0 | 39 | 2023-10-18T05:29:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': LOCATION
'1': BRAND
'2': TITLE_OBJECT
'3': PERSON
'4': DESIGNATION
'5': ORGANIZATION
'6': ABBREVIATION
'7': TIME
'8': NUMBER
'9': MEASURE
'10': TERMS
'11': O
splits:
- name: train
num_bytes: 8748829
num_examples: 7049
- name: validation
num_bytes: 3816340
num_examples: 3022
- name: test
num_bytes: 3816340
num_examples: 3022
download_size: 2199610
dataset_size: 16381509
---
# Dataset Card for "hindi-ner-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,046 | [
[
-0.033905029296875,
-0.0261993408203125,
-0.003063201904296875,
0.01183319091796875,
-0.02191162109375,
-0.0012807846069335938,
-0.01450347900390625,
-0.0073394775390625,
0.06976318359375,
0.01025390625,
-0.04949951171875,
-0.03857421875,
-0.055999755859375,
... |
Doub7e/SDv2-CLIP-aligned-6000 | 2023-10-27T09:16:10.000Z | [
"region:us"
] | Doub7e | null | null | 0 | 39 | 2023-10-19T00:07:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: type
dtype: string
- name: T5_last_hidden_states
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6017886905.25
num_examples: 6014
download_size: 2715834079
dataset_size: 6017886905.25
---
# Dataset Card for "SDv2-CLIP-aligned-6000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 621 | [
[
-0.031463623046875,
-0.001956939697265625,
0.00899505615234375,
0.0252685546875,
-0.0177459716796875,
0.00039839744567871094,
0.0421142578125,
0.0021209716796875,
0.06146240234375,
0.027008056640625,
-0.05694580078125,
-0.038299560546875,
-0.035186767578125,
... |
HoangHa/CulturaX001part | 2023-10-28T14:01:15.000Z | [
"region:us"
] | HoangHa | null | null | 0 | 39 | 2023-10-28T13:58:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
JestemKamil/polishconversations | 2023-10-30T17:37:24.000Z | [
"task_categories:conversational",
"language:pl",
"region:us"
] | JestemKamil | null | null | 0 | 39 | 2023-10-28T15:10:14 | ---
language:
- pl
task_categories:
- conversational
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 19763353
num_examples: 50000
download_size: 10677634
dataset_size: 19763353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset created from clips/mqa -- formatted for GPT-2-style models | 407 | [
[
-0.0071563720703125,
-0.04351806640625,
0.01904296875,
0.01004791259765625,
-0.05413818359375,
-0.00168609619140625,
0.0081787109375,
0.0192718505859375,
0.053924560546875,
0.041259765625,
-0.059722900390625,
-0.0487060546875,
-0.059600830078125,
-0.00749969... |
cideon00/villm | 2023-10-29T12:35:53.000Z | [
"region:us"
] | cideon00 | null | null | 0 | 39 | 2023-10-29T12:35:29 | ---
dataset_info:
features:
- name: text
dtype: string
- name: tok_len
dtype: int64
splits:
- name: train
num_bytes: 1411182336.1899912
num_examples: 512774
download_size: 328694427
dataset_size: 1411182336.1899912
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "villm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 494 | [
[
-0.03594970703125,
-0.0274200439453125,
0.018890380859375,
0.0116424560546875,
-0.0087127685546875,
0.00714111328125,
0.00691986083984375,
-0.00836181640625,
0.05194091796875,
0.043792724609375,
-0.05755615234375,
-0.06414794921875,
-0.03704833984375,
-0.022... |
anforsm/movie_posters-100k-torchvision | 2023-10-30T15:06:04.000Z | [
"region:us"
] | anforsm | null | null | 1 | 39 | 2023-10-30T06:44:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: image
sequence:
sequence:
sequence: float32
- name: title
dtype: string
- name: genres
list:
- name: id
dtype: int64
- name: name
dtype: string
- name: overview
dtype: string
- name: popularity
dtype: float64
- name: release_date
dtype: string
- name: budget
dtype: int64
- name: revenue
dtype: int64
- name: tagline
dtype: string
- name: original_language
dtype: string
- name: runtime
dtype: int64
splits:
- name: train
num_bytes: 25531277848.2
num_examples: 85770
- name: test
num_bytes: 2836808649.8
num_examples: 9530
download_size: 20999210873
dataset_size: 28368086498.0
---
# Dataset Card for "movie_posters-100k-torchvision"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,084 | [
[
-0.038726806640625,
0.0005993843078613281,
0.01183319091796875,
0.0183258056640625,
-0.0254364013671875,
0.0087890625,
0.019744873046875,
0.0093994140625,
0.057220458984375,
0.0517578125,
-0.051727294921875,
-0.053466796875,
-0.053741455078125,
-0.0174102783... |
grafanalabs/promql-test-data | 2023-10-30T15:57:46.000Z | [
"region:us"
] | grafanalabs | null | null | 0 | 39 | 2023-10-30T15:57:28 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
teven/stackexchange | 2021-12-03T18:36:21.000Z | [
"region:us"
] | teven | null | null | 0 | 38 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
valurank/offensive-multi | 2022-10-25T09:57:14.000Z | [
"task_categories:text-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:derived",
"language:en",
"license:other",
"region:us"
] | valurank | null | null | 0 | 38 | 2022-03-02T23:29:22 | ---
language:
- en
license: other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- derived
task_categories:
- text-classification
---
# Dataset Card for offensive-multi
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
## Dataset Description
### Dataset Summary
This dataset contains a collection of text labeled as offensive (class 1) or not (class 0).
## Dataset Creation
The dataset was created by aggregating multiple publicly available datasets.
### Source Data
The following datasets were used:
* https://huggingface.co/datasets/hate_speech_offensive - Tweet text cleaned by lower-casing and removing mentions and URLs. Dropped instances labeled as 'hate speech'.
* https://sites.google.com/site/offensevalsharedtask/olid - Tweet text cleaned by lower-casing and removing mentions and URLs. Used the 'subtask_a' column for labeling. A sketch of this cleaning appears below.
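As an illustration only (not the curators' actual script), the cleaning described above could look like this:
```python
import re

def clean_tweet(text: str) -> str:
    text = text.lower()
    text = re.sub(r"@\w+", "", text)           # remove mentions
    text = re.sub(r"https?://\S+", "", text)   # remove URLs
    return re.sub(r"\s+", " ", text).strip()   # collapse leftover whitespace

print(clean_tweet("@User Check https://example.com NOW!"))  # -> "check now!"
```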
| 1,033 | [
[
-0.033843994140625,
-0.040924072265625,
-0.01617431640625,
0.01297760009765625,
-0.028961181640625,
0.0217437744140625,
-0.007465362548828125,
-0.016754150390625,
0.03778076171875,
0.0224761962890625,
-0.058349609375,
-0.0677490234375,
-0.0672607421875,
0.00... |
codyburker/yelp_review_sampled | 2022-03-05T17:29:30.000Z | [
"region:us"
] | codyburker | null | null | 0 | 38 | 2022-03-05T17:12:15 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
facebook/pmd | 2022-08-09T23:51:39.000Z | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2112.04482",
"arxiv:2111.11431",
"region:us... | facebook | Introduced in FLAVA paper, Public Multimodal Dataset (PMD) is a collection of publicly-available image-text pairs datasets. PMD in total contains 70M image-text pairs with 68M unique images. The dataset contains pairs from Conceptual Captions, Conceptual Captions 12M, WIT, Localized Narratives, RedCaps, COCO, SBU Captions, Visual Genome and a subset of YFCC100M dataset. | @inproceedings{singh2022flava,
title={Flava: A foundational language and vision alignment model},
author={Singh, Amanpreet and Hu, Ronghang and Goswami, Vedanuj and Couairon, Guillaume and Galuba, Wojciech and Rohrbach, Marcus and Kiela, Douwe},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={15638--15650},
year={2022}
} | 28 | 38 | 2022-06-20T00:52:47 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: pmd
pretty_name: PMD
extra_gated_prompt: |
By clicking on “Access repository” below, you also agree to individual licensing terms for each of the subset datasets of the PMD as noted at https://huggingface.co/datasets/facebook/pmd#additional-information.
---
# Dataset Card for PMD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Compared to original FLAVA paper](#compared-to-original-flava-paper)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PMD homepage](https://flava-model.github.io/)
- **Repository:** [PMD repository](https://huggingface.co/datasets/facebook/pmd)
- **Paper:** [FLAVA: A Foundational Language And Vision Alignment Model
](https://arxiv.org/abs/2112.04482)
- **Leaderboard:**
- **Point of Contact:** [Amanpreet Singh](mailto:amanpreet@nyu.edu)
### Dataset Summary
Introduced in the FLAVA paper, Public Multimodal Dataset (PMD) is a collection of publicly-available image-text pair datasets. PMD contains 70M image-text pairs in total with 68M unique images. The dataset contains pairs from Conceptual Captions, Conceptual Captions 12M, WIT, Localized Narratives, RedCaps, COCO, SBU Captions, Visual Genome and a subset of YFCC100M dataset.
If you use PMD, please cite the original FLAVA paper as follows, along with the individual datasets (see below for references):
```bibtex
@inproceedings{singh2022flava,
title={Flava: A foundational language and vision alignment model},
author={Singh, Amanpreet and Hu, Ronghang and Goswami, Vedanuj and Couairon, Guillaume and Galuba, Wojciech and Rohrbach, Marcus and Kiela, Douwe},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={15638--15650},
year={2022}
}
```
You can load this dataset by first logging into Hugging Face using `huggingface-cli login` and then running the following commands:
```py
from datasets import load_dataset
pmd = load_dataset("facebook/pmd", use_auth_token=True)
```
You can also load the dataset in streaming mode if you don't want to download the big dataset files (> 50GB locally without the images):
```py
pmd = load_dataset("facebook/pmd", use_auth_token=True, streaming=True)
```
### Dataset Preprocessing
This dataset doesn't download all of the images locally by default. Instead, it exposes URLs for some of the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_data, timeout=None, retries=0):
image_url, image = image_data
if image is not None:
return image
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(fetch_single_image_with_args, zip(batch["image_url"], batch["image"])))
return batch
num_threads = 20
dset = load_dataset("pmd", use_auth_token=True)
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
#### Save to disk
You can also save the dataset to disk for faster and direct loading next time but beware of the space required:
```py
dset.save_to_disk(</path/to/save>)
```
#### Load Subsets
You can also download a specific set from the PMD dataset by using
```py
dset = load_dataset("pmd", <choice>, use_auth_token=True)
```
The choices are:
```
"all","coco","sbu", "wit", "localized_narratives","conceptual_captions","visual_genome","conceptual_captions_12M","redcaps","yfcc100M_subset", "localized_narratives_openimages","localized_narratives_ade20k", "localized_narratives_coco"
```
#### Flickr30K Localized Narratives Subset
The Flickr30K subset of Localized Narratives is not included by default as it requires a manual download. You can include it by downloading the tar file from [here](http://shannon.cs.illinois.edu/DenotationGraph/data/index.html) (after signing an agreement) to `</path/to/Downloads>` and then loading either the whole PMD or the Localized Narratives subset with:
```py
dset = load_dataset("pmd", data_dir=</path/to/Downloads/flickr30k-images.tar.gz>, use_auth_token=True, use_flickr30k_ln=True)
# Load LN subset only
dset = load_dataset("pmd", "localized_narratives", data_dir=</path/to/Downloads/flickr30k-images.tar.gz>, use_auth_token=True, use_flickr30k_ln=True)
```
#### Facing issues?
If you are facing issues, you can try loading a specific revision of the repo by using:
```py
dset = load_dataset("pmd", use_auth_token=True, revision="311cd48")
```
### Supported Tasks and Leaderboards
In the FLAVA paper, the dataset has been used to pretrain the FLAVA model as a source of well-aligned image-text pairs. This allows having a generic vision-and-language model which can be fine-tuned for a variety of tasks.
We anticipate that the dataset can be used to train deep neural networks that perform image captioning and that learn transferable visual representations for a variety of downstream visual recognition tasks (image classification, object detection, instance segmentation). We also anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks, such as image or text retrieval or text-to-image synthesis.
### Languages
All of the subsets in PMD use English as their primary language.
## Dataset Structure
### Data Instances
Each instance in PMD represents a single image-text pair:
```
{
'image_url': None,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCFF86A1E80>,
'text': 'A woman wearing a net on her head cutting a cake. ',
'source': 'coco',
'meta': '{\n "annotation": [\n "A woman wearing a net on her head cutting a cake. "\n ],\n "image_path": "zip:/val2014/COCO_val2014_000000522418.jpg::http:/images.cocodataset.org/zips/val2014.zip"\n}'
}
```
### Data Fields
- `image_url`: Static URL for downloading the image associated with the text. Can be `None` if image is locally available.
- `image`: A PIL Image object for the image associated with the text. Can be `None` if image is not locally available.
- `text`: `str`, A textual description corresponding to the image.
- `source`: `str`, The PMD subset which this pair is from.
- `meta`: `str`, A json representation of the original annotation from the dataset.
### Data Splits
All the data is contained in the training set. The training set has nearly 70M instances.
We intend for this dataset to be primarily used for pre-training with one or more specific downstream task(s) in mind. Thus, all of the instances should be used for pretraining. If required, we specifically make sure that there is no overlap with Karpathy's COCO validation set so users can use that subset for any validation purposes. Users can also load Karpathy's val subset by specifying the "validation" split while loading PMD. This will also load other "validation" splits for some subsets, if they are available.
## Dataset Creation
### Curation Rationale
From the paper:
> Purely contrastive methods, however, also have important shortcomings. Their cross-modal nature does not make them easily usable on multimodal problems that require dealing with both modalities at the same time. They require large corpora, which for both CLIP and ALIGN have not been made accessible to the research community and the details of which remain shrouded in mystery, notwithstanding well-known issues with the construction of such datasets
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> **Data Collection Pipeline**
- For the YFCC100M dataset, we filter the image-text data by discarding non-English captions and only keeping captions that contain more than two words from the description field of each image; if this does not pass our filters, we consider the title field. Other than that, we did not do any additional filtering (a sketch of this rule follows the list).
- For the VisualGenome, COCO and Localized Narratives subsets, we remove any overlaps with Karpathy's COCO val and test sets.
- For Localized Narratives, we split the original caption, which is a paragraph, into multiple captions using the spaCy library and take the Cartesian product, so that each image-caption combination becomes a separate image-text pair.
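As an illustration of the YFCC100M rule in the first bullet, here is a minimal sketch; `langdetect` is only a stand-in, since the language-ID tool actually used is not specified, and the field names are hypothetical:
```python
# Sketch of the YFCC100M caption selection rule described above.
from langdetect import detect

def select_caption(description, title):
    """Prefer the description field; fall back to the title; else drop the pair."""
    for candidate in (description, title):
        if candidate and len(candidate.split()) > 2:
            try:
                if detect(candidate) == "en":
                    return candidate
            except Exception:  # langdetect can fail on short or odd strings
                continue
    return None
```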
#### Compared to original FLAVA paper
The PMD dataset in this repo doesn't correspond 1:1 exactly to the original PMD dataset used in the [FLAVA](https://arxiv.org/abs/2112.04482) paper though this repo is built by the same authors. This is due to difficulty in reproducing WiT and YFCC100M subsets exactly. This repo in general contains more data than the PMD in the FLAVA paper and hence should probably result in better performance.
#### Who are the source language producers?
Please refer to the original dataset papers to understand where the content is coming from.
### Annotations
#### Annotation process
The dataset is a combination of existing public datasets with some filtering applied on top so there is no annotation process involved.
#### Who are the annotators?
Please refer to the original dataset papers to understand where the content is coming from.
### Personal and Sensitive Information
Please refer to the original dataset papers to understand where the content is coming from. For example, a detailed description on this for RedCaps can be found [here](https://huggingface.co/datasets/red_caps).
## Considerations for Using the Data
### Social Impact of Dataset
From the paper:
> **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g.,
a data protection impact analysis) been conducted?**
No.
### Discussion of Biases
Please refer to the original dataset papers to understand where the content is coming from. For example, a detailed description on this for RedCaps can be found [here](https://huggingface.co/datasets/red_caps).
### Other Known Limitations
From the paper:
> **Are there any errors, sources of noise, or redundancies in the dataset?**
PMD is noisy by design since image-text pairs on the internet are noisy and unstructured. Though, since it contains sources such as COCO, Visual Genome, and Localized Narratives which are hand-curated by annotators, it has a lot of well-aligned data as well. So, it is definitely more aligned compared to e.g. LAION.
Some instances may also have duplicate images and captions but should have almost no effect in training large-scale models.
> **Does the dataset contain data that might be considered confidential (e.g., data that is
protected by legal privilege or by doctor-patient confidentiality, data that includes the
content of individuals non-public communications)?**
Not that the authors know of. Please refer to the original dataset papers to understand where the content is coming from. For example, a detailed description on this for RedCaps can be found [here](https://huggingface.co/datasets/red_caps).
## Additional Information
### Dataset Curators
The authors of the original dataset papers, as well as the authors of the FLAVA paper (Amanpreet, Ronghang, Vedanuj, Guillaume, Wojciech, Marcus and Douwe).
### Licensing Information
Here are the individual licenses from each of the datasets that apply if you use this dataset:
#### COCO
The annotations in the COCO dataset belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License.
The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.
#### Conceptual Captions
The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
#### WIT
This data is available under the [Creative Commons Attribution-ShareAlike 3.0 Unported](LICENSE) license.
#### Visual Genome
Visual Genome by Ranjay Krishna et al is licensed under a Creative Commons Attribution 4.0 International License.
#### Localized Narratives
All the annotations available through this website are released under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license. You are free to redistribute and modify the annotations, but we ask you to please keep the original attribution to our paper.
#### YFCC100M
Use of the original media files is subject to the Creative Commons licenses chosen by their creators/uploaders. License information for each media file can be found within [the YFCC100M metadata](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/#yfcc100m). Use of the dataset is subject to the relevant Webscope License Agreement, which you need to agree to if you use this dataset.
#### RedCaps
The image metadata is licensed under the CC-BY 4.0 license. Additionally, uses of this dataset are subject to the Reddit API terms (https://www.reddit.com/wiki/api-terms) and users must comply with the Reddit User Agreement, Content Policy, and Privacy Policy – all accessible at https://www.redditinc.com/policies.
Similar to RedCaps:
> PMD should only be used for non-commercial research. PMD should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial and for-profit uses of PMD are restricted – it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies.
### Citation Information
Please cite the main FLAVA paper in which PMD was introduced along with each of the subsets used in PMD as follows:
```bibtex
@inproceedings{singh2022flava,
title={Flava: A foundational language and vision alignment model},
author={Singh, Amanpreet and Hu, Ronghang and Goswami, Vedanuj and Couairon, Guillaume and Galuba, Wojciech and Rohrbach, Marcus and Kiela, Douwe},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={15638--15650},
year={2022}
}
@article{chen2015microsoft,
title={Microsoft coco captions: Data collection and evaluation server},
author={Chen, Xinlei and Fang, Hao and Lin, Tsung-Yi and Vedantam, Ramakrishna and Gupta, Saurabh and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
journal={arXiv preprint arXiv:1504.00325},
year={2015}
}
@inproceedings{ordonez2011sbucaptions,
Author = {Vicente Ordonez and Girish Kulkarni and Tamara L. Berg},
Title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
Booktitle = {Neural Information Processing Systems ({NIPS})},
Year = {2011},
}
@article{krishna2017visual,
title={Visual genome: Connecting language and vision using crowdsourced dense image annotations},
author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and others},
journal={International journal of computer vision},
volume={123},
number={1},
pages={32--73},
year={2017},
publisher={Springer}
}
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
@inproceedings{sharma2018conceptual,
title={Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning},
author={Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={2556--2565},
year={2018}
}
@inproceedings{changpinyo2021conceptual,
title={Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts},
author={Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={3558--3568},
year={2021}
}
@inproceedings{ponttuset2020localized,
author = {Jordi Pont-Tuset and Jasper Uijlings and Soravit Changpinyo and Radu Soricut and Vittorio Ferrari},
title = {Connecting Vision and Language with Localized Narratives},
booktitle = {ECCV},
year = {2020}
}
@article{thomee2016yfcc100m,
title={YFCC100M: The new data in multimedia research},
author={Thomee, Bart and Shamma, David A and Friedland, Gerald and Elizalde, Benjamin and Ni, Karl and Poland, Douglas and Borth, Damian and Li, Li-Jia},
journal={Communications of the ACM},
volume={59},
number={2},
pages={64--73},
year={2016},
publisher={ACM New York, NY, USA}
}
@misc{desai2021redcaps,
title={RedCaps: web-curated image-text data created by the people, for the people},
author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson},
year={2021},
eprint={2111.11431},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@aps](https://github.com/apsdehal), [Thomas Wang](https://huggingface.co/TimeRobber), and [@VictorSanh](https://huggingface.co/VictorSanh) for adding this dataset. | 19,773 | [
[
-0.042083740234375,
-0.04949951171875,
0.002704620361328125,
0.0164642333984375,
-0.02935791015625,
-0.005252838134765625,
-0.0087127685546875,
-0.02728271484375,
0.0213470458984375,
0.042510986328125,
-0.052459716796875,
-0.0406494140625,
-0.039215087890625,
... |
neurae/coqa-preprocessed | 2022-09-20T17:57:06.000Z | [
"region:us"
] | neurae | null | null | 0 | 38 | 2022-09-20T12:41:54 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
frankier/processed_multiscale_rt_critics | 2023-10-03T17:16:04.000Z | [
"region:us"
] | frankier | null | null | 0 | 38 | 2022-11-02T12:15:25 | ---
dataset_info:
features:
- name: movie_title
dtype: string
- name: publisher_name
dtype: string
- name: critic_name
dtype: string
- name: review_content
dtype: string
- name: review_score
dtype: string
- name: grade_type
dtype: string
- name: orig_num
dtype: float32
- name: orig_denom
dtype: float32
- name: includes_zero
dtype: bool
- name: label
dtype: uint8
- name: scale_points
dtype: uint8
- name: multiplier
dtype: uint8
- name: group_id
dtype: uint32
splits:
- name: train
num_bytes: 117244343
num_examples: 540256
- name: test
num_bytes: 28517095
num_examples: 131563
download_size: 0
dataset_size: 145761438
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "processed_multiscale_rt_critics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,040 | [
[
-0.068359375,
-0.0300140380859375,
0.027740478515625,
0.033111572265625,
-0.009368896484375,
0.003143310546875,
-0.009033203125,
-0.01009368896484375,
0.048675537109375,
0.03704833984375,
-0.06494140625,
-0.04058837890625,
-0.044769287109375,
-0.012954711914... |
bigbio/bioscope | 2022-12-22T15:44:13.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-2.0",
"region:us"
] | bigbio | The BioScope corpus consists of medical and biological texts annotated for
negation, speculation and their linguistic scope. This was done to allow a
comparison between the development of systems for negation/hedge detection and
scope resolution. The BioScope corpus was annotated by two independent linguists
following the guidelines written by our linguist expert before the annotation of
the corpus was initiated. | @article{vincze2008bioscope,
title={The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes},
author={Vincze, Veronika and Szarvas, Gy{\"o}rgy and Farkas, Rich{\'a}rd and M{\'o}ra, Gy{\"o}rgy and Csirik, J{\'a}nos},
journal={BMC bioinformatics},
volume={9},
number={11},
pages={1--9},
year={2008},
publisher={BioMed Central}
} | 0 | 38 | 2022-11-13T22:07:28 |
---
language:
- en
bigbio_language:
- English
license: cc-by-2.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_2p0
pretty_name: BioScope
homepage: https://rgai.inf.u-szeged.hu/node/105
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for BioScope
## Dataset Description
- **Homepage:** https://rgai.inf.u-szeged.hu/node/105
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The BioScope corpus consists of medical and biological texts annotated for
negation, speculation and their linguistic scope. This was done to allow a
comparison between the development of systems for negation/hedge detection and
scope resolution. The BioScope corpus was annotated by two independent linguists
following the guidelines written by our linguist expert before the annotation of
the corpus was initiated.
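The corpus can be pulled from the Hub with the `datasets` library; a minimal sketch follows. Note that bigbio datasets usually ship several configurations (a source schema and a harmonized `bigbio_kb` schema), so the snippet lists the config names rather than assuming one — verify against the loader script if in doubt.
```python
# Minimal loading sketch. The exact config names are not stated on this
# card, so list them first rather than guessing.
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("bigbio/bioscope")
print(configs)  # e.g. a source schema vs. a harmonized bigbio_kb schema

ds = load_dataset("bigbio/bioscope", name=configs[0])
print(ds)
```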
## Citation Information
```
@article{vincze2008bioscope,
title={The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes},
author={Vincze, Veronika and Szarvas, Gy{\"o}rgy and Farkas, Rich{\'a}rd and M{\'o}ra, Gy{\"o}rgy and Csirik, J{\'a}nos},
journal={BMC bioinformatics},
volume={9},
number={11},
pages={1--9},
year={2008},
publisher={BioMed Central}
}
```
| 1,282 | [
[
-0.036468505859375,
-0.046630859375,
0.03289794921875,
-0.0095672607421875,
-0.0227203369140625,
-0.0012683868408203125,
0.00237274169921875,
-0.039703369140625,
0.04376220703125,
0.038665771484375,
-0.039581298828125,
-0.07293701171875,
-0.05413818359375,
0... |
bigbio/paramed | 2022-12-22T15:46:11.000Z | [
"multilinguality:multilingual",
"language:en",
"language:zh",
"license:cc-by-4.0",
"region:us"
] | bigbio | NEJM is a Chinese-English parallel corpus crawled from the New England Journal of Medicine website.
English articles are distributed through https://www.nejm.org/ and Chinese articles are distributed through
http://nejmqianyan.cn/. The corpus contains all article pairs (around 2000 pairs) since 2011. | @article{liu2021paramed,
author = {Liu, Boxiang and Huang, Liang},
title = {ParaMed: a parallel corpus for English–Chinese translation in the biomedical domain},
journal = {BMC Medical Informatics and Decision Making},
volume = {21},
year = {2021},
url = {https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01621-8},
doi = {10.1186/s12911-021-01621-8}
} | 1 | 38 | 2022-11-13T22:11:13 |
---
language:
- en
- zh
bigbio_language:
- English
- Chinese
license: cc-by-4.0
multilinguality: multilingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: ParaMed
homepage: https://github.com/boxiangliu/ParaMed
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TRANSLATION
---
# Dataset Card for ParaMed
## Dataset Description
- **Homepage:** https://github.com/boxiangliu/ParaMed
- **Pubmed:** False
- **Public:** True
- **Tasks:** TRANSL
NEJM is a Chinese-English parallel corpus crawled from the New England Journal of Medicine website.
English articles are distributed through https://www.nejm.org/ and Chinese articles are distributed through
http://nejmqianyan.cn/. The corpus contains all article pairs (around 2000 pairs) since 2011.
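A short loading sketch for the parallel text; the config naming and split layout are not documented here, so the snippet inspects a record instead of assuming field names.
```python
# Minimal sketch: list the configs, load one, and print a record.
# The "train" split name is an assumption -- print(ds) to verify.
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("bigbio/paramed")
ds = load_dataset("bigbio/paramed", name=configs[0])
print(ds["train"][0])  # one English/Chinese sentence pair
```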
## Citation Information
```
@article{liu2021paramed,
author = {Liu, Boxiang and Huang, Liang},
title = {ParaMed: a parallel corpus for English–Chinese translation in the biomedical domain},
journal = {BMC Medical Informatics and Decision Making},
volume = {21},
year = {2021},
url = {https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01621-8},
doi = {10.1186/s12911-021-01621-8}
}
```
| 1,223 | [
[
-0.005451202392578125,
-0.0245208740234375,
0.043731689453125,
0.0270538330078125,
-0.0303192138671875,
-0.007358551025390625,
-0.010955810546875,
-0.0173187255859375,
0.031280517578125,
0.03668212890625,
-0.0160064697265625,
-0.07269287109375,
-0.04071044921875... |
Isma/librispeech_1000_seed_42 | 2022-11-28T14:52:52.000Z | [
"region:us"
] | Isma | null | null | 0 | 38 | 2022-11-28T14:51:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
amphora/korfin-asc | 2023-01-16T15:26:46.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:klue",
"language:ko",
"license:cc-... | amphora | null | null | 1 | 38 | 2023-01-16T14:53:48 | ---
annotations_creators:
- expert-generated
language:
- ko
language_creators:
- expert-generated
license: cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: KorFin-ABSA
size_categories:
- 1K<n<10K
source_datasets:
- klue
tags:
- sentiment analysis
- aspect based sentiment analysis
- finance
task_categories:
- text-classification
task_ids:
- topic-classification
- sentiment-classification
---
# Dataset Card for KorFin-ABSA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
The KorFin-ASC is an extension of KorFin-ABSA including 8818 samples with (aspect, polarity) pairs annotated.
The samples were collected from [KLUE-TC](https://klue-benchmark.com/tasks/66/overview/description) and
analyst reports from [Naver Finance](https://finance.naver.com).
Annotation of the dataset is described in the paper [Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance](https://arxiv.org/abs/2301.03136).
### Supported Tasks and Leaderboards
This dataset supports the following tasks:
* Aspect-Based Sentiment Classification
### Languages
Korean
## Dataset Structure
### Data Instances
Each instance consists of a single sentence, aspect, and corresponding polarity (POSITIVE/NEGATIVE/NEUTRAL).
```
{
"title": "LGU+ 1분기 영업익 1천706억원…마케팅 비용 감소",
"aspect": "LG U+",
"sentiment": "NEUTRAL",
"url": "https://news.naver.com/main/read.nhn?mode=LS2D&mid=shm&sid1=105&sid2=227&oid=001&aid=0008363739",
"annotator_id": "A_01",
"Type": "single"
}
```
### Data Fields
* title: the sentence (news headline) being annotated
* aspect: the target aspect (e.g., a company name)
* sentiment: the polarity label (POSITIVE/NEGATIVE/NEUTRAL)
* url: link to the source article
* annotator_id: identifier of the annotator
* Type: the annotation type (e.g., "single")
### Data Splits
The dataset currently does not contain standard data splits.
## Additional Information
You can download the data via:
```
from datasets import load_dataset
dataset = load_dataset("amphora/KorFin-ASC")
```
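Once loaded, a quick distribution check over the polarity field can serve as a sanity test; the `train` split name below is an assumption, since the card defines no standard splits.
```python
# Sketch: count polarity labels. The "train" split name is an assumption
# (the card defines no standard splits); print(dataset) to check.
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("amphora/KorFin-ASC", split="train")
print(Counter(dataset["sentiment"]))  # POSITIVE / NEGATIVE / NEUTRAL counts
```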
Please find more information about the code and how the data was collected in the paper [Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance](https://arxiv.org/abs/2301.03136).
The best-performing model on this dataset can be found at [link](https://huggingface.co/amphora/KorFinASC-XLM-RoBERTa).
### Licensing Information
KorFin-ASC is licensed under the terms of the [cc-by-sa-4.0](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
Please cite this data using:
```
@article{son2023removing,
title={Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance},
author={Son, Guijin and Lee, Hanwool and Kang, Nahyeon and Hahm, Moonjeong},
journal={arXiv preprint arXiv:2301.03136},
year={2023}
}
```
### Contributions
Thanks to [@Albertmade](https://github.com/h-albert-lee), [@amphora](https://github.com/guijinSON) for making this dataset. | 3,481 | [
[
-0.04388427734375,
-0.04217529296875,
0.01093292236328125,
0.025146484375,
-0.03790283203125,
0.0109405517578125,
-0.028228759765625,
-0.026458740234375,
0.0267486572265625,
0.04510498046875,
-0.032257080078125,
-0.08258056640625,
-0.039276123046875,
-0.0011... |
competitions/aiornot | 2023-03-30T12:32:32.000Z | [
"task_categories:image-classification",
"image-classification",
"autotrain",
"competitions",
"region:us"
] | competitions | null | null | 24 | 38 | 2023-01-25T15:22:37 | ---
task_categories:
- image-classification
tags:
- image-classification
- autotrain
- competitions
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: label
dtype: int64
---
# Dataset Card for aiornot
Dataset for the [aiornot competition](https://hf.co/spaces/competitions/aiornot).
By accessing this dataset, you accept the rules of the AI or Not competition.
Please note that the dataset may contain images which are not considered safe for work.
## Usage
### With Hugging Face Datasets 🤗
You can download and use this dataset using the `datasets` library.
📝 **Note:** You must be logged in to your Hugging Face account for the snippet below to work. You can do this with `huggingface-cli login` or `huggingface_hub.notebook_login` if you have the `huggingface_hub` python library installed (`pip install huggingface_hub`).
```python
from datasets import load_dataset
ds = load_dataset('competitions/aiornot')
```
### From Original Files
The original files and sample submission can be found in the `.extras` folder (under the files and versions tab of this repo). Feel free to download them and use them directly if you don't wish to use the `datasets` library.
| 1,238 | [
[
-0.028228759765625,
-0.0306396484375,
-0.00646209716796875,
0.044281005859375,
-0.03533935546875,
-0.00958251953125,
-0.00363922119140625,
-0.0251922607421875,
0.047332763671875,
0.0171966552734375,
-0.05810546875,
-0.048309326171875,
-0.047393798828125,
0.0... |
swype/instruct | 2023-04-05T23:14:28.000Z | [
"license:mit",
"region:us"
] | swype | A dataset containing prompt and completion pairs for various tasks. | @misc{srikanth2023swypedataset,
author = {Srikanth Srinivas},
title = {Swype.com Dataset},
year = {2023},
publisher = {Swype.com},
howpublished = {\\url{https://swype.com}},
email = {s@swype.com}
} | 49 | 38 | 2023-03-29T02:48:16 | ---
license: mit
---
# A large instruct dataset
This dataset is a combination of multiple sources, including the GPT4All dataset, the Alpaca dataset from Stanford, custom generation using AllenAI augmentation, and some dataset augmentation from open-source Meta datasets. The dataset is split into 70% for training, 20% for validation, and 10% for testing.
## Description
The Swype.com dataset contains prompt and completion pairs for various tasks. It's an augmented version of the following datasets:
- [GPT4All](https://github.com/nomic-ai/gpt4all): A dataset containing a wide range of tasks for training and evaluating general-purpose language models.
- [Alpaca dataset from Stanford](https://github.com/tatsu-lab/stanford_alpaca): A dataset containing prompts, completions, and annotations for controllable text generation.
- Custom generation using [AllenAI augmentation](https://allenai.org): Augmentation performed using the advanced NLP tools provided by AllenAI.
- Some dataset augmentation from open-source Meta datasets: Additional augmentation from various open-source Meta datasets.
The dataset is designed for training and evaluating language models on diverse tasks, with a focus on controllable and instruction-based text generation.
## Dataset Structure
The dataset contains the following columns (a short loading sketch follows the list):
- `prompt`: The input prompt string, representing a task or question.
- `completion`: The output completion string, representing the answer or generated text based on the prompt.
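A minimal usage sketch, assuming only what the card states (two string columns and a `train` split); the Alpaca-style template below is illustrative, not something the dataset prescribes.
```python
# Loading sketch. The "train" split name and the instruction template are
# assumptions for illustration; the two column names come from the card.
from datasets import load_dataset

ds = load_dataset("swype/instruct", split="train")
ex = ds[0]
text = f"### Instruction:\n{ex['prompt']}\n\n### Response:\n{ex['completion']}"
print(text)
```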
## Citation
If you use this dataset in your research or work, please cite it as follows:
@misc{srikanth2023swypedataset,
author = {Srikanth Srinivas},
title = {Swype.com Dataset},
year = {2023},
publisher = {Swype.com},
howpublished = {\url{https://swype.com}},
email = {s@swype.com}
} | 1,792 | [
[
-0.0242156982421875,
-0.059478759765625,
0.0191802978515625,
0.01934814453125,
0.0096282958984375,
-0.005916595458984375,
-0.021392822265625,
-0.025482177734375,
0.01018524169921875,
0.04742431640625,
-0.050323486328125,
-0.035552978515625,
-0.0272369384765625,
... |
yuyang/bart_cnndm | 2023-05-08T22:12:43.000Z | [
"region:us"
] | yuyang | CNN/DailyMail non-anonymized summarization dataset.
There are two features:
- article: text of news article, used as the document to be summarized
- highlights: joined text of highlights with <s> and </s> around each
highlight, which is the target summary | @article{DBLP:journals/corr/SeeLM17,
author = {Abigail See and
Peter J. Liu and
Christopher D. Manning},
title = {Get To The Point: Summarization with Pointer-Generator Networks},
journal = {CoRR},
volume = {abs/1704.04368},
year = {2017},
url = {http://arxiv.org/abs/1704.04368},
archivePrefix = {arXiv},
eprint = {1704.04368},
timestamp = {Mon, 13 Aug 2018 16:46:08 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/SeeLM17},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{hermann2015teaching,
title={Teaching machines to read and comprehend},
author={Hermann, Karl Moritz and Kocisky, Tomas and Grefenstette, Edward and Espeholt, Lasse and Kay, Will and Suleyman, Mustafa and Blunsom, Phil},
booktitle={Advances in neural information processing systems},
pages={1693--1701},
year={2015}
} | 0 | 38 | 2023-05-08T22:12:05 | A modification of the cnn_dailymail dataset on Hugging Face. The main goal is to reproduce the reported BART results.
Reference: https://github.com/facebookresearch/fairseq/issues/1401
Major changes:
1. Remove the space in " ." in `fix_missing_period`, so sentences end in "." rather than " .".
2. Remove the "(CNN)" prefix from articles. | 277 | [
[
-0.0377197265625,
-0.067626953125,
0.0200042724609375,
0.038848876953125,
-0.0225830078125,
-0.0079803466796875,
0.003162384033203125,
-0.053009033203125,
0.055511474609375,
0.045196533203125,
-0.06585693359375,
-0.039825439453125,
-0.049163818359375,
0.0152... |
cj-mills/hagrid-classification-512p-no-gesture-150k-zip | 2023-05-22T23:00:45.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | cj-mills | null | null | 0 | 38 | 2023-05-18T16:34:52 | ---
license: cc-by-sa-4.0
language:
- en
size_categories:
- 100K<n<1M
---
This dataset contains 153,735 training images from [HaGRID](https://github.com/hukenovs/hagrid) (HAnd Gesture Recognition Image Dataset) modified for image classification instead of object detection. The original dataset is 716GB. I created this sample for a tutorial so readers can use the dataset in the free tiers of Google Colab and Kaggle Notebooks.
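Since the images ship as a zip archive inside the repo, `huggingface_hub` is a convenient way to fetch everything; the archive layout inside the repo is an assumption here, so inspect the downloaded folder before unzipping.
```python
# Download sketch with huggingface_hub. The archive name and internal
# folder structure are assumptions -- inspect local_dir before unzipping.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="cj-mills/hagrid-classification-512p-no-gesture-150k-zip",
    repo_type="dataset",
)
print(local_dir)
```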
### Original Authors:
* [Alexander Kapitanov](https://www.linkedin.com/in/hukenovs)
* [Andrey Makhlyarchuk](https://www.linkedin.com/in/makhliarchuk)
* [Karina Kvanchiani](https://www.linkedin.com/in/kvanchiani)
### Original Dataset Links
* [GitHub](https://github.com/hukenovs/hagrid)
* [Kaggle Datasets Page](https://www.kaggle.com/datasets/kapitanov/hagrid) | 794 | [
[
-0.01140594482421875,
-0.0035839080810546875,
0.0089111328125,
-0.01464080810546875,
-0.03045654296875,
-0.006961822509765625,
0.01305389404296875,
-0.0124664306640625,
0.0239105224609375,
0.041748046875,
-0.0190582275390625,
-0.05169677734375,
-0.05197143554687... |
yankihue/tweets-turkish | 2023-05-23T09:42:27.000Z | [
"region:us"
] | yankihue | null | null | 0 | 38 | 2023-05-23T08:36:31 | Entry not found | 15 | [
[
-0.0213470458984375,
-0.01496124267578125,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.005046844482421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.01494598388671875,
-0.0604248046875,
0.03790... |
xmcmic/PMC-VQA | 2023-07-12T01:10:58.000Z | [
"region:us"
] | xmcmic | null | null | 18 | 38 | 2023-05-24T12:00:03 | # PMC-VQA Dataset
- [PMC-VQA Dataset](#pmc-vqa-dataset)
  - [Dataset Structure](#dataset-structure)
- [Sample](#sample)
## Dataset Structure
**PMC-VQA** (version-1: 227k VQA pairs of 149k images).
- `train.csv`: metafile of train set
- `test.csv`: metafile of test set
- `test_clean.csv`: metafile of test clean set
- `images.zip`: images folder
(**Update** version-2: non-compound images.)
- `train2.csv`: metafile of train set
- `test2.csv`: metafile of test set
- `images2.zip`: images folder
## Sample
A row in `train.csv` is shown below,
| Figure_path | PMC1064097_F1.jpg |
| ------------ | --------------------------------------------- |
| Question | What is the uptake pattern in the breast? |
| Answer | Focal uptake pattern |
| Choice A | A:Diffuse uptake pattern |
| Choice B | B:Focal uptake pattern |
| Choice C | C:No uptake pattern |
| Choice D | D:Cannot determine from the information given |
| Answer_label | B |
Explanation of each key (a minimal reading sketch follows the list):
- Figure_path: path to the image
- Question: question corresponding to the image
- Answer: the correct answer corresponding to the image
- Choice A: the provided choice A
- Choice B: the provided choice B
- Choice C: the provided choice C
- Choice D: the provided choice D
- Answer_label: the correct answer label
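A minimal reading sketch, assuming `train.csv` has been downloaded from this repo and that its column headers match the table above:
```python
# Sketch: read the metafile and assemble one multiple-choice prompt.
# Assumes train.csv is available locally and its headers match the card.
import pandas as pd

df = pd.read_csv("train.csv")
row = df.iloc[0]

prompt = (
    f"Question: {row['Question']}\n"
    f"{row['Choice A']}\n{row['Choice B']}\n{row['Choice C']}\n{row['Choice D']}\n"
    "Answer with the option letter."
)
print(prompt)
print("Image:", row["Figure_path"], "| Gold:", row["Answer_label"])
```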
| 1,485 | [
[
-0.0228424072265625,
-0.00452423095703125,
0.035552978515625,
0.02716064453125,
-0.035003662109375,
-0.008941650390625,
0.032196044921875,
0.01361083984375,
0.008514404296875,
0.047515869140625,
-0.06634521484375,
-0.0439453125,
-0.047332763671875,
0.0082397... |
pppppppppp2/planeperturbed | 2023-10-13T11:12:52.000Z | [
"region:us"
] | pppppppppp2 | null | null | 1 | 38 | 2023-06-08T19:52:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 647755473.5
num_examples: 5500
download_size: 622143522
dataset_size: 647755473.5
---
# Dataset Card for "planeperturbed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
[
-0.04595947265625,
-0.0297393798828125,
0.0017795562744140625,
0.045074462890625,
-0.0163116455078125,
-0.0008668899536132812,
0.027557373046875,
-0.0010738372802734375,
0.06463623046875,
0.0604248046875,
-0.054229736328125,
-0.035736083984375,
-0.03701782226562... |
argilla/stackoverflow_feedback_demo | 2023-10-12T13:34:31.000Z | [
"size_categories:n<1K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | argilla | null | null | 0 | 38 | 2023-06-23T15:59:47 | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for stackoverflow_feedback_demo
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/stackoverflow_feedback_demo")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/stackoverflow_feedback_demo")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| title | Title | FieldTypes.text | True | False |
| question | Question | FieldTypes.text | True | True |
| answer | Answer | FieldTypes.text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking (a programmatic sketch of this schema follows the table).
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| title_question_fit | Does the title match the question? | QuestionTypes.label_selection | True | N/A | ['yes', 'no'] |
| tags | What are the topics mentioned in this question? | QuestionTypes.multi_label_selection | True | Select all that apply. | ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'] |
| answer_quality | Rate the quality of the answer: | QuestionTypes.rating | True | N/A | [1, 2, 3, 4, 5] |
| new_answer | If needed, correct the answer | QuestionTypes.text | False | If the rating is below 4, please provide a corrected answer | N/A |
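For reference, the same schema can be reconstructed programmatically; the sketch below uses the Argilla 1.x client, with the tag list abbreviated, and minor keyword names may vary across client versions.
```python
# Sketch: recreate the fields/questions above with the Argilla 1.x client.
# The tag list is abbreviated; check keyword names against your version.
import argilla as rg

dataset = rg.FeedbackDataset(
    fields=[
        rg.TextField(name="title", title="Title"),
        rg.TextField(name="question", title="Question", use_markdown=True),
        rg.TextField(name="answer", title="Answer", use_markdown=True),
    ],
    questions=[
        rg.LabelQuestion(
            name="title_question_fit",
            title="Does the title match the question?",
            labels=["yes", "no"],
        ),
        rg.MultiLabelQuestion(
            name="tags",
            title="What are the topics mentioned in this question?",
            description="Select all that apply.",
            labels=["python", "django", "pandas"],  # abbreviated label list
        ),
        rg.RatingQuestion(
            name="answer_quality",
            title="Rate the quality of the answer:",
            values=[1, 2, 3, 4, 5],
        ),
        rg.TextQuestion(
            name="new_answer",
            title="If needed, correct the answer",
            required=False,
        ),
    ],
)
```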
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and named by appending "-suggestion" and "-suggestion-metadata" to the question names; they contain the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"answer": "\u003cp\u003eUnfortunately the only API that isn\u0027t deprecated is located in the ApplicationServices framework, which doesn\u0027t have a bridge support file, and thus isn\u0027t available in the bridge. If you\u0027re wanting to use ctypes, you can use ATSFontGetFileReference after looking up the ATSFontRef.\u003c/p\u003e\r\n\r\n\u003cp\u003eCocoa doesn\u0027t have any native support, at least as of 10.5, for getting the location of a font.\u003c/p\u003e",
"question": "\u003cp\u003eI am using the Photoshop\u0027s javascript API to find the fonts in a given PSD.\u003c/p\u003e\n\n\u003cp\u003eGiven a font name returned by the API, I want to find the actual physical font file that that font name corresponds to on the disc.\u003c/p\u003e\n\n\u003cp\u003eThis is all happening in a python program running on OSX so I guess I\u0027m looking for one of:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eSome Photoshop javascript\u003c/li\u003e\n\u003cli\u003eA Python function\u003c/li\u003e\n\u003cli\u003eAn OSX API that I can call from python\u003c/li\u003e\n\u003c/ul\u003e\n",
"title": "How can I find the full path to a font from its display name on a Mac?"
},
"metadata": {},
"responses": [
{
"status": "submitted",
"user_id": null,
"values": {
"answer_quality": {
"value": 1
},
"new_answer": {
"value": "Sample answer"
},
"tags": {
"value": [
"tkinter"
]
},
"title_question_fit": {
"value": "yes"
}
}
}
],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"answer": "\u003cp\u003eUnfortunately the only API that isn\u0027t deprecated is located in the ApplicationServices framework, which doesn\u0027t have a bridge support file, and thus isn\u0027t available in the bridge. If you\u0027re wanting to use ctypes, you can use ATSFontGetFileReference after looking up the ATSFontRef.\u003c/p\u003e\r\n\r\n\u003cp\u003eCocoa doesn\u0027t have any native support, at least as of 10.5, for getting the location of a font.\u003c/p\u003e",
"answer_quality": [
{
"status": "submitted",
"user_id": null,
"value": 1
}
],
"answer_quality-suggestion": null,
"answer_quality-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"external_id": null,
"metadata": "{}",
"new_answer": [
{
"status": "submitted",
"user_id": null,
"value": "Sample answer"
}
],
"new_answer-suggestion": null,
"new_answer-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"question": "\u003cp\u003eI am using the Photoshop\u0027s javascript API to find the fonts in a given PSD.\u003c/p\u003e\n\n\u003cp\u003eGiven a font name returned by the API, I want to find the actual physical font file that that font name corresponds to on the disc.\u003c/p\u003e\n\n\u003cp\u003eThis is all happening in a python program running on OSX so I guess I\u0027m looking for one of:\u003c/p\u003e\n\n\u003cul\u003e\n\u003cli\u003eSome Photoshop javascript\u003c/li\u003e\n\u003cli\u003eA Python function\u003c/li\u003e\n\u003cli\u003eAn OSX API that I can call from python\u003c/li\u003e\n\u003c/ul\u003e\n",
"tags": [
{
"status": "submitted",
"user_id": null,
"value": [
"tkinter"
]
}
],
"tags-suggestion": null,
"tags-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"title": "How can I find the full path to a font from its display name on a Mac?",
"title_question_fit": [
{
"status": "submitted",
"user_id": null,
"value": "yes"
}
],
"title_question_fit-suggestion": null,
"title_question_fit-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **title** is of type `FieldTypes.text`.
* **question** is of type `FieldTypes.text`.
* **answer** is of type `FieldTypes.text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **title_question_fit** is of type `QuestionTypes.label_selection` with the following allowed values ['yes', 'no'].
* **tags** is of type `QuestionTypes.multi_label_selection` with the following allowed values ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'], and description "Select all that apply.".
* **answer_quality** is of type `QuestionTypes.rating` with the following allowed values [1, 2, 3, 4, 5].
* (optional) **new_answer** is of type `QuestionTypes.text`, and description "If the rating is below 4, please provide a corrected answer".
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **title_question_fit-suggestion** is of type `QuestionTypes.label_selection` with the following allowed values ['yes', 'no'].
* (optional) **tags-suggestion** is of type `QuestionTypes.multi_label_selection` with the following allowed values ['python', 'django', 'python-2.7', 'list', 'python-3.x', 'numpy', 'pandas', 'regex', 'dictionary', 'string', 'matplotlib', 'arrays', 'google-app-engine', 'csv', 'tkinter', 'flask', 'json', 'linux', 'mysql', 'html', 'function', 'file', 'class', 'algorithm', 'windows', 'scipy', 'loops', 'multithreading', 'beautifulsoup', 'django-models', 'for-loop', 'javascript', 'xml', 'sqlalchemy', 'parsing', 'performance', 'datetime', 'osx', 'sorting', 'unicode', 'c++', 'dataframe', 'selenium', 'subprocess', 'pygame', 'java', 'pyqt', 'pip', 'tuples', 'scrapy'].
* (optional) **answer_quality-suggestion** is of type `QuestionTypes.rating` with the following allowed values [1, 2, 3, 4, 5].
* (optional) **new_answer-suggestion** is of type `QuestionTypes.text`.
Additionally, there is one more field, which is optional:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 13,583 | [
[
-0.047271728515625,
-0.06524658203125,
0.025604248046875,
0.0277862548828125,
-0.0004734992980957031,
-0.00731658935546875,
0.00133514404296875,
-0.032257080078125,
0.0416259765625,
0.041412353515625,
-0.0457763671875,
-0.060821533203125,
-0.038330078125,
0.... |
ChanceFocus/flare-sm-bigdata | 2023-06-25T18:15:36.000Z | [
"region:us"
] | ChanceFocus | null | null | 1 | 38 | 2023-06-25T17:55:54 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 18720287
num_examples: 4897
- name: valid
num_bytes: 1278834
num_examples: 798
- name: test
num_bytes: 2379111
num_examples: 1472
download_size: 11003337
dataset_size: 22378232
---
# Dataset Card for "flare-sm-bigdata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 654 | [
[
-0.051361083984375,
-0.01494598388671875,
0.0148162841796875,
0.018829345703125,
-0.01055145263671875,
0.01308441162109375,
0.00678253173828125,
-0.01180267333984375,
0.08013916015625,
0.032318115234375,
-0.058685302734375,
-0.040863037109375,
-0.03704833984375,... |
richardr1126/spider-context-validation | 2023-10-03T20:53:20.000Z | [
"source_datasets:spider",
"language:en",
"license:cc-by-4.0",
"text-to-sql",
"SQL",
"spider",
"validation",
"eval",
"spider-eval",
"region:us"
] | richardr1126 | null | null | 0 | 38 | 2023-06-26T16:56:50 | ---
language:
- en
license:
- cc-by-4.0
source_datasets:
- spider
pretty_name: Spider Context Validation
tags:
- text-to-sql
- SQL
- spider
- validation
- eval
- spider-eval
dataset_info:
features:
- name: db_id
dtype: string
- name: question
dtype: string
- name: db_info
dtype: string
- name: ground_truth
dtype: string
---
# Dataset Card for Spider Context Validation
### Dataset Summary
Spider is a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
This dataset was created to validate spider-fine-tuned LLMs with database context.
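A minimal evaluation sketch built from the fields above; the prompt template is an assumption (match it to whatever template the model was fine-tuned with), and the split name should be verified after loading.
```python
# Evaluation sketch. The prompt template and the "validation" split name
# are assumptions -- print(load_dataset(...)) to check the actual splits.
from datasets import load_dataset

ds = load_dataset("richardr1126/spider-context-validation", split="validation")
ex = ds[0]

prompt = (
    f"Database schema:\n{ex['db_info']}\n\n"
    f"Question: {ex['question']}\n"
    "SQL:"
)
# predicted_sql = my_model.generate(prompt)  # hypothetical model call
print(prompt)
print("Ground truth:", ex["ground_truth"])
```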
### Yale Lily Spider Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
### Languages
The text in the dataset is in English.
### Licensing Information
The spider dataset is licensed under
the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
### Citation
```
@article{yu2018spider,
title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal={arXiv preprint arXiv:1809.08887},
year={2018}
}
``` | 1,450 | [
[
-0.0009307861328125,
-0.034820556640625,
0.0228271484375,
0.00894927978515625,
-0.0103607177734375,
0.01056671142578125,
-0.0067901611328125,
-0.0307769775390625,
0.018707275390625,
0.01407623291015625,
-0.0452880859375,
-0.061676025390625,
-0.0335693359375,
... |
Peppertuna/ChartQADatasetV2 | 2023-09-11T02:26:42.000Z | [
"region:us"
] | Peppertuna | ChartQA dataset demo | null | 2 | 38 | 2023-07-01T20:12:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ShoukanLabs/OpenNiji-0_32237 | 2023-08-16T02:40:59.000Z | [
"region:us"
] | ShoukanLabs | null | null | 0 | 38 | 2023-07-14T05:45:31 | ---
dataset_info:
features:
- name: image
dtype: image
- name: url
dtype: string
- name: prompt
dtype: string
- name: style
dtype: string
splits:
- name: train
num_bytes: 53930726836.349
num_examples: 32237
download_size: 51827864474
dataset_size: 53930726836.349
---
# Dataset Card for "OpenNiji-0_32237"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 480 | [
[
-0.03765869140625,
-0.009429931640625,
0.003185272216796875,
0.0214996337890625,
-0.01102447509765625,
-0.0163421630859375,
0.0007081031799316406,
-0.0109100341796875,
0.0709228515625,
0.0489501953125,
-0.0537109375,
-0.050445556640625,
-0.035888671875,
0.00... |
techiaith/cofnodycynulliad_en-cy | 2023-08-14T10:56:17.000Z | [
"task_categories:translation",
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:sentence-similarity",
"size_categories:100K<n<1M",
"language:en",
"language:cy",
"license:other",
"region:us"
] | techiaith | null | null | 1 | 38 | 2023-07-21T09:25:51 | ---
license: other
task_categories:
- translation
- text-classification
- summarization
- sentence-similarity
language:
- en
- cy
pretty_name: Cofnod Y Cynulliad en-cy
size_categories:
- 100K<n<1M
---
# Dataset Card for cofnodycynulliad_en-cy
## Dataset Description
- **Homepage:** https://github.com/techiaith/cofnod-y-cynulliad_dataset
- **Repository:** https://github.com/techiaith/cofnod-y-cynulliad_dataset.git
- **Point of Contact:** techiaith@bangor.ac.uk
### Dataset Summary
This dataset consists of English-Welsh sentence pairs obtained by parsing the data provided by the [Welsh Parliament](https://cofnod.senedd.cymru/) website.
### Supported Tasks and Leaderboards
- translation
- text classification
- sentence similarity
### Languages
- English
- Welsh
## Dataset Structure
### Data Fields
- source
- target
### Data Splits
- train
## Dataset Creation
The dataset was created via an internal pipeline employing DVC and Python.
### Source Data
#### Initial Data Collection and Normalization
Sentences were dropped from the original scraped sources in the following cases (a sketch of a length-ratio filter of this kind follows the list):
- the sentence contained too many misspelt words
- the source and target sentence lengths differed too much
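The sketch below illustrates the kind of length-ratio check described above; the thresholds are illustrative assumptions, not the pipeline's actual values.
```python
# Illustrative length-ratio filter; the 0.5/2.0 thresholds are assumptions,
# not the values used in the actual pipeline.
def length_ratio_ok(source: str, target: str,
                    lo: float = 0.5, hi: float = 2.0) -> bool:
    """Keep a pair only if the character-length ratio lies within [lo, hi]."""
    if not source or not target:
        return False
    ratio = len(source) / len(target)
    return lo <= ratio <= hi

pairs = [("Hello world.", "Helo byd."), ("Hi", "This is far too long a translation.")]
print([p for p in pairs if length_ratio_ok(*p)])  # only the first pair survives
```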
#### Who are the source language producers?
The language data, including source and target language data, is derived from transcripts of the proceedings of the Senedd's Plenary meetings and their translations.
See [here](https://cofnod.senedd.cymru) for information.
### Licensing Information
This dataset's source data is Crown copyright and is licensed under the [Open Government License](https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/). | 1,672 | [
[
-0.01544189453125,
-0.020111083984375,
0.01027679443359375,
0.00868988037109375,
-0.030487060546875,
-0.0138397216796875,
-0.030487060546875,
-0.0191802978515625,
0.034149169921875,
0.0478515625,
-0.059722900390625,
-0.05975341796875,
-0.035247802734375,
0.0... |
nlplabtdtu/food_wiki | 2023-08-04T03:13:47.000Z | [
"region:us"
] | nlplabtdtu | null | null | 0 | 38 | 2023-08-04T03:06:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
wesley7137/qa_dataset | 2023-08-20T18:20:00.000Z | [
"region:us"
] | wesley7137 | null | null | 0 | 38 | 2023-08-20T18:19:19 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
Skepsun/lawyer_llama_data | 2023-08-24T08:18:43.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"license:apache-2.0",
"legal",
"region:us"
] | Skepsun | null | null | 2 | 38 | 2023-08-24T08:10:08 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- legal
size_categories:
- 10K<n<100K
---
A light consolidation of the open-source data from [lawyer-llama](https://github.com/AndrewZhe/lawyer-llama); the format follows the standard format of [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning), and the `source` field stores the original file name of each record. | 301 | [
[
-0.00923919677734375,
-0.038421630859375,
0.019195556640625,
0.034759521484375,
-0.073974609375,
0.008209228515625,
0.0009918212890625,
-0.031494140625,
0.057464599609375,
0.033905029296875,
-0.0269012451171875,
-0.038909912109375,
-0.0450439453125,
0.006118... |
minh21/COVID-QA-testset-data | 2023-10-06T07:10:41.000Z | [
"region:us"
] | minh21 | null | null | 0 | 38 | 2023-09-25T07:02:29 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 16708455
num_examples: 201
download_size: 442083
dataset_size: 16708455
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-testset-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 639 | [
[
-0.03350830078125,
-0.0136260986328125,
-0.001468658447265625,
0.01055908203125,
-0.01739501953125,
0.00426483154296875,
0.0338134765625,
-0.00453948974609375,
0.058319091796875,
0.0070343017578125,
-0.054107666015625,
-0.049835205078125,
-0.022216796875,
-0... |
erhwenkuo/openorca-chinese-zhtw | 2023-09-26T22:30:01.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | erhwenkuo | null | null | 0 | 38 | 2023-09-26T15:36:15 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 6491661288
num_examples: 4233915
download_size: 4106469779
dataset_size: 6491661288
language:
- zh
license: mit
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: ' openorca-chinese-zhtw'
size_categories:
- 10M<n<100M
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
# Dataset Card for "openorca-chinese-zhtw"
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The original data is primarily in English; this dataset was translated into Traditional Chinese with Google Translate.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be loaded directly via the Hugging Face `datasets` library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
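A streaming sketch matching the recommendation above; field names follow the Data Fields section of this card.
```python
# Streaming sketch, as recommended above -- avoids downloading the full
# ~4 GB of data files up front.
from datasets import load_dataset

ds = load_dataset("erhwenkuo/openorca-chinese-zhtw", split="train", streaming=True)
for example in ds.take(3):  # keep only the first three records
    print(example["id"], example["question"][:80])
```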
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
  eprint={2307.09288},
  archivePrefix={arXiv}
}
@article{touvron2023llama1,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
``` | 8,980 | [
[
-0.039947509765625,
-0.06549072265625,
0.01531982421875,
0.00782012939453125,
-0.0037078857421875,
-0.0113372802734375,
-0.0207672119140625,
-0.053741455078125,
0.0271759033203125,
0.027862548828125,
-0.04644775390625,
-0.042724609375,
-0.02862548828125,
0.0... |
Eitanli/abstracts | 2023-10-11T12:10:07.000Z | [
"region:us"
] | Eitanli | null | null | 0 | 38 | 2023-10-11T12:09:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: recall
dtype: int64
- name: article_title
dtype: string
- name: topic
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 232927086.52719492
num_examples: 135922
- name: test
num_bytes: 29117171.077408876
num_examples: 16991
- name: valid
num_bytes: 29115457.395396195
num_examples: 16990
download_size: 157551845
dataset_size: 291159715.0
---
# Dataset Card for "abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 795 | [
[
-0.039398193359375,
-0.01447296142578125,
0.035858154296875,
0.01282501220703125,
-0.0229949951171875,
0.004520416259765625,
0.02105712890625,
-0.0265045166015625,
0.07342529296875,
0.01654052734375,
-0.047821044921875,
-0.0611572265625,
-0.052734375,
-0.002... |
pasupula/insurance-qa | 2023-10-18T03:06:55.000Z | [
"region:us"
] | pasupula | The Health Insurance Questions and Answers dataset provides a comprehensive collection of common inquiries related to health insurance, along with informative responses. This resource offers individuals, healthcare professionals, and organizations valuable insights into the complex world of health insurance. It covers topics such as the fundamentals of health insurance, its significance, obtaining coverage, covered services, and explanations of key terms like premium, deductible, and copayment. The dataset also delves into various types of health insurance plans, including Health Maintenance Organizations (HMOs), Preferred Provider Organizations (PPOs), and Exclusive Provider Organizations (EPOs). Moreover, it addresses the impact of pre-existing conditions on coverage eligibility and discusses options for adding family members to insurance plans. Additionally, it explores the concept of open enrollment periods and the benefits of Health Savings Accounts (HSAs) and Flexible Spending Accounts (FSAs) for managing healthcare expenses. This dataset is a valuable resource for anyone seeking to understand, compare, and make informed decisions about health insurance. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 0 | 38 | 2023-10-17T23:57:40 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
HoangLe1312/codeforces-problem-solution | 2023-10-22T11:59:17.000Z | [
"region:us"
] | HoangLe1312 | null | null | 0 | 38 | 2023-10-18T15:01:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
quyanh/lima | 2023-10-19T03:47:15.000Z | [
"region:us"
] | quyanh | null | null | 0 | 38 | 2023-10-19T03:13:48 | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 3033066
num_examples: 1030
download_size: 1711487
dataset_size: 3033066
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lima"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 476 | [
[
-0.0364990234375,
-0.0244903564453125,
0.0258331298828125,
0.038360595703125,
-0.034637451171875,
-0.0125274658203125,
0.031982421875,
-0.01904296875,
0.07177734375,
0.0433349609375,
-0.0589599609375,
-0.059478759765625,
-0.061004638671875,
-0.01292419433593... |
sam2ai/hindi_siqa_mini | 2023-10-19T21:47:43.000Z | [
"region:us"
] | sam2ai | null | null | 0 | 38 | 2023-10-19T20:55:47 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answerA
dtype: string
- name: answerB
dtype: string
- name: answerC
dtype: string
- name: label
dtype: int64
splits:
- name: validation
num_bytes: 23348
num_examples: 50
- name: train
num_bytes: 23348
num_examples: 50
download_size: 32064
dataset_size: 46696
---
# Dataset Card for "hindi_siqa_mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 729 | [
[
-0.03717041015625,
-0.020263671875,
0.0004322528839111328,
0.016265869140625,
-0.0206756591796875,
0.0072174072265625,
0.0147247314453125,
0.0016412734985351562,
0.05889892578125,
0.0157623291015625,
-0.057952880859375,
-0.043304443359375,
-0.0557861328125,
... |
coastalcph/fm_aliases | 2023-10-20T08:18:20.000Z | [
"region:us"
] | coastalcph | null | null | 0 | 38 | 2023-10-20T07:40:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sayakpaul/drawbench-upsampled-zephyr-7b-alpha | 2023-10-21T07:37:24.000Z | [
"region:us"
] | sayakpaul | null | null | 0 | 38 | 2023-10-21T07:37:21 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Upsampled Prompt
dtype: string
- name: Category
dtype: string
splits:
- name: train
num_bytes: 87868
num_examples: 200
download_size: 53341
dataset_size: 87868
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "drawbench-upsampled-zephyr-7b-alpha"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 540 | [
[
-0.03656005859375,
-0.006160736083984375,
0.01340484619140625,
0.0146026611328125,
-0.0303955078125,
0.01154327392578125,
0.017852783203125,
-0.0236663818359375,
0.06591796875,
0.036407470703125,
-0.058380126953125,
-0.03680419921875,
-0.03875732421875,
-0.0... |
atmallen/qm_alice_mixture_1.0e_0.5p_finetuning | 2023-10-27T05:48:07.000Z | [
"region:us"
] | atmallen | null | null | 0 | 38 | 2023-10-27T05:48:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: statement
dtype: string
- name: choices
sequence: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: true_label
dtype: bool
splits:
- name: train
num_bytes: 19204917
num_examples: 200000
- name: validation
num_bytes: 1933510
num_examples: 20000
- name: test
num_bytes: 1945676
num_examples: 20000
download_size: 6342808
dataset_size: 23084103
---
# Dataset Card for "qm_alice_mixture_1.0e_0.5p_finetuning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 874 | [
[
-0.044677734375,
-0.0224456787109375,
0.0139312744140625,
0.0238189697265625,
-0.0165557861328125,
-0.005435943603515625,
0.033355712890625,
-0.0014514923095703125,
0.05609130859375,
0.03155517578125,
-0.049713134765625,
-0.051483154296875,
-0.03607177734375,
... |
Zahra032/qadataset | 2023-11-02T19:32:28.000Z | [
"region:us"
] | Zahra032 | null | null | 0 | 38 | 2023-10-29T14:55:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mallam-ai/marx-engels | 2023-10-31T06:26:43.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:pddl",
"region:us"
] | mallam-ai | null | null | 0 | 38 | 2023-10-31T05:57:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: content
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 20866538
num_examples: 1297
download_size: 11056454
dataset_size: 20866538
license: pddl
task_categories:
- text-generation
language:
- en
pretty_name: Marx and Engels Internet Archive
size_categories:
- 1K<n<10K
---
# Dataset Card for "marx-engels"
This dataset was generated by scraping https://www.marxists.org/archive/marx/index.htm
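A minimal loading sketch (the repository id comes from this card; the field names are taken from the YAML header above):
```python
from datasets import load_dataset

# Load the single train split declared in the YAML header.
ds = load_dataset("mallam-ai/marx-engels", split="train")

record = ds[0]
print(record["title"])          # article title
print(record["url"])            # source page on marxists.org
print(record["content"][:300])  # start of the article text
```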
## Licensing Information
According to **marxists.org**, unless otherwise noted, texts in the archive are in the public domain.
See https://www.marxists.org/admin/janitor/faq.htm for further information. | 810 | [
[
-0.00658416748046875,
-0.0028057098388671875,
0.01410675048828125,
0.005886077880859375,
-0.01715087890625,
0.0011262893676757812,
0.0040283203125,
-0.021636962890625,
0.023345947265625,
0.0419921875,
-0.0279693603515625,
-0.04852294921875,
-0.035888671875,
... |
tomekkorbak/detoxify-pile-chunk3-50000-100000 | 2022-10-06T02:59:17.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 37 | 2022-10-03T18:47:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lambdalabs/naruto-blip-captions | 2022-10-27T21:17:06.000Z | [
"region:us"
] | lambdalabs | null | null | 12 | 37 | 2022-10-27T18:02:46 | # Dataset Card for Naruto BLIP captions
_Dataset used to train [TBD](TBD)._
The original images were obtained from [narutopedia.com](https://naruto.fandom.com/wiki/Narutopedia) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
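A minimal loading sketch (the repository id comes from this card; only a `train` split exists, per the note above):
```python
from datasets import load_dataset

# Load the single provided split.
ds = load_dataset("lambdalabs/naruto-blip-captions", split="train")

sample = ds[0]
print(sample["text"])  # BLIP-generated caption string
image = sample["image"]  # varying-size PIL JPEG
```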
## Example Stable Diffusion outputs

> "Bill Gates with a hoodie", "John Oliver with Naruto style", "Hello Kitty with Naruto style", "Lebron James with a hat", "Mickael Jackson as a ninja", "Banksy Street art of ninja"
## Citation
If you use this dataset, please cite it as:
```
@misc{cervenka2022naruto2,
author = {Cervenka, Eole},
title = {Naruto BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/lambdalabs/naruto-blip-captions/}}
}
``` | 1,020 | [
[
-0.0174713134765625,
-0.0133819580078125,
-0.0184173583984375,
0.043121337890625,
-0.04412841796875,
0.0299530029296875,
-0.0160369873046875,
-0.0184478759765625,
0.037017822265625,
0.031829833984375,
-0.042999267578125,
-0.0185546875,
-0.0308990478515625,
0... |
fewshot-goes-multilingual/sk_csfd-movie-reviews | 2022-12-18T21:30:31.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:sk",
"license:cc-by-sa-4.0",
"movie reviews",
"rat... | fewshot-goes-multilingual | null | null | 0 | 37 | 2022-12-18T21:28:17 | ---
annotations_creators:
- crowdsourced
language:
- sk
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: CSFD movie reviews (Slovak)
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- movie reviews
- rating prediction
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for CSFD movie reviews (Slovak)
## Dataset Description
The dataset contains user reviews from the Czech/Slovak movie database website <https://csfd.cz>.
Each review contains text, rating, date, and basic information about the movie (or TV series).
The dataset has in total (train+validation+test) 30,000 reviews. The data is balanced - each rating has approximately the same frequency.
## Dataset Features
Each sample contains:
- `review_id`: unique string identifier of the review.
- `rating_str`: string representation of the rating (from "0/5" to "5/5")
- `rating_int`: integer representation of the rating (from 0 to 5)
- `date`: date of publishing the review (just date, no time nor timezone)
- `comment_language`: language of the review (always "sk")
- `comment`: the string of the review
- `item_title`: title of the reviewed item
- `item_year`: publishing year of the item (string, can also be a range)
- `item_kind`: kind of the item - either "film" or "seriál"
- `item_genres`: list of genres of the item
- `item_directors`: list of director names of the item
- `item_screenwriters`: list of screenwriter names of the item
- `item_cast`: list of actors and actresses in the item
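A minimal usage sketch built on these fields (the repository id is taken from this card's header, and the `train` split name from the summary above):
```python
from datasets import load_dataset

ds = load_dataset("fewshot-goes-multilingual/sk_csfd-movie-reviews", split="train")

# Keep only clearly positive reviews via the integer rating field.
positive = ds.filter(lambda row: row["rating_int"] >= 4)
print(len(positive), positive[0]["comment"][:100])
```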
## Dataset Source
The data was mined and sampled from the <https://csfd.cz> website.
Make sure to comply with the terms and conditions of the website operator when using the data.
| 1,776 | [
[
-0.038543701171875,
-0.0228424072265625,
0.006031036376953125,
0.0477294921875,
-0.056243896484375,
0.002788543701171875,
-0.0158843994140625,
-0.0017423629760742188,
0.02093505859375,
0.05389404296875,
-0.07177734375,
-0.07257080078125,
-0.0278778076171875,
... |
keremberke/painting-style-classification | 2023-01-18T09:30:28.000Z | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"region:us"
] | keremberke | null | @misc{ wiki-art_dataset,
title = { wiki art Dataset },
type = { Open Source Dataset },
author = { Art Dataset },
howpublished = { \\url{ https://universe.roboflow.com/art-dataset/wiki-art } },
url = { https://universe.roboflow.com/art-dataset/wiki-art },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { mar },
note = { visited on 2023-01-18 },
} | 3 | 37 | 2023-01-18T09:27:05 | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="keremberke/painting-style-classification" src="https://huggingface.co/datasets/keremberke/painting-style-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Realism', 'Art_Nouveau_Modern', 'Analytical_Cubism', 'Cubism', 'Expressionism', 'Action_painting', 'Synthetic_Cubism', 'Symbolism', 'Ukiyo_e', 'Naive_Art_Primitivism', 'Post_Impressionism', 'Impressionism', 'Fauvism', 'Rococo', 'Minimalism', 'Mannerism_Late_Renaissance', 'Color_Field_Painting', 'High_Renaissance', 'Romanticism', 'Pop_Art', 'Contemporary_Realism', 'Baroque', 'New_Realism', 'Pointillism', 'Northern_Renaissance', 'Early_Renaissance', 'Abstract_Expressionism']
```
### Number of Images
```json
{'valid': 1295, 'train': 4493, 'test': 629}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/painting-style-classification", name="full")
example = ds['train'][0]
```
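The integer labels can be mapped back to the style names listed above via the split's features; a small hedged follow-up (the column name `labels` is an assumption, since the card does not state it):
```python
# Hedged sketch: the classification column is assumed to be called 'labels';
# adjust if the actual feature name differs.
label_feature = ds['train'].features['labels']
print(label_feature.int2str(example['labels']))  # e.g. 'Impressionism'
```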
### Roboflow Dataset Page
[https://universe.roboflow.com/art-dataset/wiki-art/dataset/1](https://universe.roboflow.com/art-dataset/wiki-art/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ wiki-art_dataset,
title = { wiki art Dataset },
type = { Open Source Dataset },
author = { Art Dataset },
howpublished = { \\url{ https://universe.roboflow.com/art-dataset/wiki-art } },
url = { https://universe.roboflow.com/art-dataset/wiki-art },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { mar },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 9, 2022 at 1:47 AM GMT
It includes 6417 images.
Images are annotated in folder format across 27 style classes.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| 2,169 | [
[
-0.041748046875,
-0.020660400390625,
0.0178375244140625,
-0.00435638427734375,
-0.015350341796875,
-0.0019016265869140625,
-0.0051727294921875,
-0.036865234375,
0.031707763671875,
0.032012939453125,
-0.04791259765625,
-0.0614013671875,
-0.042510986328125,
0.... |