id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
nielsr/image-segmentation-toy-data | 2022-11-08T15:08:25.000Z | [
"region:us"
] | nielsr | null | null | 0 | 8 | 2022-11-08T14:55:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pacovaldez/stackoverflow-questions | 2022-11-10T00:14:37.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"stackoverflow",
"technic... | pacovaldez | null | null | 30 | 8 | 2022-11-09T01:16:19 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: stackoverflow_post_questions
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- stackoverflow
- technical questions
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for Stackoverflow Post Questions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Contributions](#contributions)
## Dataset Description
Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process
is prioritizing the question. The classification scale usually consists of four values, P0, P1, P2, and P3, with different meanings across the industry. On
the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions on SO are
usually annotated and curated by thousands of people, providing metadata about the quality of each question. This dataset aims to provide an accurate prioritization of programming
questions.
### Dataset Summary
The dataset contains the title and body of Stack Overflow questions and a label value (0, 1, 2, 3) that was calculated using thresholds defined by SO badges.
### Languages
English
## Dataset Structure
- `title`: string
- `body`: string
- `label`: int
### Data Splits
The split is 40/40/20, where classes have been balanced to be around the same size.
## Dataset Creation
The dataset was extracted and labeled with the following query in BigQuery:
```sql
SELECT
title,
body,
CASE
WHEN score >= 100 OR favorite_count >= 100 OR view_count >= 10000 THEN 0
WHEN score >= 25 OR favorite_count >= 25 OR view_count >= 2500 THEN 1
WHEN score >= 10 OR favorite_count >= 10 OR view_count >= 1000 THEN 2
ELSE 3
END AS label
FROM `bigquery-public-data`.stackoverflow.posts_questions
```
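The `CASE` thresholds above can be mirrored in plain Python, which makes the labeling rule explicit outside of SQL. This is an illustrative sketch, not part of the dataset's tooling; the function name is an assumption, and non-null counts are assumed.

```python
# Illustrative re-implementation of the BigQuery CASE expression above.
# Not part of the dataset's pipeline; shown only to make the thresholds explicit.
def priority_label(score: int, favorite_count: int, view_count: int) -> int:
    """Map Stack Overflow engagement metrics to a priority label (0 = highest)."""
    if score >= 100 or favorite_count >= 100 or view_count >= 10000:
        return 0
    if score >= 25 or favorite_count >= 25 or view_count >= 2500:
        return 1
    if score >= 10 or favorite_count >= 10 or view_count >= 1000:
        return 2
    return 3

print(priority_label(score=150, favorite_count=0, view_count=500))  # 0
print(priority_label(score=5, favorite_count=2, view_count=1200))   # 2
print(priority_label(score=1, favorite_count=0, view_count=50))     # 3
```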
### Source Data
The data was extracted from the Big Query public dataset: `bigquery-public-data.stackoverflow.posts_questions`
#### Initial Data Collection and Normalization
The original dataset contained high class imbalance:

| label | count |
|---|---|
| 0 | 977,424 |
| 1 | 2,401,534 |
| 2 | 3,418,179 |
| 3 | 16,222,990 |
| Grand Total | 23,020,127 |

The data was sampled from each class to have around the same number of records in every class.
### Contributions
Thanks to [@pacofvf](https://github.com/pacofvf) for adding this dataset.
| 2,855 | [
[
-0.0703125,
-0.04644775390625,
0.010009765625,
0.018218994140625,
-0.01800537109375,
0.004970550537109375,
-0.006473541259765625,
-0.01288604736328125,
0.0286865234375,
0.0452880859375,
-0.03302001953125,
-0.04779052734375,
-0.05072021484375,
-0.005626678466... |
rifkiaputri/idk-mrc | 2023-05-23T07:43:23.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:... | rifkiaputri | null | null | 2 | 8 | 2022-11-11T05:56:43 | ---
annotations_creators:
- machine-generated
- expert-generated
language:
- id
language_creators:
- machine-generated
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: IDK-MRC
size_categories:
- 1K<n<10K
source_datasets:
- extended|tydiqa
tags: []
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for IDK-MRC
## Dataset Description
- **Repository:** [rifkiaputri/IDK-MRC](https://github.com/rifkiaputri/IDK-MRC)
- **Paper:** [PDF](https://aclanthology.org/2022.emnlp-main.465/)
- **Point of Contact:** [rifkiaputri](https://github.com/rifkiaputri)
### Dataset Summary
I(n)dontKnow-MRC (IDK-MRC) is an Indonesian Machine Reading Comprehension dataset that covers answerable and unanswerable questions. Starting from the existing answerable questions in TyDiQA, the new unanswerable questions in IDK-MRC are generated using a question generation model and human-written questions. Each paragraph in the dataset has a set of answerable and unanswerable questions with the corresponding answers.
### Supported Tasks
IDK-MRC is mainly intended to train Machine Reading Comprehension or extractive QA models.
### Languages
Indonesian
## Dataset Structure
### Data Instances
```json
{
"context": "Para ilmuwan menduga bahwa megalodon terlihat seperti hiu putih yang lebih kekar, walaupun hiu ini juga mungkin tampak seperti hiu raksasa (Cetorhinus maximus) atau hiu harimau-pasir (Carcharias taurus). Hewan ini dianggap sebagai salah satu predator terbesar dan terkuat yang pernah ada, dan fosil-fosilnya sendiri menunjukkan bahwa panjang maksimal hiu raksasa ini mencapai 18 m, sementara rata-rata panjangnya berkisar pada angka 10,5 m. Rahangnya yang besar memiliki kekuatan gigitan antara 110.000 hingga 180.000 newton. Gigi mereka tebal dan kuat, dan telah berevolusi untuk menangkap mangsa dan meremukkan tulang.",
"qas":
[
{
"id": "indonesian--6040202845759439489-1",
"is_impossible": false,
"question": "Apakah jenis hiu terbesar di dunia ?",
"answers":
[
{
"text": "megalodon",
"answer_start": 27
}
]
},
{
"id": "indonesian-0426116372962619813-unans-h-2",
"is_impossible": true,
"question": "Apakah jenis hiu terkecil di dunia?",
"answers":
[]
},
{
"id": "indonesian-2493757035872656854-unans-h-2",
"is_impossible": true,
"question": "Apakah jenis hiu betina terbesar di dunia?",
"answers":
[]
}
]
}
```
### Data Fields
Each instance has several fields:
- `context`: context passage/paragraph as a string
- `qas`: list of questions related to the `context`
- `id`: question ID as a string
- `is_impossible`: whether the question is unanswerable (impossible to answer) or not as a boolean
- `question`: question as a string
- `answers`: list of answers
- `text`: answer as a string
- `answer_start`: answer start index as an integer
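As a minimal sketch in plain Python, the `is_impossible` flag documented above separates answerable from unanswerable questions in an instance (the toy instance below is abbreviated from the example in Data Instances; the `id` values are placeholders):

```python
# Split an IDK-MRC instance's questions by answerability, using the
# fields documented above. The instance is an abbreviated toy example.
instance = {
    "context": "Para ilmuwan menduga bahwa megalodon ...",
    "qas": [
        {"id": "q1", "is_impossible": False,
         "question": "Apakah jenis hiu terbesar di dunia ?",
         "answers": [{"text": "megalodon", "answer_start": 27}]},
        {"id": "q2", "is_impossible": True,
         "question": "Apakah jenis hiu terkecil di dunia?",
         "answers": []},
    ],
}

answerable = [q for q in instance["qas"] if not q["is_impossible"]]
unanswerable = [q for q in instance["qas"] if q["is_impossible"]]

print(len(answerable), len(unanswerable))   # 1 1
print(answerable[0]["answers"][0]["text"])  # megalodon
```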
### Data Splits
- `train`: 9,332 (5,042 answerable, 4,290 unanswerable)
- `valid`: 764 (382 answerable, 382 unanswerable)
- `test`: 844 (422 answerable, 422 unanswerable)
## Dataset Creation
### Curation Rationale
The IDK-MRC dataset is built on the existing paragraphs and answerable questions (ans) in TyDiQA-GoldP (Clark et al., 2020). The new unanswerable questions are automatically generated using a combination of mT5 (Xue et al., 2021) and XLM-R (Conneau et al., 2020) models, and are then manually verified by human annotators (filtered ans and filtered unans). We also asked the annotators to manually write additional unanswerable questions as described in §3.3 (additional unans). Each paragraph in the final dataset has a set of filtered ans, filtered unans, and additional unans questions.
### Annotations
#### Annotation process
In our dataset collection pipeline, the annotators are asked to validate the model-generated unanswerable questions and to write new additional unanswerable questions.
#### Who are the annotators?
We recruit four annotators with 2+ years of experience in Indonesian NLP annotation using direct recruitment. All of them are Indonesian native speakers who reside in Indonesia (Java Island) and fall under the 18–34 age category. We set the payment to around $7.5 per hour. Given the annotators’ demographic, we ensure that the payment is above the minimum wage rate (as of December 2021). All annotators also have signed the consent form and agreed to participate in this project.
## Considerations for Using the Data
The paragraphs and answerable questions that we utilized to build IDK-MRC dataset are taken from Indonesian subset of TyDiQA-GoldP dataset (Clark et al., 2020), which originates from Wikipedia articles. Since those articles are written from a neutral point of view, the risk of harmful content is minimal. Also, all model-generated questions in our dataset have been validated by human annotators to eliminate the risk of harmful questions. During the manual question generation process, the annotators are also encouraged to avoid producing possibly offensive questions.
Even so, we argue that further assessment is needed before using our dataset and models in real-world applications. This measurement is especially required for the pre-trained language models used in our experiments, namely mT5 (Xue et al., 2021), IndoBERT (Wilie et al., 2020), mBERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020). These language models are mostly pre-trained on the common-crawl dataset, which may contain harmful biases or stereotypes.
## Additional Information
### Licensing Information
CC BY-SA 4.0
### Citation Information
```bibtex
@inproceedings{putri-oh-2022-idk,
title = "{IDK}-{MRC}: Unanswerable Questions for {I}ndonesian Machine Reading Comprehension",
author = "Putri, Rifki Afina and
Oh, Alice",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.465",
pages = "6918--6933",
}
```
| 6,548 | [
[
-0.0521240234375,
-0.06341552734375,
0.01299285888671875,
-0.0015058517456054688,
-0.0214691162109375,
-0.01125335693359375,
-0.0079498291015625,
-0.0297088623046875,
0.01806640625,
0.048431396484375,
-0.04833984375,
-0.047027587890625,
-0.0255279541015625,
... |
bigbio/bio_simlex | 2022-12-22T15:43:27.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | Bio-SimLex enables intrinsic evaluation of word representations. This evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs). | @article{article,
title = {
Bio-SimVerb and Bio-SimLex: Wide-coverage evaluation sets of word
similarity in biomedicine
},
author = {Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna},
year = 2018,
month = {02},
journal = {BMC Bioinformatics},
volume = 19,
pages = {},
doi = {10.1186/s12859-018-2039-z}
} | 1 | 8 | 2022-11-13T22:06:24 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: Bio-SimLex
homepage: https://github.com/cambridgeltl/bio-simverb
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for Bio-SimLex
## Dataset Description
- **Homepage:** https://github.com/cambridgeltl/bio-simverb
- **Pubmed:** True
- **Public:** True
- **Tasks:** STS
Bio-SimLex enables intrinsic evaluation of word representations. This evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs).
## Citation Information
```
@article{article,
title = {
Bio-SimVerb and Bio-SimLex: Wide-coverage evaluation sets of word
similarity in biomedicine
},
author = {Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna},
year = 2018,
month = {02},
journal = {BMC Bioinformatics},
volume = 19,
pages = {},
doi = {10.1186/s12859-018-2039-z}
}
```
| 1,279 | [
[
-0.006256103515625,
-0.026763916015625,
0.04656982421875,
0.01302337646484375,
-0.01544952392578125,
0.0022373199462890625,
-0.0101470947265625,
-0.0190277099609375,
0.018585205078125,
-0.0017604827880859375,
-0.0379638671875,
-0.053955078125,
-0.052398681640625... |
bigbio/evidence_inference | 2022-12-22T15:44:37.000Z | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"region:us"
] | bigbio | The dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple
treatments. Each of these articles will have multiple questions, or 'prompts' associated with them.
These prompts will ask about the relationship between an intervention and comparator with respect to an outcome,
as reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared
to placebo on the duration of headaches. For the sake of this task, we assume that a particular article
will report that the intervention of interest either significantly increased, significantly decreased
or had no significant effect on the outcome, relative to the comparator. | @inproceedings{deyoung-etal-2020-evidence,
title = "Evidence Inference 2.0: More Data, Better Models",
author = "DeYoung, Jay and
Lehman, Eric and
Nye, Benjamin and
Marshall, Iain and
Wallace, Byron C.",
booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.bionlp-1.13",
pages = "123--132",
} | 1 | 8 | 2022-11-13T22:08:29 |
---
language:
- en
bigbio_language:
- English
license: mit
multilinguality: monolingual
bigbio_license_shortname: MIT
pretty_name: Evidence Inference 2.0
homepage: https://github.com/jayded/evidence-inference
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for Evidence Inference 2.0
## Dataset Description
- **Homepage:** https://github.com/jayded/evidence-inference
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
The dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple
treatments. Each of these articles will have multiple questions, or 'prompts' associated with them.
These prompts will ask about the relationship between an intervention and comparator with respect to an outcome,
as reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared
to placebo on the duration of headaches. For the sake of this task, we assume that a particular article
will report that the intervention of interest either significantly increased, significantly decreased
or had no significant effect on the outcome, relative to the comparator.
## Citation Information
```
@inproceedings{deyoung-etal-2020-evidence,
title = "Evidence Inference 2.0: More Data, Better Models",
author = "DeYoung, Jay and
Lehman, Eric and
Nye, Benjamin and
Marshall, Iain and
Wallace, Byron C.",
booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.bionlp-1.13",
pages = "123--132",
}
```
| 1,766 | [
[
-0.0016431808471679688,
-0.0596923828125,
0.04290771484375,
0.01959228515625,
-0.02313232421875,
-0.030364990234375,
-0.01371002197265625,
-0.03826904296875,
0.0118560791015625,
0.01800537109375,
-0.0322265625,
-0.047088623046875,
-0.0555419921875,
0.0070190... |
bigbio/multi_xscience | 2022-12-22T15:45:44.000Z | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"arxiv:2010.14235",
"region:us"
] | bigbio | Multi-document summarization is a challenging task for which there exist few large-scale datasets.
We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles.
Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section
of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization,
a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and
empirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal that
Multi-XScience is well suited for abstractive models. | @misc{https://doi.org/10.48550/arxiv.2010.14235,
doi = {10.48550/ARXIV.2010.14235},
url = {https://arxiv.org/abs/2010.14235},
author = {Lu, Yao and Dong, Yue and Charlin, Laurent},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
} | 1 | 8 | 2022-11-13T22:10:18 |
---
language:
- en
bigbio_language:
- English
license: mit
multilinguality: monolingual
bigbio_license_shortname: MIT
pretty_name: Multi-XScience
homepage: https://github.com/yaolu/Multi-XScience
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- PARAPHRASING
- SUMMARIZATION
---
# Dataset Card for Multi-XScience
## Dataset Description
- **Homepage:** https://github.com/yaolu/Multi-XScience
- **Pubmed:** False
- **Public:** True
- **Tasks:** PARA,SUM
Multi-document summarization is a challenging task for which there exist few large-scale datasets.
We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles.
Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section
of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization,
a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and
empirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal that
Multi-XScience is well suited for abstractive models.
## Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2010.14235,
doi = {10.48550/ARXIV.2010.14235},
url = {https://arxiv.org/abs/2010.14235},
author = {Lu, Yao and Dong, Yue and Charlin, Laurent},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| 1,803 | [
[
-0.01242828369140625,
-0.0208282470703125,
0.028228759765625,
0.00045037269592285156,
-0.01047515869140625,
0.01049041748046875,
-0.0157318115234375,
-0.03216552734375,
0.033966064453125,
0.018829345703125,
-0.033233642578125,
-0.041595458984375,
-0.044464111328... |
bigbio/ntcir_13_medweb | 2022-12-22T15:46:09.000Z | [
"multilinguality:multilingual",
"language:en",
"language:zh",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | bigbio | The NTCIR-13 MedWeb (Medical Natural Language Processing for Web Document) task requires
performing multi-label classification, in which labels for eight diseases/symptoms must
be assigned to each tweet. Given pseudo-tweets, the output is a Positive:p or Negative:n
label for each of the eight diseases/symptoms. The achievements of this task can almost be
directly applied to a fundamental engine for actual applications.
This task provides pseudo-Twitter messages in a cross-language and multi-label corpus,
covering three languages (Japanese, English, and Chinese), and annotated with eight
labels such as influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache,
fever, runny nose, and cold.
For more information, see:
http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html
As this dataset also provides a parallel corpus of pseudo-tweets for English,
Japanese and Chinese, it can also be used to train translation models between
these three languages. | @article{wakamiya2017overview,
author = {Shoko Wakamiya, Mizuki Morita, Yoshinobu Kano, Tomoko Ohkuma and Eiji Aramaki},
title = {Overview of the NTCIR-13 MedWeb Task},
journal = {Proceedings of the 13th NTCIR Conference on Evaluation of Information Access Technologies (NTCIR-13)},
year = {2017},
url = {
http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings13/pdf/ntcir/01-NTCIR13-OV-MEDWEB-WakamiyaS.pdf
},
} | 0 | 8 | 2022-11-13T22:11:06 |
---
language:
- en
- zh
- ja
bigbio_language:
- English
- Chinese
- Japanese
license: cc-by-4.0
multilinguality: multilingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: NTCIR-13 MedWeb
homepage: http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- TRANSLATION
- TEXT_CLASSIFICATION
---
# Dataset Card for NTCIR-13 MedWeb
## Dataset Description
- **Homepage:** http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html
- **Pubmed:** False
- **Public:** False
- **Tasks:** TRANSL,TXTCLASS
The NTCIR-13 MedWeb (Medical Natural Language Processing for Web Document) task requires
performing multi-label classification, in which labels for eight diseases/symptoms must
be assigned to each tweet. Given pseudo-tweets, the output is a Positive:p or Negative:n
label for each of the eight diseases/symptoms. The achievements of this task can almost be
directly applied to a fundamental engine for actual applications.
This task provides pseudo-Twitter messages in a cross-language and multi-label corpus,
covering three languages (Japanese, English, and Chinese), and annotated with eight
labels such as influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache,
fever, runny nose, and cold.
For more information, see:
http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html
As this dataset also provides a parallel corpus of pseudo-tweets for English,
Japanese and Chinese, it can also be used to train translation models between
these three languages.
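As a rough sketch of the classification side of the task, the eight per-symptom p/n labels can be encoded as a binary vector for standard multi-label classifiers. The label keys and function below are illustrative assumptions, not the official NTCIR-13 schema:

```python
# Illustrative encoding of the eight p/n symptom labels as a binary vector,
# the usual input format for multi-label classifiers. Label keys follow the
# description above; exact field names in the corpus may differ.
SYMPTOMS = ["influenza", "diarrhea", "hayfever", "cough",
            "headache", "fever", "runnynose", "cold"]

def encode(labels: dict) -> list:
    """labels: dict mapping symptom name -> 'p' (positive) or 'n' (negative)."""
    return [1 if labels.get(s) == "p" else 0 for s in SYMPTOMS]

example = {"influenza": "n", "diarrhea": "n", "hayfever": "n", "cough": "p",
           "headache": "p", "fever": "n", "runnynose": "n", "cold": "p"}
print(encode(example))  # [0, 0, 0, 1, 1, 0, 0, 1]
```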
## Citation Information
```
@article{wakamiya2017overview,
author = {Shoko Wakamiya, Mizuki Morita, Yoshinobu Kano, Tomoko Ohkuma and Eiji Aramaki},
title = {Overview of the NTCIR-13 MedWeb Task},
journal = {Proceedings of the 13th NTCIR Conference on Evaluation of Information Access Technologies (NTCIR-13)},
year = {2017},
url = {
http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings13/pdf/ntcir/01-NTCIR13-OV-MEDWEB-WakamiyaS.pdf
},
}
```
| 2,056 | [
[
-0.01030731201171875,
-0.0251617431640625,
0.0184326171875,
0.032989501953125,
-0.021331787109375,
0.011871337890625,
-0.01406097412109375,
-0.053619384765625,
0.03509521484375,
0.01226043701171875,
-0.040740966796875,
-0.056671142578125,
-0.053985595703125,
... |
bigbio/pmc_patients | 2022-12-22T15:46:17.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2202.13876",
"region:us"
] | bigbio | This dataset is used for calculating the similarity between two patient descriptions. | @misc{zhao2022pmcpatients,
title={PMC-Patients: A Large-scale Dataset of Patient Notes and Relations Extracted from Case
Reports in PubMed Central},
author={Zhengyun Zhao and Qiao Jin and Sheng Yu},
year={2022},
eprint={2202.13876},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1 | 8 | 2022-11-13T22:11:31 |
---
language:
- en
bigbio_language:
- English
license: cc-by-nc-sa-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_NC_SA_4p0
pretty_name: PMC-Patients
homepage: https://github.com/zhao-zy15/PMC-Patients
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for PMC-Patients
## Dataset Description
- **Homepage:** https://github.com/zhao-zy15/PMC-Patients
- **Pubmed:** True
- **Public:** True
- **Tasks:** STS
This dataset is used for calculating the similarity between two patient descriptions.
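As a rough illustration (not part of the dataset itself; the embedding vectors below are toy values), similarity between two patient descriptions is typically scored with cosine similarity over their embeddings:

```python
import math

# Cosine similarity between two embedding vectors, the usual scoring
# function for semantic-similarity tasks such as PMC-Patients.
# The vectors here are toy values, not real patient-note embeddings.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

u = [0.1, 0.3, -0.2]
v = [0.2, 0.1, -0.1]
print(round(cosine(u, v), 4))  # 0.7638
```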
## Citation Information
```
@misc{zhao2022pmcpatients,
title={PMC-Patients: A Large-scale Dataset of Patient Notes and Relations Extracted from Case
Reports in PubMed Central},
author={Zhengyun Zhao and Qiao Jin and Sheng Yu},
year={2022},
eprint={2202.13876},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 925 | [
[
0.00852203369140625,
-0.00377655029296875,
0.046722412109375,
0.0196685791015625,
-0.043212890625,
-0.0059356689453125,
-0.003795623779296875,
-0.00505828857421875,
0.0273590087890625,
0.032867431640625,
-0.045166015625,
-0.06610107421875,
-0.032684326171875,
... |
bigbio/scielo | 2022-12-22T15:46:40.000Z | [
"multilinguality:multilingual",
"language:en",
"language:es",
"language:pt",
"license:cc-by-4.0",
"region:us"
] | bigbio | A parallel corpus of full-text scientific articles collected from the SciELO database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm. | @inproceedings{soares2018large,
title = {A Large Parallel Corpus of Full-Text Scientific Articles},
author = {Soares, Felipe and Moreira, Viviane and Becker, Karin},
year = 2018,
booktitle = {
Proceedings of the Eleventh International Conference on Language Resources
and Evaluation (LREC-2018)
}
} | 1 | 8 | 2022-11-13T22:12:07 |
---
language:
- en
- es
- pt
bigbio_language:
- English
- Spanish
- Portuguese
license: cc-by-4.0
multilinguality: multilingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: SciELO
homepage: https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TRANSLATION
---
# Dataset Card for SciELO
## Dataset Description
- **Homepage:** https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB
- **Pubmed:** False
- **Public:** True
- **Tasks:** TRANSL
A parallel corpus of full-text scientific articles collected from the SciELO database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm.
## Citation Information
```
@inproceedings{soares2018large,
title = {A Large Parallel Corpus of Full-Text Scientific Articles},
author = {Soares, Felipe and Moreira, Viviane and Becker, Karin},
year = 2018,
booktitle = {
Proceedings of the Eleventh International Conference on Language Resources
and Evaluation (LREC-2018)
}
}
```
| 1,236 | [
[
0.011932373046875,
-0.012451171875,
0.033416748046875,
0.053497314453125,
-0.02484130859375,
0.014556884765625,
-0.0206756591796875,
-0.03558349609375,
0.0352783203125,
0.02813720703125,
-0.03216552734375,
-0.056915283203125,
-0.03857421875,
0.04690551757812... |
bigbio/scifact | 2022-12-22T15:46:44.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-2.0",
"region:us"
] | bigbio | SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales. This config connects the claims to the evidence and doc ids. | @article{wadden2020fact,
author = {David Wadden and Shanchuan Lin and Kyle Lo and Lucy Lu Wang and Madeleine van Zuylen and Arman Cohan and Hannaneh Hajishirzi},
title = {Fact or Fiction: Verifying Scientific Claims},
year = {2020},
address = {Online},
publisher = {Association for Computational Linguistics},
url = {https://aclanthology.org/2020.emnlp-main.609},
doi = {10.18653/v1/2020.emnlp-main.609},
pages = {7534--7550},
biburl = {},
bibsource = {}
} | 0 | 8 | 2022-11-13T22:12:10 |
---
language:
- en
bigbio_language:
- English
license: cc-by-nc-2.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_NC_2p0
pretty_name: SciFact
homepage: https://scifact.apps.allenai.org/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TEXT_PAIRS_CLASSIFICATION
---
# Dataset Card for SciFact
## Dataset Description
- **Homepage:** https://scifact.apps.allenai.org/
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXT2CLASS
### Scifact Corpus Source
SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales.
This config has abstracts and document ids.
### Scifact Claims Source
SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales. This config connects the claims to the evidence and doc ids.
### Scifact Rationale Bigbio Pairs
SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales. This task is the following: given a claim and a text span composed of one or more sentences from an abstract, predict a label from ("rationale", "not_rationale") indicating if the span is evidence (can be supporting or refuting) for the claim. This roughly corresponds to the second task outlined in Section 5 of the paper.
### Scifact Labelprediction Bigbio Pairs
SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales. This task is the following: given a claim and a text span composed of one or more sentences from an abstract, predict a label from ("SUPPORT", "NOINFO", "CONTRADICT") indicating if the span supports, provides no info, or contradicts the claim. This roughly corresponds to the third task outlined in Section 5 of the paper.
## Citation Information
```
@article{wadden2020fact,
author = {David Wadden and Shanchuan Lin and Kyle Lo and Lucy Lu Wang and Madeleine van Zuylen and Arman Cohan and Hannaneh Hajishirzi},
title = {Fact or Fiction: Verifying Scientific Claims},
year = {2020},
address = {Online},
publisher = {Association for Computational Linguistics},
url = {https://aclanthology.org/2020.emnlp-main.609},
doi = {10.18653/v1/2020.emnlp-main.609},
pages = {7534--7550},
biburl = {},
bibsource = {}
}
```
| 2,148 | [
[
-0.0013713836669921875,
-0.035736083984375,
0.03533935546875,
0.0273590087890625,
0.008392333984375,
-0.00913238525390625,
-0.0059967041015625,
-0.046173095703125,
0.0244903564453125,
0.007381439208984375,
-0.029754638671875,
-0.024169921875,
-0.0562744140625,
... |
NeelNanda/c4-tokenized-2b | 2022-11-14T00:26:59.000Z | [
"region:us"
] | NeelNanda | null | null | 0 | 8 | 2022-11-14T00:15:38 | ---
dataset_info:
features:
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 11145289620
num_examples: 1359845
download_size: 2530851147
dataset_size: 11145289620
---
# Dataset Card for "c4-tokenized-2b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 375 | [
[
-0.025848388671875,
-0.0186004638671875,
0.007843017578125,
0.029510498046875,
-0.024627685546875,
0.0180511474609375,
0.0189208984375,
-0.031890869140625,
0.057098388671875,
0.029937744140625,
-0.047607421875,
-0.052978515625,
-0.04681396484375,
-0.01041412... |
lm4pt/bpsad | 2022-11-23T19:20:11.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:sentiment-analysis",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:pt",
"license:unknown",
    ... | lm4pt | The Brazilian Portuguese Sentiment Analysis Dataset (BPSAD) is composed
of the concatenation of 5 different sources (Olist, B2W Digital, Buscapé,
UTLC-Apps and UTLC-Movies), each of which consists of review sentences
classified according to polarity (0: negative; 1: positive) and rating
(1, 2, 3, 4 and 5 stars). | @inproceedings{souza2021sentiment,
author={
Souza, Frederico Dias and
Baptista de Oliveira e Souza Filho, João},
booktitle={
2021 IEEE Latin American Conference on
Computational Intelligence (LA-CCI)},
title={
Sentiment Analysis on Brazilian Portuguese User Reviews},
year={2021},
pages={1-6},
doi={10.1109/LA-CCI48322.2021.9769838}
} | 3 | 8 | 2022-11-21T15:37:12 | ---
annotations_creators: []
language:
- pt
language_creators:
- other
license:
- unknown
multilinguality:
- monolingual
pretty_name: bpsad
size_categories:
- 1M<n<10M
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
- sentiment-scoring
- sentiment-analysis
---
# Dataset Card for Brazilian Portuguese Sentiment Analysis Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Kaggle Dataset](https://www.kaggle.com/datasets/fredericods/ptbr-sentiment-analysis-datasets)
- **Paper:** [Sentiment Analysis on Brazilian Portuguese User Reviews](https://ieeexplore.ieee.org/abstract/document/9769838)
- **Point of Contact:** [Frederico Dias Souza](fredericods@poli.ufrj.br)
### Dataset Summary
**Disclaimer:** *The team releasing the dataset did not write a dataset card
for this dataset so this dataset card has been written by the contributors.*
The Brazilian Portuguese Sentiment Analysis Dataset (BPSAD) is composed
of the concatenation of 5 different sources (Olist, B2W Digital, Buscapé,
UTLC-Apps and UTLC-Movies), each of which consists of review sentences
classified according to polarity (0: negative; 1: positive) and rating
(1, 2, 3, 4 and 5 stars).
This dataset requires manual download:
1. Download the `concatenated` file from dataset homepage.
2. Extract the file inside `<path/to/manual/data>`.
3. Load the dataset using the command:
```python
datasets.load_dataset(
path="lm4pt/bpsad",
name='<polarity|rating>',
data_dir='<path/to/manual/data>')
```
A detailed description about the dataset and the processing steps can be
found at the [dataset homepage](https://www.kaggle.com/datasets/fredericods/ptbr-sentiment-analysis-datasets).
### Supported Tasks and Leaderboards
The dataset contains two configurations that represents the possible tasks
related to sentiment analysis. The polarity classification is a binary
classification problem where the sentences must be classified as positive (1)
or negative (0) reviews. The rating prediction is a multiclass problem
with values ranging from 1 to 5 stars.
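Although the card does not document it, the relation between the two configurations can be sketched as a simple mapping. The mapping below is an assumption (suggested by the polarity split being smaller than the rating split): 1-2 star reviews become negative, 4-5 star reviews become positive, and neutral 3-star reviews are dropped.

```python
def rating_to_polarity(rating: int):
    """Map a 1-5 star rating to a binary polarity label.

    NOTE: this mapping (1-2 -> negative, 4-5 -> positive, 3 dropped)
    is an assumption about how the polarity labels were derived; it is
    not stated in this card.
    """
    if rating <= 2:
        return 0      # negative
    if rating >= 4:
        return 1      # positive
    return None       # neutral 3-star reviews are excluded
```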
### Languages
The texts are in Brazilian Portuguese, as written by users of different e-commerce platforms and the Filmow social network.
## Dataset Structure
### Data Instances
#### polarity
```
{
"review_text": "Bem macio e felpudo...recomendo. Preço imbatível e entrega rápida. Compraria outro quando precisar",
"polarity": 1
}
```
#### rating
```
{
"review_text": "Bem macio e felpudo...recomendo. Preço imbatível e entrega rápida. Compraria outro quando precisar",
"rating": 4
}
```
### Data Fields
#### polarity
- `review_text`: a `string` feature with product or movie review.
- `polarity`: an `integer` value that represents positive (1) or negative (0) reviews.
#### rating
- `review_text`: a `string` feature with product or movie review.
- `rating`: an `integer` value that represents the number of stars given by the reviewer. Possible values are 1, 2, 3, 4 and 5.
### Data Splits
Data splits are created based on the original `kfold` column of each configuration, following the original authors' recommendations:
- train: folds 1 to 8
- validation: fold 9
- test: fold 10
| | train | validation | test |
|----------|--------:|-----------:|-------:|
| polarity | 1908937 | 238614 | 238613 |
| rating | 2228877 | 278608 | 278607 |
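The fold-based splits above can be reproduced with a short sketch. The toy rows below stand in for the real data; the only assumption is that each row carries a `kfold` column with values 1-10, as described above.

```python
# Toy rows standing in for the original data; the real files expose a
# `kfold` column with values 1..10 (per the split description above).
rows = [{"review_text": f"review {i}", "kfold": i} for i in range(1, 11)]

train = [r for r in rows if r["kfold"] <= 8]        # folds 1-8
validation = [r for r in rows if r["kfold"] == 9]   # fold 9
test = [r for r in rows if r["kfold"] == 10]        # fold 10
```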
More information about sentence size and label distribution can be found in the [dataset homepage](https://www.kaggle.com/datasets/fredericods/ptbr-sentiment-analysis-datasets).
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{souza2021sentiment,
author={
Souza, Frederico Dias and
Baptista de Oliveira e Souza Filho, João},
booktitle={
2021 IEEE Latin American Conference on
Computational Intelligence (LA-CCI)},
title={
Sentiment Analysis on Brazilian Portuguese User Reviews},
year={2021},
pages={1-6},
doi={10.1109/LA-CCI48322.2021.9769838}
}
```
### Contributions
Thanks to [@guilhermelmello](https://huggingface.co/guilhermelmello) and [@DominguesPH](https://huggingface.co/DominguesPH) for adding this dataset. | 6,050 | [
[
-0.0531005859375,
-0.0433349609375,
0.0018529891967773438,
0.04461669921875,
-0.0498046875,
0.003993988037109375,
-0.0261993408203125,
-0.0183258056640625,
0.0325927734375,
0.038970947265625,
-0.04034423828125,
-0.07586669921875,
-0.05474853515625,
0.0183105... |
deutsche-telekom/ger-backtrans-paraphrase | 2023-06-12T17:46:57.000Z | [
"task_categories:sentence-similarity",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:de",
"license:cc-by-sa-4.0",
"arxiv:1907.05791",
"arxiv:2004.09813",
"region:us"
] | deutsche-telekom | null | null | 7 | 8 | 2022-11-21T19:24:43 | ---
license:
- cc-by-sa-4.0
language:
- de
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
task_categories:
- sentence-similarity
---
# German Backtranslated Paraphrase Dataset
This is a dataset of more than 21 million German paraphrases.
These are text pairs that have the same meaning but are expressed with different words.
The sources of the paraphrases are several parallel German/English text corpora.
The English texts were machine-translated back into German to obtain the paraphrases.
This dataset can be used for example to train semantic text embeddings.
To do this, for example, [SentenceTransformers](https://www.sbert.net/)
and the [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss)
can be used.
## Maintainers
[](https://www.welove.ai/)
This dataset is open sourced by [Philip May](https://may.la/)
and maintained by the [One Conversation](https://www.welove.ai/)
team of [Deutsche Telekom AG](https://www.telekom.com/).
## Our pre-processing
Apart from the back translation, we have added more columns (for details see below). We have carried out the following pre-processing and filtering:
- We dropped text pairs where one text was longer than 499 characters.
- In the [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) texts we have removed the `" · Global Voices"` suffix.
## Your post-processing
You probably don't want to use the dataset as it is, but filter it further.
This is what the additional columns of the dataset are for.
For us it has proven useful to delete the following pairs of sentences:
- `min_char_len` less than 15
- `jaccard_similarity` greater than 0.3
- `de_token_count` greater than 30
- `en_de_token_count` greater than 30
- `cos_sim` less than 0.85
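As a sketch, this filtering can be applied with pandas. The toy frame below stands in for the real data; the thresholds follow the recommendations above.

```python
import pandas as pd

# Toy frame with the relevant columns; in practice, load the full dataset
# instead (the real data has many more columns and rows).
df = pd.DataFrame({
    "min_char_len": [20, 10, 30],
    "jaccard_similarity": [0.1, 0.5, 0.2],
    "de_token_count": [12, 8, 40],
    "en_de_token_count": [14, 9, 35],
    "cos_sim": [0.9, 0.99, 0.95],
})

# Keep only pairs that pass all recommended thresholds.
keep = (
    (df["min_char_len"] >= 15)
    & (df["jaccard_similarity"] <= 0.3)
    & (df["de_token_count"] <= 30)
    & (df["en_de_token_count"] <= 30)
    & (df["cos_sim"] >= 0.85)
)
df_filtered = df[keep]
```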
## Columns description
- **`uuid`**: a uuid calculated with Python `uuid.uuid4()`
- **`en`**: the original English texts from the corpus
- **`de`**: the original German texts from the corpus
- **`en_de`**: the German texts translated back from English (from `en`)
- **`corpus`**: the name of the corpus
- **`min_char_len`**: the number of characters of the shortest text
- **`jaccard_similarity`**: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences - see below for more details
- **`de_token_count`**: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`en_de_token_count`**: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`cos_sim`**: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
## Anomalies in the texts
It is noticeable that the [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) texts have stray dash prefixes. For example:
```
- Hast du was draufgetan?
```
To remove them you could apply this function:
```python
import re
def clean_text(text):
    text = re.sub(r"^[-\s]*", "", text)  # raw strings avoid invalid-escape warnings
    text = re.sub(r"[-\s]*$", "", text)
return text
df["de"] = df["de"].apply(clean_text)
df["en_de"] = df["en_de"].apply(clean_text)
```
## Parallel text corpora used
| Corpus name & link | Number of paraphrases |
|-----------------------------------------------------------------------|----------------------:|
| [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) | 18,764,810 |
| [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix-v1.php) | 1,569,231 |
| [Tatoeba v2022-03-03](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php) | 313,105 |
| [TED2020 v1](https://opus.nlpl.eu/TED2020-v1.php) | 289,374 |
| [News-Commentary v16](https://opus.nlpl.eu/News-Commentary-v16.php) | 285,722 |
| [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) | 70,547 |
| **sum**                                                               | **21,292,789** |
## Back translation
We have made the back translation from English to German with the help of [Fairseq](https://github.com/facebookresearch/fairseq).
We used the `transformer.wmt19.en-de` model for this purpose:
```python
en2de = torch.hub.load(
"pytorch/fairseq",
"transformer.wmt19.en-de",
checkpoint_file="model1.pt:model2.pt:model3.pt:model4.pt",
tokenizer="moses",
bpe="fastbpe",
)
```
## How the Jaccard similarity was calculated
To calculate the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index)
we are using the [SoMaJo tokenizer](https://github.com/tsproisl/SoMaJo)
to split the texts into tokens.
We then `lower()` the tokens so that upper and lower case letters no longer make a difference. Below you can find a code snippet with the details:
```python
from somajo import SoMaJo
LANGUAGE = "de_CMC"
somajo_tokenizer = SoMaJo(LANGUAGE)
def get_token_set(text, somajo_tokenizer):
sentences = somajo_tokenizer.tokenize_text([text])
tokens = [t.text.lower() for sentence in sentences for t in sentence]
token_set = set(tokens)
return token_set
def jaccard_similarity(text1, text2, somajo_tokenizer):
token_set1 = get_token_set(text1, somajo_tokenizer=somajo_tokenizer)
token_set2 = get_token_set(text2, somajo_tokenizer=somajo_tokenizer)
intersection = token_set1.intersection(token_set2)
union = token_set1.union(token_set2)
jaccard_similarity = float(len(intersection)) / len(union)
return jaccard_similarity
```
## Load this dataset
### With Hugging Face Datasets
```python
# pip install datasets
from datasets import load_dataset
dataset = load_dataset("deutsche-telekom/ger-backtrans-paraphrase")
train_dataset = dataset["train"]
```
### With Pandas
If you want to download the csv file and then load it with Pandas you can do it like this:
```python
df = pd.read_csv("train.csv")
```
## Citations, Acknowledgements and Licenses
**OpenSubtitles**
- citation: P. Lison and J. Tiedemann, 2016, [OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles](http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf). In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
- also see http://www.opensubtitles.org/
- license: no special license has been provided at OPUS for this dataset
**WikiMatrix v1**
- citation: Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, [WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia](https://arxiv.org/abs/1907.05791), arXiv, July 11 2019
- license: [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
**Tatoeba v2022-03-03**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: [CC BY 2.0 FR](https://creativecommons.org/licenses/by/2.0/fr/)
- copyright: https://tatoeba.org/eng/terms_of_use
**TED2020 v1**
- citation: Reimers, Nils and Gurevych, Iryna, [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, November 2020
- acknowledgements to [OPUS](https://opus.nlpl.eu/) for this service
- license: please respect the [TED Talks Usage Policy](https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy)
**News-Commentary v16**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset
**GlobalVoices v2018q4**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset
## Citation
```latex
@misc{ger-backtrans-paraphrase,
title={Deutsche-Telekom/ger-backtrans-paraphrase - dataset at Hugging Face},
url={https://huggingface.co/datasets/deutsche-telekom/ger-backtrans-paraphrase},
year={2022},
author={May, Philip}
}
```
## Licensing
Copyright (c) 2022 [Philip May](https://may.la/),
[Deutsche Telekom AG](https://www.telekom.com/)
This work is licensed under [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
| 9,064 | [
[
-0.035797119140625,
-0.055419921875,
0.0343017578125,
0.0193328857421875,
-0.03057861328125,
-0.01922607421875,
-0.032318115234375,
-0.005542755126953125,
0.0218353271484375,
0.041229248046875,
-0.04022216796875,
-0.051849365234375,
-0.040008544921875,
0.031... |
kasnerz/scigen | 2023-03-14T15:07:29.000Z | [
"region:us"
] | kasnerz | null | null | 0 | 8 | 2022-11-28T10:47:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cjvt/cc_gigafida | 2023-01-17T13:11:14.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100M<n<1B",
"language:sl... | cjvt | The ccGigafida corpus contains a subsample of the Gigafida corpus. The Gigafida corpus is an extensive collection of
Slovene text of various genres, from daily newspapers, magazines, all kinds of books (fiction, non-fiction, textbooks),
web pages, transcriptions of parliamentary debates and similar. | @misc{ccGigafida,
title = {Written corpus {ccGigafida} 1.0},
author = {Logar, Nata{\v s}a and Erjavec, Toma{\v z} and Krek, Simon and Gr{\v c}ar, Miha and Holozan, Peter},
url = {http://hdl.handle.net/11356/1035},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{NonCommercial}-{ShareAlike} 4.0 International ({CC} {BY}-{NC}-{SA} 4.0)},
issn = {2820-4042},
year = {2013}
} | 0 | 8 | 2022-11-29T15:03:45 | ---
annotations_creators:
- no-annotation
language:
- sl
language_creators:
- found
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: Written corpus ccGigafida 1.0
size_categories:
- 10K<n<100K
- 100M<n<1B
source_datasets: []
tags:
- gigafida
- gigafida2
- kres
- cckres
- reference corpus
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
---
# Dataset Card for ccGigafida
This repository by default loads the publicly available dataset ccGigafida, which contains a small subset of the Gigafida/Gigafida2 corpus.
The full datasets are private due to copyright. **If you happen to have access to the full datasets, the script will also work with those.**
Instead of
```
datasets.load_dataset("cjvt/cc_gigafida")
```
please use
```
datasets.load_dataset("cjvt/cc_gigafida", "private", data_dir="<directory-containing-gigafida(2)-TEI-files>")
```
**IMPORTANT:** The script will process all `.xml` files in the provided directory and its subdirectories - make sure there are no schema or metadata files in there!
### Dataset Summary
ccGigafida is a reference corpus of Slovene texts. It is a publicly available subsample of an even larger reference corpus, Gigafida (and its successor Gigafida 2).
The Gigafida corpus is an extensive collection of Slovene text of various genres, from daily newspapers, magazines, all kinds of books (fiction, non-fiction, textbooks),
web pages, transcriptions of parliamentary debates and similar.
### Supported Tasks and Leaderboards
Language modeling.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
The data is loaded at document-level, i.e. one instance is one document.
```
{
'id_doc': 'F0000123',
'doc_title': 'Novi tednik NT&RC',
'authors': ['neznani novinar'],
'publish_date': '1998-03-27',
'publisher': 'Novi tednik',
'genres': ['tisk/periodično/časopis'],
'doc_tokenized': [
[
['Po', 'nekajletnem', 'počitku', 'pa', 'se', 'vračajo', 'tudi', 'kralji', 'dark', 'rock', 'godbe', 'JESUS', 'AND', 'THE', 'MARY', 'CHAIN', '.'],
['Brata', 'Reid', 'bosta', 'svojo', 'najnovejšo', 'kreacijo', '»', 'Cracking', 'Up', '«', 'objavila', 'v', 'ponedeljek', 'pri', 'trenutno', 'najuspešnejši', 'neodvisni', 'založbi', 'Creation', '(', 'vodi', 'jo', 'njun', 'nekdanji', 'menager', 'Alan', 'McGee', ',', 'zanjo', 'pa', 'poleg', 'Oasis', 'snema', 'še', 'cel', 'kup', 'popularnih', 'brit', '-', 'popovcev', ')', ',', 'tej', 'pa', 'bo', 'kmalu', 'sledil', 'tudi', 'album', '»', 'Munki', '«', '.']
],
[
['Kultni', 'ameriški', 'tehno', 'freak', 'PLASTIKMAN', 'že', 'vrsto', 'let', 'velja', 'za', 'enega', 'izmed', 'najbolj', 'inovativnih', 'in', 'produktivnih', 'ustvarjalcev', 'sodobne', 'elektronske', 'glasbe', '.'],
['Za', 'založbo', 'Nova', 'Mute', 'je', 'v', 'preteklih', 'nekaj', 'letih', 'posnel', 'cel', 'kup', 'izvrstnih', 'underground', 'dance', 'glasbenih', 'izdelkov', ',', 'pred', 'nedavnim', 'pa', 'je', 'ljubitelje', 'tovrstne', 'godbe', 'presenetil', 'z', 'ambientalnimi', 'odisejadami', ',', 'zbranimi', 'na', 'LP-ju', '»', 'Refused', '«', ',', 'ki', 'ga', 'lahko', 'od', 'prejšnjega', 'ponedeljka', 'kupite', 'tudi', 'v', 'bolje', 'založenih', 'trgovinah', 'z', 'nosilci', 'zvoka', 'na', 'sončni', 'strani', 'Alp', '.']
],
[
['STANE', 'ŠPEGEL']
]
],
'doc_lemmas': [...],
'doc_msds': [...],
'doc_string': [
[
'Po nekajletnem počitku pa se vračajo tudi kralji dark rock godbe JESUS AND THE MARY CHAIN. ',
'Brata Reid bosta svojo najnovejšo kreacijo »Cracking Up« objavila v ponedeljek pri trenutno najuspešnejši neodvisni založbi Creation (vodi jo njun nekdanji menager Alan McGee, zanjo pa poleg Oasis snema še cel kup popularnih brit-popovcev), tej pa bo kmalu sledil tudi album »Munki«.'
],
[
'Kultni ameriški tehno freak PLASTIKMAN že vrsto let velja za enega izmed najbolj inovativnih in produktivnih ustvarjalcev sodobne elektronske glasbe. ',
'Za založbo Nova Mute je v preteklih nekaj letih posnel cel kup izvrstnih underground dance glasbenih izdelkov, pred nedavnim pa je ljubitelje tovrstne godbe presenetil z ambientalnimi odisejadami, zbranimi na LP-ju »Refused«, ki ga lahko od prejšnjega ponedeljka kupite tudi v bolje založenih trgovinah z nosilci zvoka na sončni strani Alp.'
],
[
'STANE ŠPEGEL'
]
],
'id_sents': [['F0000123.000005.0', 'F0000123.000005.1'], ['F0000123.000013.0', 'F0000123.000013.1'], ['F0000123.000020.0']]
}
```
### Data Fields
- `id_doc`: the document ID (string);
- `doc_title`: the document title (string);
- `authors`: author(s) of the document (list of string); "neznani novinar" (sl) means "unknown/unspecified journalist";
- `publish_date`: publish date (string);
- `publisher`: publisher, e.g., the name of a news agency (string);
- `genres`: genre(s) of the document (list of string) - possible genres: `['tisk', 'tisk/knjižno', 'tisk/knjižno/leposlovno', 'tisk/knjižno/strokovno', 'tisk/periodično', 'tisk/periodično/časopis', 'tisk/periodično/revija', 'tisk/drugo', 'internet']`;
- `doc_tokenized`: tokenized document - the top level lists represent paragraphs, the lists in the level deeper represent sentences, and each sentence contains tokens;
- `doc_lemmas`: lemmatized document - same structure as `doc_tokenized`;
- `doc_msds`: MSD tags of the document - same structure as `doc_tokenized` ([tagset](http://nl.ijs.si/ME/V6/msd/html/msd-sl.html));
- `doc_string`: same as `doc_tokenized` but with properly placed spaces in sentences;
- `id_sents`: IDs of sentences contained inside paragraphs of the document.
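As an illustration, the plain text of a document can be reconstructed from `doc_string` by concatenating the sentences within each paragraph and joining paragraphs. This is a minimal sketch; separating paragraphs with a newline is an assumption, not something the card prescribes.

```python
def document_text(doc_string):
    """Join the nested doc_string structure (paragraphs -> sentences)
    into a single plain-text string. Sentences already carry their own
    trailing spaces, so they are concatenated directly; paragraphs are
    joined with a newline (an assumption for display purposes)."""
    return "\n".join("".join(sentences) for sentences in doc_string)

example = [
    ["Prvi stavek. ", "Drugi stavek."],
    ["Nov odstavek."],
]
print(document_text(example))
```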
## Dataset Creation
Gigafida consists of texts which were published between 1990 and 2011. The texts come from printed sources and from the web.
Printed part contains fiction, non-fiction and textbooks, and periodicals such as daily newspapers and magazines.
Texts originating from the web were published on news portals, pages of big Slovene companies and more important governmental,
educational, research, cultural and similar institutions.
For more information, please check http://eng.slovenscina.eu/korpusi/gigafida.
## Additional Information
### Dataset Curators
Nataša Logar; et al. (please see http://hdl.handle.net/11356/1035 for the full list)
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{ccGigafida,
title = {Written corpus {ccGigafida} 1.0},
author = {Logar, Nata{\v s}a and Erjavec, Toma{\v z} and Krek, Simon and Gr{\v c}ar, Miha and Holozan, Peter},
url = {http://hdl.handle.net/11356/1035},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{NonCommercial}-{ShareAlike} 4.0 International ({CC} {BY}-{NC}-{SA} 4.0)},
issn = {2820-4042},
year = {2013}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| 6,963 | [
[
-0.0355224609375,
-0.03839111328125,
0.021453857421875,
0.0160980224609375,
-0.030517578125,
-0.00023162364959716797,
-0.0131378173828125,
-0.0086669921875,
0.04449462890625,
0.03851318359375,
-0.0467529296875,
-0.0655517578125,
-0.045196533203125,
0.0200195... |
wanghaofan/pokemon-wiki-captions | 2022-12-09T12:50:49.000Z | [
"region:us"
] | wanghaofan | null | null | 5 | 8 | 2022-12-09T11:13:28 | ---
dataset_info:
features:
- name: image
dtype: image
- name: name_en
dtype: string
- name: name_zh
dtype: string
- name: text_en
dtype: string
- name: text_zh
dtype: string
splits:
- name: train
num_bytes: 117645424.0
num_examples: 898
download_size: 117512478
dataset_size: 117645424.0
---
# Dataset Card for Pokémon wiki captions
This project is inspired by [pokemon-blip-captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions), where the captions are all generated by pre-trained BLIP without any manual effort.
However, the quality and accuracy of those captions are not satisfactory, which leaves it unknown whether better captions lead to better results. This motivates our dataset.
# Example

> General attribute, looks like a little monkey, body color is composed of purple and beige, the end of the tail is like a hand

> Poisonous attributes, it looks like a huge purple cobra, with black stripes on its body, small head, and triangular eyes
# Properties
All 898 images are from [The Complete Pokemon Images Data Set](https://www.kaggle.com/datasets/arenagrenade/the-complete-pokemon-images-data-set?resource=download) in Kaggle with size 475x475. Each image is accompanied by the corresponding
Pokémon name and a detailed description from [Pokemon Wiki](https://wiki.52poke.com/wiki/%E4%B8%BB%E9%A1%B5); captions are provided in both English and Chinese. Human effort was also involved in revising the captions.
# How to use
```
from datasets import load_dataset
dataset = load_dataset("wanghaofan/pokemon-wiki-captions")
```
The dataset is formatted as below. For each row the dataset contains `image`, `name_en`, `name_zh`, `text_en` and `text_zh` keys. `image` is a varying size PIL jpeg, `name` is the name of pokemon, and `text` is the accompanying text caption. Only a train split is provided.
```
DatasetDict({
train: Dataset({
features: ['image', 'name_en', 'name_zh', 'text_en', 'text_zh'],
num_rows: 898
})
})
```
# Citation
If you use this dataset in your work, please cite it as:
```
@misc{wanghaofan2022pokemon,
author = {Haofan Wang},
title = {Pokemon wiki captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/wanghaofan/pokemon-wiki-captions/}}
}
```
| 4,063 | [
[
-0.031036376953125,
-0.01030731201171875,
-0.0065765380859375,
0.0121307373046875,
-0.04302978515625,
-0.00591278076171875,
-0.0113372802734375,
-0.03472900390625,
0.06805419921875,
0.029266357421875,
-0.043243408203125,
-0.0296478271484375,
-0.0235443115234375,... |
gagan3012/IAM | 2023-10-13T18:13:25.000Z | [
"region:us"
] | gagan3012 | null | null | 0 | 8 | 2022-12-21T05:12:11 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Noto_Sans_Arabic
'1': Readex_Pro
'2': Amiri
'3': Noto_Kufi_Arabic
'4': Reem_Kufi_Fun
'5': Lateef
'6': Changa
'7': Kufam
'8': ElMessiri
'9': Reem_Kufi
'10': Noto_Naskh_Arabic
'11': Reem_Kufi_Ink
'12': Tajawal
'13': Aref_Ruqaa_Ink
'14': Markazi_Text
'15': IBM_Plex_Sans_Arabic
'16': Vazirmatn
'17': Harmattan
'18': Gulzar
'19': Scheherazade_New
'20': Cairo
'21': Amiri_Quran
'22': Noto_Nastaliq_Urdu
'23': Mada
'24': Aref_Ruqaa
'25': Almarai
'26': Alkalami
'27': Qahiri
- name: text
dtype: string
splits:
- name: train
num_bytes: 563851079.0
num_examples: 11344
download_size: 563727207
dataset_size: 563851079.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "IAM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,289 | [
[
-0.05230712890625,
-0.0189361572265625,
0.008697509765625,
0.0201568603515625,
-0.005123138427734375,
-0.00012242794036865234,
0.0248565673828125,
-0.0182037353515625,
0.06805419921875,
0.034210205078125,
-0.06341552734375,
-0.050201416015625,
-0.054718017578125... |
Linaqruf/pixiv-niji-journey | 2023-01-10T03:32:36.000Z | [
"license:agpl-3.0",
"region:us"
] | Linaqruf | null | null | 41 | 8 | 2022-12-27T14:43:38 | ---
license: agpl-3.0
---
## Description
The Pixiv Niji Journey dataset is a collection of 9766 images with accompanying metadata, scraped from the online art platform Pixiv. The images were collected using the `gallery-dl` Python package, with the search term "nijijourney" on Pixiv. The collection period for the dataset was from November 6, 2022 to December 27, 2022.
The dataset is divided into two variants: `raw` and `preprocessed`. The `raw` variant contains the pure dataset resulting from the scraping of Pixiv, while the `preprocessed` variant contains the same dataset but with additional preprocessing steps applied. These preprocessing steps include converting the images from RGB to RGBA, labeling the dataset with captions using the BLIP tool, and providing Danbooru tags using the wd-v1-4-vit-tagger tool. The `preprocessed` variant has also been carefully cleaned and filtered to remove any low quality or irrelevant images.
The images in the dataset are in JPG and PNG format, and the metadata is provided in JSON format, while the preprocessed metadata is provided in `.txt` and `.caption` format. The metadata includes information about the images such as their captions, tags, and other metadata provided by Pixiv. The structure of the raw and preprocessed variants of the dataset is described in the `File Structure` section below.
The Pixiv Niji Journey dataset is primarily intended for use in machine learning tasks related to image classification and caption generation. It can also be used as a dataset for image generation models such as stable diffusion. However, users should be aware that the dataset may contain biases or limitations, such as the bias of the Pixiv platform or the specific search term used to collect the data.
## File Structure
The structure of the raw files is as follows:
```
nijijourney_pixiv_2022110620221222_raw.zip/
├╴nijijourney/
│ ├╴images.png
│ ├╴images.png.json
│ └╴...
```
while the structure of the preprocessed files is:
```
nijijourney_pixiv_2022110620221222_preprocessed.zip/
├╴dataset/
│ ├╴images.png
│ ├╴images.png.json
│ ├╴images.txt
│ ├╴images.caption
│ └╴...
├╴meta_cap.json
├╴meta_dd.json
├╴meta_clean.json
```
## Usage
- Access: the dataset is available for download from the Hugging Face dataset collection
- Format: the dataset is provided in ZIP format, with images in PNG format and metadata in JSON format
- Requirements: the dataset requires no specific requirements or dependencies for use
## Data Quality
- Number of images: 9766
- Image sizes: vary, but all images are in PNG format
- Class balance: the distribution of classes in the dataset is not known
- Quality: the dataset has been carefully cleaned and filtered to remove low quality or irrelevant images
## Limitations
While the Pixiv Niji Journey dataset has been carefully cleaned and preprocessed to ensure high quality and consistency, it is important to be aware of certain limitations and biases that may be present in the dataset. Some potential limitations of the dataset include:
- Bias of the Pixiv platform: Pixiv is an online art platform that may have its own biases in terms of the content that is available and the users who contribute to it. This could potentially introduce biases into the dataset.
- Search term bias: The dataset was collected using the search term "nijijourney" on Pixiv, which may have introduced biases into the dataset depending on the popularity and prevalence of this term on the platform.
- Limited scope: The dataset only includes images scraped from Pixiv, and therefore may not be representative of a wider range of images or artistic styles.
- Potential errors or inconsistencies in the metadata: While every effort has been made to ensure the accuracy of the metadata, there may be errors or inconsistencies present in the data.
It is important to be aware of these limitations and to consider them when using the Pixiv Niji Journey dataset for research or other purposes.
## License
The Pixiv Niji Journey dataset is made available under the terms of the AGPL-3.0 license. This license is a copyleft license that allows users to freely use, modify, and distribute the dataset, as long as any modified versions are also made available under the same terms.
Under the terms of the AGPL-3.0 license, users are allowed to:
- Use the dataset for any purpose, commercial or non-commercial
- Modify the dataset as needed for their purposes
- Distribute copies of the dataset, either modified or unmodified
However, users must also follow the following conditions:
- Any modified versions of the dataset must be made available under the same AGPL-3.0 license
- If the dataset is used to provide a service to others (such as through a website or API), the source code for the service must be made available to users under the AGPL-3.0 license
It is important to carefully review the terms of the AGPL-3.0 license and ensure that you understand your rights and obligations when using the Pixiv Niji Journey dataset.
## Citation
If you use this dataset in your work, please cite it as follows:
```
@misc{pixiv_niji_journey,
author = {Linaqruf},
title = {Pixiv Niji Journey},
year = {2022},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/Linaqruf/pixiv-niji-journey},
}
```
| 5,338 | [
[ … embedding vector truncated in source … ]
sdadas/8tags | 2022-12-29T11:40:52.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:pl",
"license:cc-by-nc-sa-4.0",
"region:us"
] | sdadas | null | null | 0 | 8 | 2022-12-29T10:19:38 | ---
language:
- pl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- topic-classification
- multi-class-classification
pretty_name: 8TAGS
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: film
1: history
2: food
3: medicine
4: motorization
5: work
6: sport
7: technology
splits:
- name: train
- name: validation
- name: test
---
# 8TAGS
### Dataset Summary
A Polish topic classification dataset consisting of headlines from social media posts. It contains about 50,000 sentences annotated with 8 topic labels: film, history, food, medicine, motorization, work, sport and technology. This dataset was created automatically by extracting sentences from headlines and short descriptions of articles posted on the Polish social networking site **wykop.pl**. The service allows users to annotate articles with one or more tags (categories). The dataset represents a selection of article sentences from 8 popular categories. The resulting corpus contains cleaned, tokenized sentences that are unambiguous (tagged with only one of the selected categories) and longer than 30 characters.
### Data Instances
Example instance:
```
{
"sentence": "Kierowca był nieco zdziwiony że podróżując sporo ponad 200 km / h zatrzymali go policjanci.",
"label": "4"
}
```
### Data Fields
- sentence: sentence text
- label: label identifier corresponding to one of 8 topics
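A minimal sketch of mapping the integer label id back to its topic name, using the class ordering given in this card's metadata:

```python
# Class ordering taken from the dataset metadata above.
TOPICS = ["film", "history", "food", "medicine",
          "motorization", "work", "sport", "technology"]

def decode_label(label):
    """Map an integer (or numeric string) class id to its topic name."""
    return TOPICS[int(label)]
```

For the example instance above, `decode_label("4")` returns `"motorization"`.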
### Citation Information
```
@inproceedings{dadas-etal-2020-evaluation,
title = "Evaluation of Sentence Representations in {P}olish",
author = "Dadas, Slawomir and Pere{\l}kiewicz, Micha{\l} and Po{\'s}wiata, Rafa{\l}",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.207",
pages = "1674--1680",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
| 2,175 | [
[ … embedding vector truncated in source … ]
zpn/zinc20 | 2023-01-06T02:03:46.000Z | [
"size_categories:1B<n<10B",
"license:mit",
"bio",
"selfies",
"smiles",
"small_molecules",
"region:us"
] | zpn | This dataset contains ~1B molecules from ZINC20, with their SMILES and SELFIES representations. | @article{Irwin2020,
doi = {10.1021/acs.jcim.0c00675},
url = {https://doi.org/10.1021/acs.jcim.0c00675},
year = {2020},
month = oct,
publisher = {American Chemical Society ({ACS})},
volume = {60},
number = {12},
pages = {6065--6073},
author = {John J. Irwin and Khanh G. Tang and Jennifer Young and Chinzorig Dandarchuluun and Benjamin R. Wong and Munkhzul Khurelbaatar and Yurii S. Moroz and John Mayfield and Roger A. Sayle},
title = {{ZINC}20{\textemdash}A Free Ultralarge-Scale Chemical Database for Ligand Discovery},
journal = {Journal of Chemical Information and Modeling}
} | 4 | 8 | 2023-01-04T17:32:47 | ---
license: mit
dataset_info:
features:
- name: selfies
dtype: string
- name: smiles
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 238295712864
num_examples: 804925861
- name: validation
num_bytes: 26983481360
num_examples: 100642661
- name: test
num_bytes: 29158755632
num_examples: 101082073
download_size: 40061255073
dataset_size: 294437949856
tags:
- bio
- selfies
- smiles
- small_molecules
pretty_name: zinc20
size_categories:
- 1B<n<10B
---
# Dataset Card for Zinc20
## Dataset Description
- **Homepage:** https://zinc20.docking.org/
- **Paper:** https://pubs.acs.org/doi/10.1021/acs.jcim.0c00675
### Dataset Summary
ZINC is a publicly available database that aggregates commercially available and annotated compounds.
ZINC provides downloadable 2D and 3D versions as well as a website that enables rapid molecule lookup and analog search.
ZINC has grown from fewer than 1 million compounds in 2005 to nearly 2 billion now.
This dataset includes ~1B molecules in total. We have filtered out any compounds that could not be converted from `smiles` to `selfies` representations.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
The dataset is split into train/validation/test with an 80/10/10 random split across files (which roughly corresponds to the same percentages of molecules).
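A sketch of how such a file-level 80/10/10 split could be produced (the seed and filenames are illustrative; the published split is authoritative):

```python
import random

def split_files(filenames, seed=0):
    """Shuffle the file list deterministically, then slice it 80/10/10."""
    files = sorted(filenames)
    random.Random(seed).shuffle(files)
    n = len(files)
    n_train = int(n * 0.8)
    n_valid = int(n * 0.1)
    return {
        "train": files[:n_train],
        "validation": files[n_train:n_train + n_valid],
        "test": files[n_train + n_valid:],
    }
```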
### Source Data
#### Initial Data Collection and Normalization
Initial data was released at https://zinc20.docking.org/. We downloaded it, added a `selfies` field, and filtered out all molecules that could not be converted to `selfies` representations.
### Citation Information
@article{Irwin2020,
doi = {10.1021/acs.jcim.0c00675},
url = {https://doi.org/10.1021/acs.jcim.0c00675},
year = {2020},
month = oct,
publisher = {American Chemical Society ({ACS})},
volume = {60},
number = {12},
pages = {6065--6073},
author = {John J. Irwin and Khanh G. Tang and Jennifer Young and Chinzorig Dandarchuluun and Benjamin R. Wong and Munkhzul Khurelbaatar and Yurii S. Moroz and John Mayfield and Roger A. Sayle},
title = {{ZINC}20{\textemdash}A Free Ultralarge-Scale Chemical Database for Ligand Discovery},
journal = {Journal of Chemical Information and Modeling}
}
### Contributions
This dataset was curated and added by [@zanussbaum](https://github.com/zanussbaum).
| 2,575 | [
[ … embedding vector truncated in source … ]
metaeval/sts-companion | 2023-02-03T08:36:00.000Z | [
"task_categories:sentence-similarity",
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"sts",
"region:us"
] | metaeval | null | null | 2 | 8 | 2023-01-23T13:34:56 | ---
license: apache-2.0
task_categories:
- sentence-similarity
- text-classification
language:
- en
tags:
- sts
---
https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark
The companion datasets to the STS Benchmark comprise the rest of the English datasets used in the STS tasks organized by us in the context of SemEval between 2012 and 2017.
The authors collated two datasets: one with sentence pairs related to machine translation evaluation, and another with the remaining datasets, which can be used for domain adaptation studies.
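STS data files are conventionally distributed as tab-separated records carrying a 0–5 similarity score; a parsing sketch under that assumption (the companion files may include extra leading columns such as genre and year):

```python
def parse_sts_line(line):
    """Parse one 'score<TAB>sentence1<TAB>sentence2' record into a dict."""
    score, sent1, sent2 = line.rstrip("\n").split("\t")
    return {"score": float(score), "sentence1": sent1, "sentence2": sent2}
```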
```bib
@inproceedings{cer-etal-2017-semeval,
title = "{S}em{E}val-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation",
author = "Cer, Daniel and
Diab, Mona and
Agirre, Eneko and
Lopez-Gazpio, I{\~n}igo and
Specia, Lucia",
booktitle = "Proceedings of the 11th International Workshop on Semantic Evaluation ({S}em{E}val-2017)",
month = aug,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S17-2001",
doi = "10.18653/v1/S17-2001",
pages = "1--14",
}
``` | 1,181 | [
[ … embedding vector truncated in source … ]
GBaker/MedQA-USMLE-4-options-hf | 2023-01-30T22:57:33.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | GBaker | null | null | 3 | 8 | 2023-01-24T20:32:54 | ---
license: cc-by-sa-4.0
---
Original dataset introduced by Jin et al. in [What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams](https://paperswithcode.com/paper/what-disease-does-this-patient-have-a-large)
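Each item pairs a question with four answer options and a single gold key; a small scoring sketch under that assumption (the field layout is illustrative, not taken from the dataset files):

```python
def accuracy(predictions, answers):
    """Fraction of items where the predicted option letter matches the gold key."""
    assert len(predictions) == len(answers)
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)
```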
<h4>Citation information:</h4>
@article{jin2020disease,
title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={arXiv preprint arXiv:2009.13081},
year={2020}
} | 640 | [
[ … embedding vector truncated in source … ]
gfhayworth/wiki_mini_embed | 2023-01-28T23:40:40.000Z | [
"region:us"
] | gfhayworth | null | null | 0 | 8 | 2023-01-28T21:22:26 | This dataset is built from Simple English Wikipedia, which has only about 170k articles, split into paragraph-level passages.
The source dump is `simplewiki-2020-11-01.jsonl.gz`; if it is not already present locally, it is fetched with `util.http_get('http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz', wikipedia_filepath)`.
Each passage is embedded into a vector using `SentenceTransformer('multi-qa-MiniLM-L6-cos-v1')`. | 368 | [
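The paragraph splitting mentioned above can be sketched as follows (a minimal version; the original preprocessing may apply additional filters, e.g. on passage length):

```python
def split_into_passages(article_text):
    """Split an article into paragraph passages on blank lines, dropping empties."""
    passages = [p.strip() for p in article_text.split("\n\n")]
    return [p for p in passages if p]
```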
[ … embedding vector truncated in source … ]
relbert/conceptnet | 2023-03-31T10:34:46.000Z | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | relbert | [ConceptNet with high confidence](https://home.ttic.edu/~kgimpel/commonsense.html) | @inproceedings{li-16,
title = {Commonsense Knowledge Base Completion},
author = {Xiang Li and Aynaz Taheri and Lifu Tu and Kevin Gimpel},
booktitle = {Proc. of ACL},
year = {2016}
}
@InProceedings{P16-1137,
author = "Li, Xiang
and Taheri, Aynaz
and Tu, Lifu
and Gimpel, Kevin",
title = "Commonsense Knowledge Base Completion",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ",
year = "2016",
publisher = "Association for Computational Linguistics",
pages = "1445--1455",
location = "Berlin, Germany",
doi = "10.18653/v1/P16-1137",
url = "http://aclweb.org/anthology/P16-1137"
} | 1 | 8 | 2023-01-30T21:16:07 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: relbert/conceptnet
---
# Dataset Card for "relbert/conceptnet"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://home.ttic.edu/~kgimpel/commonsense.html](https://home.ttic.edu/~kgimpel/commonsense.html)
- **Dataset:** High Confidence Subset of ConceptNet for link prediction
### Dataset Summary
The selected subset of ConceptNet used in [this work](https://home.ttic.edu/~kgimpel/commonsense.html).
We removed `NotCapableOf` and `NotDesires` to keep only positive relations.
We use the original test set as the test set, dev1 as the training set, and dev2 as the validation set.
- Number of instances
| | train | validation | test |
|:--------------------------------|--------:|-------------:|-------:|
| number of pairs | 583082 | 1184 | 1187 |
| number of unique relation types | 28 | 20 | 19 |
- Number of pairs in each relation type
| | number of pairs (train) | number of pairs (validation) | number of pairs (test) |
|:-----------------|--------------------------:|-------------------------------:|-------------------------:|
| AtLocation | 69838 | 230 | 250 |
| CapableOf | 71840 | 124 | 144 |
| Causes | 34732 | 52 | 45 |
| CausesDesire | 9616 | 15 | 5 |
| CreatedBy | 534 | 1 | 2 |
| DefinedAs | 11048 | 2 | 1 |
| DesireOf | 28 | 0 | 0 |
| Desires | 8960 | 20 | 8 |
| HasA | 19234 | 43 | 41 |
| HasFirstSubevent | 7350 | 2 | 1 |
| HasLastSubevent | 5916 | 5 | 0 |
| HasPainCharacter | 2 | 0 | 0 |
| HasPainIntensity | 2 | 0 | 0 |
| HasPrerequisite | 47298 | 116 | 109 |
| HasProperty | 36610 | 63 | 70 |
| HasSubevent | 52468 | 82 | 83 |
| InheritsFrom | 112 | 0 | 0 |
| InstanceOf | 138 | 0 | 0 |
| IsA | 71034 | 197 | 211 |
| LocatedNear | 6 | 0 | 0 |
| LocationOfAction | 6 | 0 | 0 |
| MadeOf | 1518 | 10 | 14 |
| MotivatedByGoal | 23668 | 17 | 8 |
| PartOf | 5402 | 19 | 22 |
| ReceivesAction | 20656 | 15 | 11 |
| RelatedTo | 178 | 0 | 1 |
| SymbolOf | 328 | 2 | 0 |
| UsedFor | 84560 | 169 | 161 |
## Dataset Structure
An example of `train` looks as follows.
```shell
{
"relation": "IsA",
"head": "baseball",
"tail": "sport"
}
```
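The relation filtering described above (dropping the negative relations `NotCapableOf` and `NotDesires`) can be sketched as:

```python
NEGATIVE_RELATIONS = {"NotCapableOf", "NotDesires"}

def keep_positive(triples):
    """Keep only triples whose relation is not in the negative set."""
    return [t for t in triples if t["relation"] not in NEGATIVE_RELATIONS]
```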
## Citation Information
```
@InProceedings{P16-1137,
author = "Li, Xiang
and Taheri, Aynaz
and Tu, Lifu
and Gimpel, Kevin",
title = "Commonsense Knowledge Base Completion",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ",
year = "2016",
publisher = "Association for Computational Linguistics",
pages = "1445--1455",
location = "Berlin, Germany",
doi = "10.18653/v1/P16-1137",
url = "http://aclweb.org/anthology/P16-1137"
}
``` | 5,056 | [
[ … embedding vector truncated in source … ]
fabraz/writingPromptAug | 2023-10-14T19:48:35.000Z | [
"task_categories:text-generation",
"language:en",
"license:mit",
"writingPrompt",
"arxiv:1805.04833",
"region:us"
] | fabraz | null | null | 0 | 8 | 2023-02-03T11:53:02 | ---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- writingPrompt
dataset_info:
- config_name: 0.0.1
features:
- name: splitLineIndex
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_examples: 13232
- name: valid
num_examples: 5168
- name: test
num_examples: 5700
---
# Dataset Card for Writing Prompt Augmentation Dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** Writing Prompt Augmentation Dataset was built to expand samples of [FAIR Writing Prompt Dataset](https://www.kaggle.com/datasets/ratthachat/writing-prompts)
- **Leaderboard:**
- **Point of Contact:** [Fabricio Braz](mailto:fabraz@unb.br)
### Dataset Summary
Writing Prompt Augmentation Dataset was built to expand samples of [FAIR Writing Prompt Dataset](https://www.kaggle.com/datasets/ratthachat/writing-prompts), for feeding [Open Assistant](https://github.com/LAION-AI/Open-Assistant).
### Languages
English
## Dataset Structure
### Data Instances
```javascript
{"splitLineIndex":36888,
"text":"User: write me a story about: Most responses on here have a twist , and all of them are fictional . Show us a piece of your actual life ; let the reader experience you ., where the middle of the story is about: The boss sits on my wall and asks me where a urgent report is up. I stare at the screen, confused. -> Rosey: Sure, here\'s a story about: Most responses on here have a twist , and all of them are fictional . Show us a piece of your actual life ; let the reader experience you ., where the middle of the story is about: The boss sits on my wall and asks me where a urgent report is up. I stare at the screen, confused.:\\nMy bedroom was dark , but the chatter of distant birds outside of the heavily curtained window suggested a new day was dawning . I stayed in bed , one with the shadows , and stared at the indistinct blackness that filled the ceiling . A trickle of sickly grey light would soon spring from the edges of curtain - the heavy fabric could only do so much to dam the brightness of day . \\n \\n But , for a few more minutes , it was still dark . \\n \\n And I was staring at it - staring at the darkness . Examining it . \\n \\n What was I looking for ? Why , answers of course . Why does anyone stare into the darkness of a bedroom ceiling ? I was seeking answers . \\n \\n Why do it ? I asked myself . Why go to work ? You \'re good at your job when you can be bothered to do it , but how often does that happen ? How often do you really put the effort in ? \\n \\n Can you even remember enjoying it ? \\n \\n Can you remember when you were happy ? \\n \\n I had been too deep in my hunt for answers to notice that the homogenous darkness had given way to a bluish grey world of shapes and objects . My feet swung out of bed and I sat up in the early morning coldness . \\n \\n When *was* I happy last ? \\n \\n I stood up and started my day . 
\\n \\n * * * \\n \\n The kitchen was filling with light , the muted greys and blues of morning had arrived first , but each minute that passed promised the arrival of the full colours of day . \\n \\n The spoon clinked in the bowl as I scooped up some cereal . I wore only what I had to bed : boxer shirts and a t-shirt . The winter cold does n\'t bother you when you \'ve stopped caring . \\n \\n *When* was I happy ? \\n \\n The question was echoing in my head . A great puzzle . A mystery of the ages . \\n \\n I gulped the last of my morning coffee and went to the bathroom . \\n \\n * * * \\n \\n The plug hole held no answers , no matter how long I stared . \\n \\n How long had I been staring ? \\n \\n I turned the shower off and stepped out into the sterile tiled whiteness . A lifetime of habits drew me to the basin and , without thought , I started to brush my teeth . My mind was still locked , frozen , on the question . \\n \\n When was I happy ? \\n \\n As I wondered , day continued it \'s steady march outside . \\n \\n The bathroom was clean and white , morning light filtered in through a frosted window . The birds were loud now , but I could hardly hear them over the whir of the steam sucking fan above me . \\n \\n Day had officially arrived . \\n \\n Perhaps I am asking myself the wrong question , I thought . \\n \\n The man in the mirror bared his teeth and scrubbed some more , white foam dripped in blobs about the basin . \\n \\n *What* makes me happy ? \\n \\n * * * \\n \\n I had slipped into my work clothes : business shirt , dress pants , leather shoes . My prisoners garb . As I pulled the items on they weighed me down , each a colossal burden . At least I did n\'t wear a tie any more . 
\\n \\n I had given up on ties , and the rest of my uniform wore the scars of neglect : the shirt was unironed , the pants were thin at the knees and the stitching had come loose at the bottoms , the shoes were beaten , scratched , the soles and tops barely held their bond . \\n \\n This is the business attire of a man who has stopped caring . \\n \\n No one at work seemed to mind . \\n \\n I walked to the front door of my house , shuffling without enthusiasm , without joy for the new day that lay on the other side . \\n \\n I grabbed the handle . \\n \\n What makes me happy ? \\n \\n * * * \\n \\n Another request , another complaint , and my list of work grew longer . It only ever grew longer these days . I had important calls to make , issues to resolve , reports to write - but all I did , for the most part , was stare . \\n \\n Stare at my screen . At my hands . At nothing . \\n \\n The questions I had been asking in the darkness and through-out my house during my morning preparations were not new . I had been thinking on them for a while . I did not know for how long . \\n \\n Weeks ? No . Months . \\n \\n Still no answers . \\n \\n What I do know is : I am *not* happy . \\n \\n The boss leaned on my cubicle wall and asked me where an urgent report , a report that had been urgent for weeks , was up to . The bullshit I served sated his questions and as he walked away I sighed and stared at my screen . \\n \\n To my surprise the report was there . I had been working on it absent-mindedly . Try as I might I still did my job , at least to a degree . \\n \\n Manager for a division of one . Writer of reports and promiser of game changing applications . Mr IT . \\n \\n Well ... at one time I had been Mr IT . Once , when I had been passionate , had had a fire in my belly that churned the engine of my rising star . A career in IT . I had wanted this . \\n \\n Had n\'t I ? \\n \\n Then , why are n\'t I happy ? \\n \\n Because , you did n\'t want this . You never did . 
You stepped out of high school and fell into it . You \'re good with computers - at least , you were - but they never made you happy . You liked the challenge , sure , but you did it because you had to pay the bills and you had to leave your parents house at some point . \\n \\n Then it was a matter of you being lazy and gutless . Work is a hard habit to break , especially when people keep throwing money at you . You \'d just go in , day after day . Week after week . Month after ... \\n \\n School was almost a decade away and you have n\'t done half of what you wanted . Remember writing ? You were going to write , remember ? You \'ve done some shorts over the years , but you wanted more . You wanted to type those two words . After months and months , you \'d type those two words and you \'d have accomplished sonething . The End . And your book would be done - who cares if it got published . Who cares if no one but you ever saw it . \\n \\n You \'d have written something . You \'d have accomplished something . \\n \\n You \'d be ... \\n \\n And there it is . The answer . \\n \\n Ten years of wasted time - ten years of excuses and meeting other people \'s expectations . Ten years of syaing you \'ll get around to it . \\n \\n Ten years of regret . \\n \\n The report was done . So was I . \\n \\n How do I do this ? Do I walk in and hand in the report and a resignation . No . I ca n\'t do that . These people have been good to me . I need to finish up some of the jobs . Need to get them ready for my abscence . \\n \\n Or am I making excuses ? \\n \\n My screen and my work came into focus . I knew what I needed to do , could feel , almost by instinct , what job \'s were my biggest priorities . A spark lit in my gut and passion trickled through my veins . \\n \\n I was n\'t turning back into Mr IT - could in fact , never be that man again . \\n \\n But I knew what made me happy . Knew how to get there ... \\n \\n ... and could feel it there , just on my horizon ."}
```
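Judging from the sample above, each record appears to follow a fixed chat template; a hypothetical formatter along those lines (the exact template used to build the real dataset may differ):

```python
def format_record(prompt, story):
    """Render a prompt/story pair in the "User -> Rosey" chat template seen above."""
    return (
        f"User: write me a story about: {prompt} "
        f"-> Rosey: Sure, here's a story about: {prompt}:\n{story}"
    )
```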
### Data Fields
* splitLineIndex: refers to the line index in the source data split.
* text: refers to the actual prompt/story text
### Data Splits
|split|samples|
|--|--|
|train|13232|
|valid|5168|
|test|5700|
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
As mentioned, this dataset is an extension of the FAIR writing prompt dataset. The steps employed to create it are documented in the Jupyter notebook included in the repository files.
#### Who are the source language producers?
FAIR
### Personal and Sensitive Information
The data comes with NSFW samples. Be aware!
## Additional Information
### Licensing Information
Writing Prompt Augmentation Dataset is licensed under MIT.
### Citation Information
The source data was used to generate consistent stories in Hierarchical Neural Story Generation (Fan et al., 2018): https://arxiv.org/abs/1805.04833
### Contributions
Thanks to Huu Nguyen (gh:ontocord)! | 9,594 | [
[ … embedding vector truncated in source … ]
Zenodia/dreambooth-mooncake | 2023-02-06T16:20:09.000Z | [
"region:us"
] | Zenodia | null | null | 0 | 8 | 2023-02-06T16:20:02 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 7535176.0
num_examples: 15
download_size: 7499175
dataset_size: 7535176.0
---
# Dataset Card for "dreambooth-mooncake"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 363 | [
[ … embedding vector truncated in source … ]
fathyshalab/massive_weather | 2023-02-08T12:26:11.000Z | [
"region:us"
] | fathyshalab | null | null | 0 | 8 | 2023-02-08T11:16:36 | ---
dataset_info:
features:
- name: id
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 30514
num_examples: 573
- name: validation
num_bytes: 6972
num_examples: 126
- name: test
num_bytes: 8504
num_examples: 156
download_size: 25707
dataset_size: 45990
---
# Dataset Card for "massive_weather"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 534 | [
[ … embedding vector truncated in source … ]
tiagoseca/raw_true_labels | 2023-02-27T11:36:55.000Z | [
"region:us"
] | tiagoseca | null | null | 0 | 8 | 2023-02-08T14:08:00 | Entry not found | 15 | [
[ … embedding vector truncated in source … ]
Kaludi/food-category-classification-v2.0 | 2023-02-09T19:38:17.000Z | [
"task_categories:image-classification",
"region:us"
] | Kaludi | null | null | 0 | 8 | 2023-02-08T19:46:45 | ---
task_categories:
- image-classification
---
# Dataset for project: food-category-classification-v2.0
## Dataset Description
The data for project food-category-classification-v2.0 was scraped with the help of a bulk Google Image downloader.
## Dataset Structure
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Bread', 'Dairy', 'Dessert', 'Egg', 'Fried Food', 'Fruit', 'Meat', 'Noodles', 'Rice', 'Seafood', 'Soup', 'Vegetable'], id=None)"
}
```
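The `target` feature is a `ClassLabel`; converting between integer ids and class names can be sketched with the names listed above:

```python
# Class names in the order given by the ClassLabel feature above.
FOOD_CLASSES = ["Bread", "Dairy", "Dessert", "Egg", "Fried Food", "Fruit",
                "Meat", "Noodles", "Rice", "Seafood", "Soup", "Vegetable"]

def int2str(idx):
    """Integer class id -> class name."""
    return FOOD_CLASSES[idx]

def str2int(name):
    """Class name -> integer class id."""
    return FOOD_CLASSES.index(name)
```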
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1200 |
| valid | 300 |
| 812 | [
[ … embedding vector truncated in source … ]
karukas/pubmed-abstract-matching | 2023-02-09T21:18:46.000Z | [
"region:us"
] | karukas | null | null | 0 | 8 | 2023-02-09T21:18:08 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: train
num_bytes: 2237510856
num_examples: 119924
- name: validation
num_bytes: 126574623
num_examples: 6633
- name: test
num_bytes: 126357120
num_examples: 6658
download_size: 1156008015
dataset_size: 2490442599
---
# Dataset Card for "pubmed-abstract-matching"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 552 | [
[ … embedding vector truncated in source … ]
jonathan-roberts1/GID | 2023-03-31T15:38:31.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | 0 | 8 | 2023-02-15T16:42:03 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': arbor woodland
'1': artificial grassland
'2': dry cropland
'3': garden plot
'4': industrial land
'5': irrigated land
'6': lake
'7': natural grassland
'8': paddy field
'9': pond
'10': river
'11': rural residential
'12': shrub land
'13': traffic land
'14': urban residential
splits:
- name: train
num_bytes: 1777210275
num_examples: 30000
download_size: 1263253291
dataset_size: 1777210275
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "GID"
## Dataset Description
- **Paper** [Land-cover classification with high-resolution remote sensing images using transferable deep models](https://www.sciencedirect.com/science/article/pii/S0034425719303414)
### Licensing Information
Public domain.
## Citation Information
[Land-cover classification with high-resolution remote sensing images using transferable deep models](https://www.sciencedirect.com/science/article/pii/S0034425719303414)
```
@article{GID2020,
title = {Land-cover classification with high-resolution remote sensing images using transferable deep models},
author = {Tong, Xin-Yi and Xia, Gui-Song and Lu, Qikai and Shen, Huanfeng and Li, Shengyang and You, Shucheng and Zhang, Liangpei},
year = 2020,
journal = {Remote Sensing of Environment},
volume = 237,
pages = 111322
}
``` | 1,655 | [
[ … embedding vector truncated in source … ]
jonathan-roberts1/CLRS | 2023-03-31T15:35:22.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | 0 | 8 | 2023-02-15T16:46:17 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airport
'1': bare land
'2': beach
'3': bridge
'4': commercial
'5': desert
'6': farmland
'7': forest
'8': golf course
'9': highway
'10': industrial
'11': meadow
'12': mountain
'13': overpass
'14': park
'15': parking
'16': playground
'17': port
'18': railway
'19': railway station
'20': residential
'21': river
'22': runway
'23': stadium
'24': storage tank
splits:
- name: train
num_bytes: 2969926932
num_examples: 15000
download_size: 2327956775
dataset_size: 2969926932
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "CLRS"
## Dataset Description
- **Paper** [CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification](https://www.mdpi.com/1424-8220/20/4/1226/pdf)
### Licensing Information
For academic purposes.
## Citation Information
[CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification](https://www.mdpi.com/1424-8220/20/4/1226/pdf)
```
@article{s20041226,
title = {CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification},
author = {Li, Haifeng and Jiang, Hao and Gu, Xin and Peng, Jian and Li, Wenbo and Hong, Liang and Tao, Chao},
year = 2020,
journal = {Sensors},
volume = 20,
number = 4,
doi = {10.3390/s20041226},
issn = {1424-8220},
url = {https://www.mdpi.com/1424-8220/20/4/1226},
article-number = 1226,
pubmedid = 32102294,
}
``` | 1,879 | [
[
-0.02899169921875,
-0.00067901611328125,
0.019256591796875,
0.0034198760986328125,
-0.03033447265625,
-0.0119171142578125,
-0.00450897216796875,
-0.03143310546875,
-0.053009033203125,
0.025909423828125,
-0.038848876953125,
-0.05023193359375,
-0.020660400390625,
... |
jonathan-roberts1/Optimal-31 | 2023-03-31T17:06:29.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | 0 | 8 | 2023-02-17T15:53:58 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': airport
'2': baseball diamond
'3': basketball court
'4': beach
'5': bridge
'6': chaparral
'7': church
'8': circular farmland
'9': commercial area
'10': dense residential
'11': desert
'12': forest
'13': freeway
'14': golf course
'15': ground track field
'16': harbor
'17': industrial area
'18': intersection
'19': island
'20': lake
'21': meadow
'22': medium residential
'23': mobile home park
'24': mountain
'25': overpass
'26': parking lot
'27': railway
'28': rectangular farmland
'29': roundabout
'30': runway
splits:
- name: train
num_bytes: 25100636.72
num_examples: 1860
download_size: 25105452
dataset_size: 25100636.72
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "Optimal-31"
## Dataset Description
- **Paper** [Scene classification with recurrent attention of VHR remote sensing images](https://ieeexplore.ieee.org/iel7/5/8045830/07891544.pdf)
### Licensing Information
[No license for now, cite the paper below.]
## Citation Information
[Scene classification with recurrent attention of VHR remote sensing images](https://ieeexplore.ieee.org/iel7/5/8045830/07891544.pdf)
```
@article{wang2018scene,
title = {Scene classification with recurrent attention of VHR remote sensing images},
author = {Wang, Qi and Liu, Shaoteng and Chanussot, Jocelyn and Li, Xuelong},
year = 2018,
journal = {IEEE Transactions on Geoscience and Remote Sensing},
publisher = {IEEE},
volume = 57,
number = 2,
pages = {1155--1167}
}
``` | 2,026 | [
[
-0.04345703125,
-0.01132965087890625,
0.0183868408203125,
0.007781982421875,
-0.038055419921875,
-0.0172576904296875,
0.00728607177734375,
-0.031524658203125,
-0.026824951171875,
0.02117919921875,
-0.0455322265625,
-0.04345703125,
-0.0173797607421875,
0.0091... |
svjack/context-dialogue-generate-ds-zh-v1 | 2023-02-21T07:59:42.000Z | [
"region:us"
] | svjack | null | null | 0 | 8 | 2023-02-21T07:28:37 | ---
dataset_info:
features:
- name: sent
dtype: string
- name: dialogue
sequence: string
- name: L_emb
sequence: float32
splits:
- name: train
num_bytes: 74417088
num_examples: 20000
download_size: 82191201
dataset_size: 74417088
---
# Dataset Card for "context-dialogue-generate-ds-zh-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 458 | [
[
-0.04925537109375,
-0.034942626953125,
0.0369873046875,
0.0030918121337890625,
-0.02691650390625,
-0.0260162353515625,
0.0157623291015625,
-0.00020301342010498047,
0.061309814453125,
0.04241943359375,
-0.105712890625,
-0.055419921875,
-0.0272674560546875,
-0... |
vietgpt/mfag_vi | 2023-07-04T05:22:16.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:vi",
"LM",
"region:us"
] | vietgpt | null | null | 0 | 8 | 2023-02-22T18:21:19 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 10489370
num_examples: 26494
- name: validation
num_bytes: 329184
num_examples: 663
download_size: 3481712
dataset_size: 10818554
task_categories:
- question-answering
language:
- vi
tags:
- LM
size_categories:
- 10K<n<100K
---
# MFAQ
- Source: https://huggingface.co/datasets/clips/mfaq
- Num examples:
- 26,494 (train)
- 663 (validation)
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("vietgpt/mfag_vi")
```
- Format for QA task
```python
def preprocess(
sample,
instruction_key="### Instruction:",
response_key="<|endofprompt|>",
end_key="<|endoftext|>",
):
question = sample['question']
completion = sample['answer']
return {'text': """Dưới đây là một hướng dẫn mô tả một nhiệm vụ. Viết một phản hồi hoàn thành yêu cầu một cách thích hợp.
{instruction_key}
{question}
{response_key}
{completion}
{end_key}""".format(
instruction_key=instruction_key,
question=question,
response_key=response_key,
completion=completion,
end_key=end_key,
)}
"""
Dưới đây là một hướng dẫn mô tả một nhiệm vụ. Viết một phản hồi hoàn thành yêu cầu một cách thích hợp.
### Instruction:
Bao lâu tôi nên rửa tóc giả tổng hợp?
<|endofprompt|>
Tóc giả tổng hợp được làm từ sợi nhựa nên cần được chăm sóc cẩn thận. Tóc giả tổng hợp làm giảm chất lượng của chúng mỗi lần gội; do đó luôn luôn mặc chúng cẩn thận để giảm việc giặt giũ. Tần suất giặt tóc giả tổng hợp phụ thuộc vào các yếu tố sau. Nếu bạn muốn tóc giả của mình bền lâu hơn; sau đó phát triển các thực hành tốt sau đây để giữ cho tóc giả của bạn luôn mới trong hơn một năm. Làm thế nào để ngăn tóc giả tổng hợp bị bẩn? Tùy thuộc vào một số yếu tố, tóc giả tổng hợp cần giặt sau 18-20 lần mặc nhưng điều này có thể khác nhau ở mỗi người. Thực hành vệ sinh để giữ cho tóc giả của bạn trông như mới. Bạn có thể cần phải gội đầu thường xuyên tóc giả tổng hợp nếu: Bạn sống ở một đất nước có khí hậu ẩm ướt Bạn mang nhiều sản phẩm tạo kiểu tóc cồng kềnh như mousses, xịt và gel Da đầu của bạn nhờn và tiết dầu & mảnh vụn. Sự tích tụ của bụi bẩn, dầu trên da đầu và các sản phẩm tạo kiểu tóc khiến tóc giả của bạn nhờn và bẩn. Nếu không giặt đúng giờ; Tóc giả tổng hợp sẽ bị hư hỏng vĩnh viễn bao gồm trở nên thiếu bóng và thô. Bạn nên đội mũ tóc giả bên dưới tóc giả tổng hợp vì nó không chỉ giúp bạn giữ tóc giả ở da đầu một cách chắc chắn mà còn đóng vai trò như một lớp bảo vệ chống lại chất tiết ra. từ da đầu của bạn đến tóc giả. Giặt tóc giả tổng hợp có thể hoàn toàn là sở thích cá nhân nhưng bạn chắc chắn có thể hạn chế quá trình tẻ nhạt bằng cách đội chúng cẩn thận.
<|endoftext|>
"""
``` | 2,784 | [
[
-0.032989501953125,
-0.039703369140625,
0.01491546630859375,
0.036468505859375,
-0.01555633544921875,
-0.0005793571472167969,
0.00873565673828125,
-0.0211944580078125,
0.03704833984375,
0.04345703125,
-0.0372314453125,
-0.03875732421875,
-0.035552978515625,
... |
lansinuote/diffusion.4.text_to_image | 2023-04-07T08:48:17.000Z | [
"region:us"
] | lansinuote | null | null | 0 | 8 | 2023-02-24T10:14:17 | ---
dataset_info:
features:
- name: image
dtype: image
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 119636585.0
num_examples: 833
download_size: 0
dataset_size: 119636585.0
---
# Dataset Card for "diffusion.4.text_to_image"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 408 | [
[
-0.03997802734375,
-0.048492431640625,
0.03948974609375,
0.0345458984375,
-0.01457977294921875,
-0.0139312744140625,
0.0193023681640625,
-0.012908935546875,
0.044952392578125,
0.0289459228515625,
-0.0350341796875,
-0.0589599609375,
-0.0643310546875,
-0.02235... |
openclimatefix/ecmwf-cams-forecast | 2023-05-08T20:33:24.000Z | [
"license:mit",
"doi:10.57967/hf/0886",
"region:us"
] | openclimatefix | null | null | 2 | 8 | 2023-03-02T15:15:30 | ---
license: mit
---
# Dataset Card for ECMWF CAMS Forecast
## Dataset Description
- **Homepage:** https://ads.atmosphere.copernicus.eu/cdsapp#!/dataset/cams-europe-air-quality-forecasts?tab=overview
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** jacob@openclimatefix.org
### Dataset Summary
This is a dataset of converted ECMWF CAMS Air Quality forecasts over Europe on a 0.1x0.1 degree grid. The data is available on a 3-year rolling archive, so this repo is attempting to keep more of that data public.
The data has been converted to Zarr, and only the height levels of 0m, 50m, 250m, 500m, 1000m, 2000m, 3000m, and 5000m have been kept.
Additionally, the ECMWF forecasts go out to 96 hours, but this dataset only contains forecasts up to 48 hours into the future, both because it is focused on being useful for short-term solar forecasting over the next 48 hours and to reduce file size.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
Each day is a Zarr store containing 13 different aerosols on the height levels listed above, going out 48 hourly timesteps into the future from midnight on that day. These stores can be opened with Zarr and have been chunked into quarters spatially (along latitude and longitude),
and into a single chunk temporally and height-wise. In other words, each variable has 4 chunks. No data has been modified or changed from the original values.
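As a rough illustration of the spatial chunking scheme, here is a minimal pure-Python sketch that quarters a toy 2D grid along its latitude and longitude axes (the grid values are hypothetical placeholders, not actual forecast data):

```python
# Toy illustration of quartering a lat x lon grid into 4 spatial chunks,
# mirroring the "chunked into quarters along latitude and longitude" scheme.
def quarter_chunks(grid):
    """Split a 2D list into four quadrant chunks (NW, NE, SW, SE)."""
    n_lat, n_lon = len(grid), len(grid[0])
    mid_lat, mid_lon = n_lat // 2, n_lon // 2
    return [
        [row[:mid_lon] for row in grid[:mid_lat]],  # north-west quadrant
        [row[mid_lon:] for row in grid[:mid_lat]],  # north-east quadrant
        [row[:mid_lon] for row in grid[mid_lat:]],  # south-west quadrant
        [row[mid_lon:] for row in grid[mid_lat:]],  # south-east quadrant
    ]

# Hypothetical 4x4 grid of placeholder values.
grid = [[lat * 10 + lon for lon in range(4)] for lat in range(4)]
chunks = quarter_chunks(grid)
print(len(chunks))  # 4 chunks per variable
```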
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 2,459 | [
[
-0.03961181640625,
-0.0173492431640625,
0.023284912109375,
0.01523590087890625,
-0.02398681640625,
-0.026611328125,
-0.01552581787109375,
-0.03729248046875,
0.01404571533203125,
0.03717041015625,
-0.07720947265625,
-0.0576171875,
-0.024658203125,
0.002611160... |
ashwathjadhav23/conill2003_filtered_entities | 2023-03-05T07:24:27.000Z | [
"region:us"
] | ashwathjadhav23 | null | null | 0 | 8 | 2023-03-05T07:20:17 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 680716.5731785485
num_examples: 2684
- name: validation
num_bytes: 891431
num_examples: 3250
- name: test
num_bytes: 811470
num_examples: 3453
download_size: 643826
dataset_size: 2383617.5731785484
---
# Dataset Card for "conill2003_filtered_entities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 806 | [
[
-0.05120849609375,
-0.020751953125,
0.0250244140625,
0.0185699462890625,
-0.0186309814453125,
-0.0119781494140625,
0.0173492431640625,
-0.026763916015625,
0.052398681640625,
0.0634765625,
-0.06829833984375,
-0.05572509765625,
-0.035430908203125,
-0.002368927... |
jbrazzy/baby_names | 2023-03-06T00:45:44.000Z | [
"region:us"
] | jbrazzy | null | null | 0 | 8 | 2023-03-06T00:45:34 | ---
dataset_info:
features:
- name: Names
dtype: string
- name: Sex
dtype: string
- name: Count
dtype: int64
- name: Year
dtype: int64
splits:
- name: train
num_bytes: 33860482
num_examples: 1084385
- name: test
num_bytes: 8482889
num_examples: 271663
download_size: 13301020
dataset_size: 42343371
---
# Dataset Card for "baby_names"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 519 | [
[
-0.03399658203125,
-0.00774383544921875,
0.0010004043579101562,
0.0248565673828125,
-0.023223876953125,
-0.005786895751953125,
0.017425537109375,
-0.00991058349609375,
0.050079345703125,
0.0233154296875,
-0.064697265625,
-0.045989990234375,
-0.052734375,
-0.... |
pnadel/latin_sentences | 2023-03-07T16:08:13.000Z | [
"region:us"
] | pnadel | null | null | 0 | 8 | 2023-03-07T15:48:22 | ---
dataset_info:
features:
- name: f_name
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 39199112.23995617
num_examples: 170421
- name: test
num_bytes: 13066600.760043832
num_examples: 56808
download_size: 25166966
dataset_size: 52265713.0
---
# Dataset Card for "latin_sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 550 | [
[
-0.01934814453125,
-0.03173828125,
0.0206146240234375,
0.033050537109375,
-0.0167236328125,
-0.0027713775634765625,
-0.00830078125,
-0.013031005859375,
0.057769775390625,
0.0316162109375,
-0.05615234375,
-0.063232421875,
-0.0379638671875,
0.00616836547851562... |
physionet/mimic-iv-demo | 2023-03-10T21:25:43.000Z | [
"license:odbl",
"region:us"
] | physionet | null | null | 2 | 8 | 2023-03-10T20:36:28 | ---
license: odbl
---
# MIMIC-IV Clinical Database Demo
The Medical Information Mart for Intensive Care (MIMIC)-IV database comprises
deidentified electronic health records for patients admitted to the Beth Israel
Deaconess Medical Center. Access to MIMIC-IV is limited to credentialed users.
Here, we have provided an openly-available demo of MIMIC-IV containing a subset
of 100 patients. The dataset includes similar content to MIMIC-IV, but excludes
free-text clinical notes. The demo may be useful for running workshops and for
assessing whether MIMIC-IV is appropriate for a study before making an access
request.
For details on the data, see the MIMIC-IV project on PhysioNet:
https://doi.org/10.13026/07hj-2a80
The contents of this project also contain an additional file:
demo_subject_id.csv
This is a CSV file containing the subject_id used to filter MIMIC-IV. Only
these subject_id are available in the demo.
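A minimal sketch of using `demo_subject_id.csv` to filter a MIMIC-IV table down to the demo cohort; the in-memory CSVs and the IDs below are made-up stand-ins for the real files on disk:

```python
import csv
import io

# Hypothetical, in-memory stand-ins for demo_subject_id.csv and a MIMIC-IV
# table; the real files are read from disk and the IDs below are made up.
demo_ids_csv = "subject_id\n10000032\n10000084\n"
admissions_csv = (
    "subject_id,hadm_id\n"
    "10000032,22595853\n"
    "10000084,23052089\n"
    "19999999,25000000\n"
)

# Keep only rows whose subject_id appears in the demo cohort.
demo_ids = {row["subject_id"] for row in csv.DictReader(io.StringIO(demo_ids_csv))}
kept = [
    row
    for row in csv.DictReader(io.StringIO(admissions_csv))
    if row["subject_id"] in demo_ids
]
print(len(kept))  # 2 rows survive the demo filter
```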
| 934 | [
[
-0.01383209228515625,
-0.0188751220703125,
0.049957275390625,
0.020294189453125,
-0.0099334716796875,
0.0041046142578125,
0.00952911376953125,
-0.0309600830078125,
0.025848388671875,
0.04803466796875,
-0.07275390625,
-0.060638427734375,
-0.011810302734375,
0... |
society-ethics/papers | 2023-05-31T13:53:19.000Z | [
"ethics",
"arxiv:1906.02569",
"arxiv:1910.01108",
"arxiv:2109.14076",
"arxiv:2205.02894",
"arxiv:2206.03216",
"arxiv:2103.12028",
"arxiv:2111.04424",
"arxiv:2208.11695",
"arxiv:2212.05129",
"arxiv:2205.12586",
"arxiv:2210.05839",
"arxiv:2110.08207",
"arxiv:2211.05100",
"arxiv:2303.03915"... | society-ethics | null | null | 7 | 8 | 2023-03-13T20:07:35 | ---
tags:
- ethics
---
# Hugging Face Ethics & Society Papers
This is an incomplete list of ethics-related papers published by researchers at Hugging Face.
- Gradio: https://arxiv.org/abs/1906.02569
- DistilBERT: https://arxiv.org/abs/1910.01108
- RAFT: https://arxiv.org/abs/2109.14076
- Interactive Model Cards: https://arxiv.org/abs/2205.02894
- Data Governance in the Age of Large-Scale Data-Driven Language Technology: https://arxiv.org/abs/2206.03216
- Quality at a Glance: https://arxiv.org/abs/2103.12028
- A Framework for Deprecating Datasets: https://arxiv.org/abs/2111.04424
- Bugs in the Data: https://arxiv.org/abs/2208.11695
- Measuring Data: https://arxiv.org/abs/2212.05129
- Perturbation Augmentation for Fairer NLP: https://arxiv.org/abs/2205.12586
- SEAL: https://arxiv.org/abs/2210.05839
- Multitask Prompted Training Enables Zero-Shot Task Generalization: https://arxiv.org/abs/2110.08207
- BLOOM: https://arxiv.org/abs/2211.05100
- ROOTS: https://arxiv.org/abs/2303.03915
- Evaluate & Evaluation on the Hub: https://arxiv.org/abs/2210.01970
- Spacerini: https://arxiv.org/abs/2302.14534
- ROOTS Search Tool: https://arxiv.org/abs/2302.14035
- Fair Diffusion: https://arxiv.org/abs/2302.10893
- Counting Carbon: https://arxiv.org/abs/2302.08476
- The Gradient of Generative AI Release: https://arxiv.org/abs/2302.04844
- BigScience: A Case Study in the Social Construction of a Multilingual Large Language Model: https://arxiv.org/abs/2212.04960
- Towards Openness Beyond Open Access: User Journeys through 3 Open AI Collaboratives: https://arxiv.org/abs/2301.08488
- Stable Bias: Analyzing Societal Representations in Diffusion Models: https://arxiv.org/abs/2303.11408
- Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML: https://arxiv.org/abs/2305.18615 | 1,835 | [
[
-0.041290283203125,
-0.0394287109375,
0.037017822265625,
0.0189666748046875,
0.01141357421875,
-0.0031185150146484375,
-0.003040313720703125,
-0.058807373046875,
0.0364990234375,
0.031768798828125,
-0.0221710205078125,
-0.0233001708984375,
-0.06353759765625,
... |
ZurichNLP/swissner | 2023-03-24T08:37:30.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:multilingual",
"size_categories:n<1K",
"language:de",
"language:fr",
"language:it",
"language:rm",
"license:cc-by-4.0",
"arxiv:2303.13310",
"region:us"
] | ZurichNLP | null | null | 1 | 8 | 2023-03-20T17:25:08 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: string
- name: url
dtype: string
splits:
- name: test_de
num_bytes: 164433
num_examples: 200
- name: test_fr
num_bytes: 186036
num_examples: 200
- name: test_it
num_bytes: 197513
num_examples: 200
- name: test_rm
num_bytes: 206644
num_examples: 200
download_size: 220352
dataset_size: 754626
license: cc-by-4.0
task_categories:
- token-classification
task_ids:
- named-entity-recognition
language:
- de
- fr
- it
- rm
multilinguality:
- multilingual
pretty_name: SwissNER
size_categories:
- n<1K
---
# SwissNER
A multilingual test set for named entity recognition (NER) on Swiss news articles.
## Description
SwissNER is a dataset for named entity recognition based on manually annotated news articles in Swiss Standard German, French, Italian, and Romansh Grischun.
We have manually annotated a selection of articles that have been published in February 2023 in the categories "Switzerland" or "Regional" on the following online news portals:
- Swiss Standard German: [srf.ch](https://www.srf.ch/)
- French: [rts.ch](https://www.rts.ch/)
- Italian: [rsi.ch](https://www.rsi.ch/)
- Romansh Grischun: [rtr.ch](https://www.rtr.ch/)
For each article we extracted the first two paragraphs after the lead paragraph.
We followed the guidelines of the CoNLL-2002 and 2003 shared tasks and annotated the names of persons, organizations, locations and miscellaneous entities.
The annotation was performed by a single annotator.
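Since `ner_tags` is stored as a sequence of tag strings, grouping the CoNLL-style BIO tags into entity spans can be sketched as follows (the toy sentence and tags below are hypothetical, not taken from the dataset):

```python
def bio_to_spans(tokens, tags):
    """Group BIO tags into (label, entity text) spans."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)
        else:  # "O" tag or an inconsistent "I-" tag closes the current span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(words)) for label, words in spans]

# Hypothetical example sentence with CoNLL-style tags.
tokens = ["Die", "SBB", "fahren", "nach", "Bern", "."]
tags = ["O", "B-ORG", "O", "O", "B-LOC", "O"]
print(bio_to_spans(tokens, tags))  # [('ORG', 'SBB'), ('LOC', 'Bern')]
```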
## License
- Text paragraphs: © Swiss Broadcasting Corporation (SRG SSR)
- Annotations: Attribution 4.0 International (CC BY 4.0)
## Statistics
| | DE | FR | IT | RM | Total |
|----------------------|-----:|------:|------:|------:|------:|
| Number of paragraphs | 200 | 200 | 200 | 200 | 800 |
| Number of tokens | 9498 | 11434 | 12423 | 13356 | 46711 |
| Number of entities | 479 | 475 | 556 | 591 | 2101 |
| – `PER` | 104 | 92 | 93 | 118 | 407 |
| – `ORG` | 193 | 216 | 266 | 227 | 902 |
| – `LOC` | 182 | 167 | 197 | 246 | 792 |
| – `MISC` | 113 | 79 | 88 | 39 | 319 |
## Citation
```bibtex
@article{vamvas-etal-2023-swissbert,
title={Swiss{BERT}: The Multilingual Language Model for Switzerland},
author={Jannis Vamvas and Johannes Gra\"en and Rico Sennrich},
year={2023},
eprint={2303.13310},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2303.13310}
}
``` | 2,662 | [
[
-0.049407958984375,
-0.050811767578125,
0.00811767578125,
0.0166473388671875,
-0.0222015380859375,
-0.0016918182373046875,
-0.01446533203125,
-0.04193115234375,
0.031494140625,
0.03240966796875,
-0.05633544921875,
-0.04522705078125,
-0.043121337890625,
0.042... |
mstz/bank | 2023-04-15T11:16:43.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"compas",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | null | 0 | 8 | 2023-03-23T00:56:08 | ---
language:
- en
tags:
- compas
- tabular_classification
- binary_classification
- UCI
pretty_name: Bank
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- encoding
- subscription
---
# Bank
The [Bank dataset](https://archive.ics.uci.edu/ml/datasets/bank+marketing) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Potential clients are contacted by a bank during a second advertisement campaign.
This dataset records the customer, their interaction with the ad campaign, and whether they subscribed to the proposed bank plan.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|-----------------------------------------------------------------|
| encoding | | Encoding dictionary showing original values of encoded features.|
| subscription | Binary classification | Has the customer subscribed to a bank plan? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/bank", "subscription")["train"]
```
# Features
| **Name** |**Type** |
|-----------------------------------------------|-----------|
|`age` |`int64` |
|`job` |`string` |
|`marital_status` |`string` |
|`education` |`int8` |
|`has_defaulted` |`int8` |
|`account_balance` |`int64` |
|`has_housing_loan` |`int8` |
|`has_personal_loan` |`int8` |
|`month_of_last_contact` |`string` |
|`number_of_calls_in_ad_campaign` |`string` |
|`days_since_last_contact_of_previous_campaign` |`int16` |
|`number_of_calls_before_this_campaign` |`int16` |
|`successfull_subscription` |`int8` | | 2,133 | [
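As a toy illustration of the binary target, the subscription rate over a few hypothetical rows (made up for illustration, not real records) can be computed like this:

```python
# Hypothetical rows mimicking a few columns of the schema above; values are made up.
rows = [
    {"age": 34, "job": "technician", "successfull_subscription": 1},
    {"age": 51, "job": "retired", "successfull_subscription": 0},
    {"age": 28, "job": "student", "successfull_subscription": 0},
    {"age": 45, "job": "management", "successfull_subscription": 1},
]

# Fraction of contacted customers who subscribed.
rate = sum(r["successfull_subscription"] for r in rows) / len(rows)
print(rate)  # 0.5
```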
[
-0.02459716796875,
-0.0299072265625,
0.005828857421875,
0.00827789306640625,
0.00467681884765625,
-0.0142364501953125,
0.0011539459228515625,
-0.01221466064453125,
0.00527191162109375,
0.0660400390625,
-0.050567626953125,
-0.05450439453125,
-0.033050537109375,
... |
open-source-metrics/issues-external | 2023-09-22T17:24:08.000Z | [
"region:us"
] | open-source-metrics | null | null | 0 | 8 | 2023-03-24T16:20:35 | ---
dataset_info:
features:
- name: dates
dtype: string
- name: type
struct:
- name: authorAssociation
dtype: string
- name: comment
dtype: bool
- name: issue
dtype: bool
splits:
- name: stable_diffusion_webui
num_bytes: 1614011
num_examples: 46481
- name: langchain
num_bytes: 1159174
num_examples: 32311
- name: pytorch
num_bytes: 21278830
num_examples: 562406
- name: tensorflow
num_bytes: 14004829
num_examples: 393443
download_size: 10347881
dataset_size: 38056844
configs:
- config_name: default
data_files:
- split: stable_diffusion_webui
path: data/stable_diffusion_webui-*
- split: langchain
path: data/langchain-*
- split: pytorch
path: data/pytorch-*
- split: tensorflow
path: data/tensorflow-*
---
# Dataset Card for "issues-external"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 995 | [
[
-0.041900634765625,
-0.015350341796875,
0.02130126953125,
0.0250244140625,
0.004428863525390625,
-0.005710601806640625,
0.005401611328125,
-0.03094482421875,
0.055999755859375,
0.021453857421875,
-0.0697021484375,
-0.031585693359375,
-0.034271240234375,
-0.0... |
open-source-metrics/stars-external | 2023-09-06T22:22:43.000Z | [
"region:us"
] | open-source-metrics | null | null | 0 | 8 | 2023-03-24T17:21:22 | ---
dataset_info:
features:
- name: login
dtype: string
- name: dates
dtype: string
splits:
- name: stable_diffusion_webui
num_bytes: 3742189
num_examples: 101082
- name: langchain
num_bytes: 2274651
num_examples: 61173
- name: pytorch
num_bytes: 2622990
num_examples: 70474
- name: tensorflow
num_bytes: 6591180
num_examples: 177432
download_size: 8985694
dataset_size: 15231010
configs:
- config_name: default
data_files:
- split: stable_diffusion_webui
path: data/stable_diffusion_webui-*
- split: langchain
path: data/langchain-*
- split: pytorch
path: data/pytorch-*
- split: tensorflow
path: data/tensorflow-*
---
# Dataset Card for "stars-external"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 874 | [
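A minimal sketch of bucketing the `dates` strings into monthly star counts; the ISO-8601 timestamps below are hypothetical examples of the shape one might expect, not values from the dataset:

```python
from collections import Counter

# Hypothetical ISO-8601 timestamps in the shape of the `dates` field.
dates = [
    "2023-01-05T12:00:00Z",
    "2023-01-20T08:30:00Z",
    "2023-02-01T00:00:00Z",
]

# Bucket by the "YYYY-MM" prefix of each timestamp.
stars_per_month = Counter(d[:7] for d in dates)
print(stars_per_month)  # Counter({'2023-01': 2, '2023-02': 1})
```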
[
-0.04388427734375,
-0.0109710693359375,
0.021759033203125,
0.0084381103515625,
-0.004730224609375,
-0.0016584396362304688,
0.00496673583984375,
-0.03546142578125,
0.056121826171875,
0.03485107421875,
-0.07012939453125,
-0.034759521484375,
-0.04364013671875,
... |
M-AI-C/quran-en-tafssirs | 2023-04-02T14:06:42.000Z | [
"region:us"
] | M-AI-C | null | null | 0 | 8 | 2023-04-02T14:02:19 | ---
dataset_info:
features:
- name: en-ahmedali
dtype: string
- name: en-ahmedraza
dtype: string
- name: en-arberry
dtype: string
- name: en-asad
dtype: string
- name: en-daryabadi
dtype: string
- name: en-hilali
dtype: string
- name: en-itani
dtype: string
- name: en-maududi
dtype: string
- name: en-mubarakpuri
dtype: string
- name: en-pickthall
dtype: string
- name: en-qarai
dtype: string
- name: en-qaribullah
dtype: string
- name: en-sahih
dtype: string
- name: en-sarwar
dtype: string
- name: en-shakir
dtype: string
- name: en-transliterati
dtype: string
- name: en-wahiduddi
dtype: string
- name: en-yusufali
dtype: string
- name: ayah
dtype: int64
- name: sorah
dtype: string
- name: sentence
dtype: string
- name: en-tafsir-mokhtasar-html
dtype: string
- name: en-tafsir-mokhtasar-text
dtype: string
- name: en-tafsir-maarif-html
dtype: string
- name: en-tafsir-maarif-text
dtype: string
- name: en-tafsir-ibn-kathir-html
dtype: string
- name: en-tafsir-ibn-kathir-text
dtype: string
splits:
- name: train
num_bytes: 66051616
num_examples: 6235
download_size: 35316900
dataset_size: 66051616
---
# Dataset Card for "quran-en-tafssirs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,459 | [
[
-0.043304443359375,
-0.016937255859375,
-0.005092620849609375,
0.0190277099609375,
-0.02642822265625,
0.0009474754333496094,
0.0073089599609375,
-0.01039886474609375,
0.0501708984375,
0.042144775390625,
-0.0467529296875,
-0.06646728515625,
-0.053131103515625,
... |
sanagnos/processed_gpt_dataset_big | 2023-04-06T20:05:27.000Z | [
"region:us"
] | sanagnos | null | null | 0 | 8 | 2023-04-06T19:39:47 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 23584245444.0
num_examples: 3831099
download_size: 6899066299
dataset_size: 23584245444.0
---
# Dataset Card for "processed_gpt_dataset_big"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
[
-0.038421630859375,
-0.036468505859375,
0.039031982421875,
0.016448974609375,
-0.0287628173828125,
-0.004634857177734375,
0.0030345916748046875,
-0.01129913330078125,
0.059112548828125,
0.036773681640625,
-0.051605224609375,
-0.050018310546875,
-0.05392456054687... |
mstz/pima | 2023-04-16T17:57:48.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"pima",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | null | 0 | 8 | 2023-04-06T22:15:13 | ---
language:
- en
tags:
- pima
- tabular_classification
- binary_classification
- UCI
pretty_name: Pima
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- pima
license: cc
---
# Pima
The [Pima dataset](https://archive.ics.uci.edu/ml/datasets/Ozone) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Predict whether a patient has diabetes.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| pima | Binary classification | Does the patient have diabetes?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/pima")["train"]
``` | 750 | [
[
-0.0193939208984375,
-0.035369873046875,
0.03680419921875,
0.005672454833984375,
-0.0016326904296875,
-0.0265045166015625,
-0.004451751708984375,
-0.00634765625,
0.0159149169921875,
0.046417236328125,
-0.0118560791015625,
-0.0523681640625,
-0.07147216796875,
... |
mstz/planning | 2023-04-16T17:57:54.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"planning",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_planning_relax_230,
author = {Bhatt,Rajen},
title = {{Planning Relax}},
year = {2012},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \url{10.24432/C5T023}}
} | 0 | 8 | 2023-04-06T22:38:04 | ---
language:
- en
tags:
- planning
- tabular_classification
- binary_classification
- UCI
pretty_name: Planning
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- planning
license: cc
---
# Planning
The [Planning dataset](https://archive.ics.uci.edu/ml/datasets/Planning) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------|
| planning | Binary classification | Is the patient in a planning state?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/planning")["train"]
``` | 766 | [
[
-0.01070404052734375,
-0.004047393798828125,
0.0200347900390625,
0.031494140625,
0.0010328292846679688,
-0.043914794921875,
-0.0005211830139160156,
0.0086517333984375,
0.0303802490234375,
0.043609619140625,
-0.05419921875,
-0.032501220703125,
-0.04327392578125,
... |
mstz/tic_tac_toe | 2023-04-16T18:03:22.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"TicTacToe",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_tic-tac-toe_endgame_101,
author = {Aha,David},
title = {{Tic-Tac-Toe Endgame}},
year = {1991},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \url{10.24432/C5688J}}
} | 0 | 8 | 2023-04-07T08:42:16 | ---
language:
- en
tags:
- TicTacToe
- tabular_classification
- binary_classification
- UCI
pretty_name: TicTacToe
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- tic_tac_toe
license: cc
---
# TicTacToe
The [TicTacToe dataset](https://archive-beta.ics.uci.edu/dataset/101/tic+tac+toe+endgame) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| tic_tac_toe | Binary classification | Does the X player win? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/tic_tac_toe")["train"]
``` | 759 | [
[
-0.02239990234375,
-0.032501220703125,
0.01029205322265625,
0.012481689453125,
-0.01983642578125,
0.00930023193359375,
-0.0208282470703125,
-0.00832366943359375,
0.035186767578125,
0.01953125,
-0.01983642578125,
-0.04010009765625,
-0.057037353515625,
0.00140... |
mstz/twonorm | 2023-04-07T14:58:58.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"twonorm",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | 0 | 8 | 2023-04-07T10:01:07 | ---
language:
- en
tags:
- twonorm
- tabular_classification
- binary_classification
pretty_name: Two Norm
size_categories:
- 1K<n<10K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- twonorm
---
# TwoNorm
The [TwoNorm dataset](https://www.openml.org/search?type=data&status=active&id=1507) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| twonorm | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/twonorm")["train"]
```
| 733 | [
[
-0.01325225830078125,
-0.0058135986328125,
0.01114654541015625,
0.0175628662109375,
-0.0184783935546875,
-0.025848388671875,
-0.0210113525390625,
-0.015411376953125,
-0.0096435546875,
0.04168701171875,
-0.0255279541015625,
-0.045867919921875,
-0.037353515625,
... |
tarasabkar/IEMOCAP_Audio | 2023-04-08T12:21:44.000Z | [
"region:us"
] | tarasabkar | null | null | 1 | 8 | 2023-04-08T11:48:22 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: label
dtype:
class_label:
names:
'0': ang
'1': hap
'2': neu
'3': sad
splits:
- name: session1
num_bytes: 166986293.79
num_examples: 1085
- name: session2
num_bytes: 153330227.792
num_examples: 1023
- name: session3
num_bytes: 167233186.002
num_examples: 1151
- name: session4
num_bytes: 145475815.026
num_examples: 1031
- name: session5
num_bytes: 170322896.742
num_examples: 1241
download_size: 0
dataset_size: 803348419.352
---
# Dataset Card for "IEMOCAP_Audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 820 | [
[
-0.051513671875,
-0.0135040283203125,
0.0027713775634765625,
0.0175933837890625,
0.00238800048828125,
-0.010833740234375,
0.00147247314453125,
-0.0292205810546875,
0.0679931640625,
0.017730712890625,
-0.0662841796875,
-0.052703857421875,
-0.044342041015625,
... |
mstz/phoneme | 2023-04-11T00:14:47.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"phoneme",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | 0 | 8 | 2023-04-11T00:14:16 | ---
language:
- en
tags:
- phoneme
- tabular_classification
- binary_classification
pretty_name: Phoneme
size_categories:
- 10K<n<100K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- phoneme
---
# Phoneme
The [Phoneme dataset](https://www.openml.org/search?type=data&sort=runs&id=1489&status=active) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| phoneme | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/phoneme")["train"]
```
| 742 | [
[
-0.0212249755859375,
-0.003421783447265625,
0.01004791259765625,
0.00836944580078125,
-0.0233001708984375,
-0.0306243896484375,
-0.0298004150390625,
-0.0082244873046875,
-0.0035533905029296875,
0.033294677734375,
-0.021331787109375,
-0.0657958984375,
-0.02366638... |
qbao775/PARARULE-Plus | 2023-06-05T03:56:52.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"Reasoning",
"Multi-Step-Deductive-Reasoning",
"Logical-Reasoning",
"region:us"
] | qbao775 | null | null | 4 | 8 | 2023-04-16T01:53:56 | ---
license: mit
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- Reasoning
- Multi-Step-Deductive-Reasoning
- Logical-Reasoning
size_categories:
- 100K<n<1M
---
# PARARULE-Plus
This branch includes the dataset from PARARULE-Plus at Depth=2, Depth=3, Depth=4 and Depth=5. PARARULE-Plus is a deep multi-step reasoning dataset over natural language, which can be seen as an improvement on PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for cases where the depth is greater than or equal to two, to explore whether Transformers have reasoning ability. PARARULE-Plus combines two types of entities, animals and people, with corresponding relationships and attributes. For each depth from 2 to 5 there are around 100,000 samples, giving nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this Hugging Face version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to make model training easier.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) that you can `git clone` the project and fine-tune the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
``` | 2,687 | [
[
-0.040679931640625,
-0.047637939453125,
0.031890869140625,
0.0159149169921875,
-0.0017957687377929688,
-0.007465362548828125,
-0.0100250244140625,
-0.034088134765625,
0.0033206939697265625,
0.042327880859375,
-0.035247802734375,
-0.036956787109375,
-0.0378417968... |
pphuc25/VLSP_T2 | 2023-07-13T04:42:10.000Z | [
"language:vi",
"region:us"
] | pphuc25 | null | null | 0 | 8 | 2023-04-17T01:22:07 | ---
language: vi
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 689551911.12
num_examples: 18843
download_size: 693488600
dataset_size: 689551911.12
---
# Dataset Card for "VLSP_T2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 408 | [
[
-0.01514434814453125,
0.0002911090850830078,
0.016357421875,
0.0201873779296875,
-0.0274658203125,
0.003662109375,
0.0264892578125,
-0.020965576171875,
0.04974365234375,
0.0362548828125,
-0.045379638671875,
-0.049102783203125,
-0.053192138671875,
-0.03369140... |
mstz/hypo | 2023-05-24T12:27:51.000Z | [
"task_categories:tabular-classification",
"language:en",
"hypo",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | 0 | 8 | 2023-04-17T13:28:18 | ---
language:
- en
tags:
- hypo
- tabular_classification
- binary_classification
pretty_name: Hypo
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- hypo
---
# Hypo
The Hypo dataset.
# Configurations and tasks
| **Configuration** | **Task** | **Description**|
|-----------------------|---------------------------|----------------|
| hypo | Multiclass classification.| What kind of hypothyroidism does the patient have? |
| has_hypo              | Binary classification.| Does the patient have hypothyroidism? |
| 668 | [
[
-0.0244598388671875,
-0.01184844970703125,
0.0240631103515625,
0.005481719970703125,
-0.0170440673828125,
0.010650634765625,
-0.0039825439453125,
-0.00875091552734375,
0.037017822265625,
0.0277252197265625,
-0.04937744140625,
-0.043731689453125,
-0.0492858886718... |
mstz/pums | 2023-04-18T07:42:19.000Z | [
"task_categories:tabular-classification",
"language:en",
"pums",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_us_census_data_(1990)_116,
author = {Meek, Thiesson and Heckerman},
title = {{US Census Data (1990)}},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5VP42}}
} | 0 | 8 | 2023-04-18T07:32:38 | ---
language:
- en
tags:
- pums
- tabular_classification
- binary_classification
- UCI
pretty_name: Ipums
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- pums
---
# Pums
The [Pums dataset](https://archive-beta.ics.uci.edu/dataset/116/us+census+data+1990) from the [UCI repository](https://archive-beta.ics.uci.edu/).
U.S. Census dataset; the task is to classify the individual's income.
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| pums | Binary classification.| | 659 | [
[
-0.015838623046875,
-0.0077972412109375,
-0.007266998291015625,
0.020782470703125,
-0.01337432861328125,
0.0206451416015625,
0.00432586669921875,
0.016754150390625,
0.026947021484375,
0.06756591796875,
-0.025177001953125,
-0.0404052734375,
-0.052093505859375,
... |
csaybar/CloudSEN12-high | 2023-10-04T18:24:35.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | csaybar | null | null | 0 | 8 | 2023-04-21T16:39:53 | ---
license: cc-by-nc-4.0
---
# **CloudSEN12 HIGH-QUALITY**
## **A Benchmark Dataset for Cloud Semantic Understanding**

CloudSEN12 is a LARGE dataset (~1 TB) for cloud semantic understanding, consisting of 49,400 image patches (IPs) spread evenly
across all continents except Antarctica. Each IP covers 5090 x 5090 meters and contains data from Sentinel-2
levels 1C and 2A, hand-crafted annotations of thick and thin clouds and cloud shadows, Sentinel-1 Synthetic Aperture Radar (SAR),
a digital elevation model, surface water occurrence, land cover classes, and cloud mask results from six cutting-edge
cloud detection algorithms.
CloudSEN12 is designed to support both weakly and self-/semi-supervised learning strategies by including three distinct forms of
hand-crafted labeling data: high-quality, scribble and no-annotation. For more details on how we created the dataset see our
paper.
Ready to start using **[CloudSEN12](https://cloudsen12.github.io/)**?
**[Download Dataset](https://cloudsen12.github.io/download.html)**
**[Paper - Scientific Data](https://www.nature.com/articles/s41597-022-01878-2)**
**[Inference on a new S2 image](https://colab.research.google.com/github/cloudsen12/examples/blob/master/example02.ipynb)**
**[Enter to cloudApp](https://github.com/cloudsen12/CloudApp)**
**[CloudSEN12 in Google Earth Engine](https://gee-community-catalog.org/projects/cloudsen12/)**
<br>
### **General Description**
<br>
| File | Name | Scale | Wavelength | Description | Datatype |
|---------------|-----------------|--------|------------------------------|------------------------------------------------------------------------------------------------------|----------|
| L1C_ & L2A_ | B1 | 0.0001 | 443.9nm (S2A) / 442.3nm (S2B)| Aerosols. | np.int16 |
| | B2 | 0.0001 | 496.6nm (S2A) / 492.1nm (S2B)| Blue. | np.int16 |
| | B3 | 0.0001 | 560nm (S2A) / 559nm (S2B) | Green. | np.int16 |
| | B4 | 0.0001 | 664.5nm (S2A) / 665nm (S2B) | Red. | np.int16 |
| | B5 | 0.0001 | 703.9nm (S2A) / 703.8nm (S2B)| Red Edge 1. | np.int16 |
| | B6 | 0.0001 | 740.2nm (S2A) / 739.1nm (S2B)| Red Edge 2. | np.int16 |
| | B7 | 0.0001 | 782.5nm (S2A) / 779.7nm (S2B)| Red Edge 3. | np.int16 |
| | B8 | 0.0001 | 835.1nm (S2A) / 833nm (S2B) | NIR. | np.int16 |
| | B8A | 0.0001 | 864.8nm (S2A) / 864nm (S2B) | Red Edge 4. | np.int16 |
| | B9 | 0.0001 | 945nm (S2A) / 943.2nm (S2B) | Water vapor. | np.int16 |
| | B11 | 0.0001 | 1613.7nm (S2A) / 1610.4nm (S2B)| SWIR 1. | np.int16 |
| | B12 | 0.0001 | 2202.4nm (S2A) / 2185.7nm (S2B)| SWIR 2. | np.int16 |
| L1C_ | B10 | 0.0001 | 1373.5nm (S2A) / 1376.9nm (S2B)| Cirrus. | np.int16 |
| L2A_ | AOT | 0.001 | - | Aerosol Optical Thickness. | np.int16 |
| | WVP | 0.001 | - | Water Vapor Pressure. | np.int16 |
| | TCI_R | 1 | - | True Color Image, Red. | np.int16 |
| | TCI_G | 1 | - | True Color Image, Green. | np.int16 |
| | TCI_B | 1 | - | True Color Image, Blue. | np.int16 |
| S1_ | VV | 1 | 5.405GHz | Dual-band cross-polarization, vertical transmit/horizontal receive. |np.float32|
| | VH | 1 | 5.405GHz | Single co-polarization, vertical transmit/vertical receive. |np.float32|
| | angle | 1 | - | Incidence angle generated by interpolating the ‘incidenceAngle’ property. |np.float32|
| EXTRA_ | CDI | 0.0001 | - | Cloud Displacement Index. | np.int16 |
| | Shwdirection | 0.01 | - | Azimuth. Values range from 0°- 360°. | np.int16 |
| | elevation | 1 | - | Elevation in meters. Obtained from MERIT Hydro datasets. | np.int16 |
| | ocurrence | 1 | - | JRC Global Surface Water. The frequency with which water was present. | np.int16 |
| | LC100 | 1 | - | Copernicus land cover product. CGLS-LC100 Collection 3. | np.int16 |
| | LC10 | 1 | - | ESA WorldCover 10m v100 product. | np.int16 |
| LABEL_ | fmask | 1 | - | Fmask4.0 cloud masking. | np.int16 |
| | QA60 | 1 | - | SEN2 Level-1C cloud mask. | np.int8 |
| | s2cloudless | 1 | - | sen2cloudless results. | np.int8 |
| | sen2cor | 1 | - | Scene Classification band. Obtained from SEN2 level 2A. | np.int8 |
| | cd_fcnn_rgbi | 1 | - | López-Puigdollers et al. results based on RGBI bands. | np.int8 |
| |cd_fcnn_rgbi_swir| 1 | - | López-Puigdollers et al. results based on RGBISWIR bands. | np.int8 |
| | kappamask_L1C | 1 | - | KappaMask results using SEN2 level L1C as input. | np.int8 |
| | kappamask_L2A | 1 | - | KappaMask results using SEN2 level L2A as input. | np.int8 |
| | manual_hq | 1 | | High-quality pixel-wise manual annotation. | np.int8 |
| | manual_sc | 1 | | Scribble manual annotation. | np.int8 |
<br>
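Applied in practice, the Scale column above means the stored np.int16 digital numbers must be multiplied by the band's scale factor to recover physical values. A minimal sketch (the helper name is ours, not part of the dataset):

```python
import numpy as np

def to_physical(dn: np.ndarray, scale: float = 0.0001) -> np.ndarray:
    """Apply a band's scale factor (see the Scale column) to its int16 DNs."""
    return dn.astype(np.float32) * scale

# Optical bands (B1-B12) use scale 0.0001; AOT and WVP use 0.001.
b4 = to_physical(np.array([[1234, 5678]], dtype=np.int16))
aot = to_physical(np.array([250], dtype=np.int16), scale=0.001)
print(b4.dtype, float(aot[0]))  # float32 0.25
```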
### **Label Description**
| **CloudSEN12** | **KappaMask** | **Sen2Cor** | **Fmask** | **s2cloudless** | **CD-FCNN** | **QA60** |
|------------------|------------------|-------------------------|-----------------|-----------------------|---------------------|--------------------|
| 0 Clear | 1 Clear | 4 Vegetation | 0 Clear land | 0 Clear | 0 Clear | 0 Clear |
| | | 2 Dark area pixels | 1 Clear water | | | |
| | | 5 Bare Soils | 3 Snow | | | |
| | | 6 Water | | | | |
| | | 11 Snow | | | | |
| 1 Thick cloud | 4 Cloud | 8 Cloud medium probability | 4 Cloud | 1 Cloud | 1 Cloud | 1024 Opaque cloud |
| | | 9 Cloud high probability | | | | |
| 2 Thin cloud | 3 Semi-transparent cloud | 10 Thin cirrus | | | | 2048 Cirrus cloud |
| 3 Cloud shadow | 2 Cloud shadow | 3 Cloud shadows | 2 Cloud shadow | | | |
<br>
<br>
# **Dataset information, working with np.memmap:**
Sentinel-1 and Sentinel-2 collect images that span an area of 5090 x 5090 meters at 10 meters per pixel.
This results in 509 x 509 pixel images, an awkward size for most network architectures.
**Since each layer is a two-dimensional matrix, the true image data is held from pixel (1,1) to (509,509).**
The images have been padded with three pixels around the border to make them 512 x 512, a size that most models accept.
To give a visual representation of where the padding has been added:
x marks blank pixels stored as black (255)
```
xxxxxxxxxxxxxx
x           xx
x           xx
x           xx
x           xx
x           xx
xxxxxxxxxxxxxx
xxxxxxxxxxxxxx
```
The effects of the padding can be mitigated by taking a random crop within (1,1) to (509,509),
or by center-cropping to the size required by the network architecture.
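For example, a minimal sketch of the cropping option above (the helper name is ours; the synthetic array stands in for a real memmap slice):

```python
import numpy as np

def crop_valid(batch: np.ndarray) -> np.ndarray:
    """Keep pixels (1,1)..(509,509) of each padded 512 x 512 image."""
    return batch[..., 1:510, 1:510]

fake_batch = np.zeros((4, 512, 512), dtype=np.int16)  # stand-in for a memmap slice
print(crop_valid(fake_batch).shape)  # (4, 509, 509)
```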
### The image data is currently split into three categories:
- Training: 84.90 % of total
- Validation: 5.35 % of total
- Testing: 9.75 % of total
To take random samples across all 10,000 available images, the np.memmap objects can be combined
and a random selection drawn at the beginning of each trial,
sized according to the desired percentage of the total data available.
This approach mitigates training bias arising from the original assignment of images to each split.
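One possible sketch of this recomposition, drawing disjoint index sets over the combined pool without loading any pixel data (the function and split fractions are illustrative, not part of the dataset):

```python
import numpy as np

def resample_split(counts, fractions, seed=0):
    """Draw disjoint (memmap_id, local_index) sets over the combined pool.

    counts: images per original memmap, e.g. (8490, 535, 975)
    fractions: desired (train, val, test) fractions summing to 1
    """
    rng = np.random.default_rng(seed)
    pairs = [(m, i) for m, n in enumerate(counts) for i in range(n)]
    order = rng.permutation(len(pairs))
    n_train = int(fractions[0] * len(pairs))
    n_val = int(fractions[1] * len(pairs))
    train = [pairs[j] for j in order[:n_train]]
    val = [pairs[j] for j in order[n_train:n_train + n_val]]
    test = [pairs[j] for j in order[n_train + n_val:]]
    return train, val, test

train, val, test = resample_split((8490, 535, 975), (0.8, 0.1, 0.1))
print(len(train), len(val), len(test))  # 8000 1000 1000
```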
<br>
### **Example**
**train shape: (8490, 512, 512)**
<br>
**val shape: (535, 512, 512)**
<br>
**test shape: (975, 512, 512)**
<br>
```py
import numpy as np
# Read high-quality train
train_shape = (8490, 512, 512)
B4X = np.memmap('train/L1C_B04.dat', dtype='int16', mode='r', shape=train_shape)
y = np.memmap('train/manual_hq.dat', dtype='int8', mode='r', shape=train_shape)
# Read high-quality val
val_shape = (535, 512, 512)
B4X = np.memmap('val/L1C_B04.dat', dtype='int16', mode='r', shape=val_shape)
y = np.memmap('val/manual_hq.dat', dtype='int8', mode='r', shape=val_shape)
# Read high-quality test
test_shape = (975, 512, 512)
B4X = np.memmap('test/L1C_B04.dat', dtype='int16', mode='r', shape=test_shape)
y = np.memmap('test/manual_hq.dat', dtype='int8', mode='r', shape=test_shape)
```
<br>
This work has been partially supported by the Spanish Ministry of Science and Innovation project
PID2019-109026RB-I00 (MINECO-ERDF) and the Austrian Space Applications Programme within the
**[SemantiX project](https://austria-in-space.at/en/projects/2019/semantix.php)**.
| 13,185 | [
[
-0.0653076171875,
-0.02154541015625,
0.041748046875,
0.018585205078125,
-0.0116729736328125,
-0.0252685546875,
0.0005249977111816406,
-0.0184783935546875,
0.022430419921875,
0.0251312255859375,
-0.040374755859375,
-0.049560546875,
-0.041107177734375,
-0.0075... |
h2oai/openassistant_oasst1_h2ogpt | 2023-04-24T18:07:44.000Z | [
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] | h2oai | null | null | 3 | 8 | 2023-04-21T21:02:50 | ---
license: apache-2.0
language:
- en
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
---
# h2oGPT Data Card
## Summary
H2O.ai's `openassistant_oasst1_h2ogpt` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `48307`
- Number of columns: `3`
- Column names: `['input', 'prompt_type', 'source']`
## Source
- [Original Open Assistant data in tree structure](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [This flattened dataset created by script in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/83857fcf7d3b712aad5db32207e6db0ab0f780f9/create_data.py#L1252)
| 769 | [
[
-0.004459381103515625,
-0.044403076171875,
0.01043701171875,
0.00543212890625,
-0.00891876220703125,
-0.0062713623046875,
0.0082244873046875,
-0.01462554931640625,
-0.00421905517578125,
0.024017333984375,
-0.02020263671875,
-0.048095703125,
-0.0270233154296875,
... |
CM/codexglue_code2text_python | 2023-04-22T01:52:50.000Z | [
"region:us"
] | CM | null | null | 2 | 8 | 2023-04-22T01:52:12 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 813663148
num_examples: 251820
- name: validation
num_bytes: 46888564
num_examples: 13914
- name: test
num_bytes: 50659688
num_examples: 14918
download_size: 325303743
dataset_size: 911211400
---
# Dataset Card for "codexglue_code2text_python"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 916 | [
[
-0.0228118896484375,
-0.01264190673828125,
0.0028514862060546875,
0.034698486328125,
-0.0014820098876953125,
-0.004741668701171875,
-0.007091522216796875,
-0.017730712890625,
0.035430908203125,
0.035980224609375,
-0.043853759765625,
-0.053497314453125,
-0.040649... |
iamketan25/roleplay-instructions-dataset | 2023-04-24T22:32:40.000Z | [
"region:us"
] | iamketan25 | null | null | 10 | 8 | 2023-04-24T22:32:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
alxfgh/PubChem10M_SELFIES | 2023-05-06T19:05:49.000Z | [
"size_categories:1M<n<10M",
"source_datasets:PubChem10M",
"chemistry",
"molecules",
"selfies",
"smiles",
"region:us"
] | alxfgh | null | null | 0 | 8 | 2023-04-29T16:19:35 | ---
pretty_name: PubChem10M_GroupSelfies
size_categories:
- 1M<n<10M
source_datasets:
- PubChem10M
tags:
- chemistry
- molecules
- selfies
- smiles
---
<a href="https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip">PubChem10M</a> dataset by DeepChem encoded to SELFIES using <a href="https://github.com/aspuru-guzik-group/group-selfies">group-selfies</a>. | 379 | [
[
-0.030242919921875,
-0.030364990234375,
0.035888671875,
0.02532958984375,
-0.033294677734375,
0.0148468017578125,
0.0135040283203125,
-0.008392333984375,
0.053070068359375,
0.038848876953125,
-0.06640625,
-0.06884765625,
-0.031768798828125,
0.022003173828125... |
emozilla/quality-pruned-llama-gptneox-4k | 2023-04-30T03:32:55.000Z | [
"region:us"
] | emozilla | null | null | 1 | 8 | 2023-04-30T03:32:48 | ---
dataset_info:
features:
- name: article
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: int64
- name: hard
dtype: bool
splits:
- name: validation
num_bytes: 10848419.183125598
num_examples: 442
- name: train
num_bytes: 11288834.9385652
num_examples: 455
download_size: 578723
dataset_size: 22137254.1216908
---
# Dataset Card for "quality-pruned-llama-gptneox-4k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 610 | [
[
-0.0272369384765625,
-0.0014638900756835938,
0.0224456787109375,
0.0245819091796875,
-0.05804443359375,
0.005779266357421875,
0.0296783447265625,
-0.003543853759765625,
0.060760498046875,
0.0477294921875,
-0.052947998046875,
-0.057373046875,
-0.04217529296875,
... |
makiour/dvoice-Darija | 2023-05-06T22:37:09.000Z | [
"region:us"
] | makiour | null | null | 1 | 8 | 2023-05-06T16:06:28 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ar:
- 100K<n<1M
en:
- 1M<n<10M
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 11.0
language_bcp47:
- ar
- en | 379 | [
[
-0.0285186767578125,
-0.01507568359375,
0.00957489013671875,
0.039947509765625,
-0.0215301513671875,
0.0030918121337890625,
-0.025634765625,
-0.03729248046875,
0.0286865234375,
0.04437255859375,
-0.045013427734375,
-0.03656005859375,
-0.051025390625,
0.03146... |
dhmeltzer/ELI5_embedded | 2023-05-17T20:11:53.000Z | [
"region:us"
] | dhmeltzer | null | null | 0 | 8 | 2023-05-17T20:10:25 | ---
dataset_info:
features:
- name: q_id
dtype: string
- name: title
dtype: string
- name: selftext
dtype: string
- name: document
dtype: string
- name: subreddit
dtype: string
- name: answers
sequence:
- name: a_id
dtype: string
- name: text
dtype: string
- name: score
dtype: int32
- name: title_urls
sequence:
- name: url
dtype: string
- name: selftext_urls
sequence:
- name: url
dtype: string
- name: answers_urls
sequence:
- name: url
dtype: string
- name: split
dtype: string
- name: title_body
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 2375028302
num_examples: 558669
download_size: 2134837293
dataset_size: 2375028302
---
# Dataset Card for "ELI5_embedded"
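Since each row carries a float32 `embeddings` vector (declared in the schema above), one natural use is similarity search. A hedged sketch with synthetic vectors standing in for the real ones (the embedding model is not specified in this card):

```python
import numpy as np

def top_k(query: np.ndarray, matrix: np.ndarray, k: int = 3) -> np.ndarray:
    """Indices of the k rows of `matrix` most cosine-similar to `query`."""
    q = query / np.linalg.norm(query)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return np.argsort(m @ q)[::-1][:k]

# Two-dimensional toy vectors stand in for the dataset's embeddings.
emb = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]], dtype=np.float32)
print(top_k(np.array([1.0, 0.1], dtype=np.float32), emb, k=2))
```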
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 989 | [
[
-0.0533447265625,
-0.03594970703125,
0.02044677734375,
0.00811004638671875,
-0.0200347900390625,
-0.00885009765625,
0.0211639404296875,
-0.0252838134765625,
0.0517578125,
0.0245819091796875,
-0.0540771484375,
-0.0499267578125,
-0.0304412841796875,
-0.0199890... |
joey234/mmlu-astronomy | 2023-08-23T04:28:04.000Z | [
"region:us"
] | joey234 | null | null | 0 | 8 | 2023-05-19T04:30:20 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 5110
num_examples: 5
- name: test
num_bytes: 764857
num_examples: 152
download_size: 95332
dataset_size: 769967
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-astronomy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 950 | [
[
-0.04034423828125,
-0.0142364501953125,
0.0237884521484375,
0.00598907470703125,
-0.007450103759765625,
-0.0055084228515625,
0.0250091552734375,
-0.01538848876953125,
0.0712890625,
0.027008056640625,
-0.07012939453125,
-0.044830322265625,
-0.04107666015625,
... |
bleugreen/typescript-instruct | 2023-05-24T00:29:09.000Z | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"typescript",
"instruct",
"code",
"region:us"
] | bleugreen | null | null | 3 | 8 | 2023-05-24T00:11:16 | ---
task_categories:
- text-classification
- text2text-generation
- summarization
language:
- en
tags:
- typescript
- instruct
- code
size_categories:
- 10K<n<100K
---
# typescript-instruct
A dataset of TypeScript snippets, processed from the typescript subset of [the-stack-smol](https://huggingface.co/datasets/bigcode/the-stack-smol).
# Processing
- Each source file is parsed with the TypeScript AST and queried for 'semantic chunks' of the following types.
```
ClassDeclaration - 2401
ArrowFunction - 16443
MethodDeclaration - 12096
FunctionDeclaration - 3226
TypeAliasDeclaration - 1489
InterfaceDeclaration - 5240
EnumDeclaration - 214
```
- Leading comments are added to the front of `content`
- Removed all chunks over max sequence length (2048)
- Deduplicated / cleaned up
- Generated instructions w/ `gpt-3.5-turbo`
- Ran into the OpenAI API usage limit for the month; will finish the other half next month
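The length filter and deduplication steps above can be sketched as follows (the helper name is ours; the pipeline's exact code is not shown in this card):

```python
def clean_chunks(chunks, max_len=2048):
    """Drop chunks over the max sequence length and exact duplicates."""
    seen = set()
    kept = []
    for chunk in chunks:
        if len(chunk) > max_len or chunk in seen:
            continue
        seen.add(chunk)
        kept.append(chunk)
    return kept

print(clean_chunks(["a" * 3000, "foo()", "foo()"]))  # ['foo()']
```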
# Dataset Structure
```python
from datasets import load_dataset
load_dataset("bleugreen/typescript-instruct")
DatasetDict({
train: Dataset({
features: ['type', 'content', 'repo', 'path', 'language', 'instruction'],
num_rows: 41109
})
})
``` | 2,076 | [
[
-0.0148468017578125,
-0.0171356201171875,
0.028289794921875,
0.006603240966796875,
-0.0170745849609375,
0.005458831787109375,
-0.0208892822265625,
-0.006458282470703125,
0.0249786376953125,
0.063720703125,
-0.042633056640625,
-0.046844482421875,
-0.0399780273437... |
Abzu/wizard | 2023-06-04T19:35:21.000Z | [
"task_categories:text-generation",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | Abzu | null | null | 0 | 8 | 2023-05-25T08:58:06 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 85659801.65210004
num_examples: 49263
- name: test
num_bytes: 9518335.347899958
num_examples: 5474
download_size: 50310834
dataset_size: 95178137
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- en
---
# Dataset Card for "wizard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 543 | [
[
-0.0455322265625,
-0.02130126953125,
-0.00475311279296875,
0.01488494873046875,
0.01552581787109375,
0.0007305145263671875,
0.0215911865234375,
-0.0021762847900390625,
0.053192138671875,
0.043731689453125,
-0.06732177734375,
-0.040252685546875,
-0.03857421875,
... |
Meduzka/ukr_psyops_false_news | 2023-05-27T17:54:05.000Z | [
"region:us"
] | Meduzka | null | null | 0 | 8 | 2023-05-27T15:04:48 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Thaweewat/pobpad | 2023-05-28T06:16:24.000Z | [
"region:us"
] | Thaweewat | null | null | 0 | 8 | 2023-05-28T06:06:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tasksource/parade | 2023-05-31T08:20:40.000Z | [
"task_categories:sentence-similarity",
"task_categories:text-classification",
"language:en",
"region:us"
] | tasksource | null | null | 0 | 8 | 2023-05-30T07:42:30 | ---
task_categories:
- sentence-similarity
- text-classification
language:
- en
---
https://github.com/heyunh2015/PARADE_dataset
```
@inproceedings{he-etal-2020-parade,
title = "{PARADE}: {A} {N}ew {D}ataset for {P}araphrase {I}dentification {R}equiring {C}omputer {S}cience {D}omain {K}nowledge",
author = "He, Yun and
Wang, Zhuoer and
Zhang, Yin and
Huang, Ruihong and
Caverlee, James",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.611",
doi = "10.18653/v1/2020.emnlp-main.611",
pages = "7572--7582",
abstract = "We present a new benchmark dataset called PARADE for paraphrase identification that requires specialized domain knowledge. PARADE contains paraphrases that overlap very little at the lexical and syntactic level but are semantically equivalent based on computer science domain knowledge, as well as non-paraphrases that overlap greatly at the lexical and syntactic level but are not semantically equivalent based on this domain knowledge. Experiments show that both state-of-the-art neural models and non-expert human annotators have poor performance on PARADE. For example, BERT after fine-tuning achieves an F1 score of 0.709, which is much lower than its performance on other paraphrase identification datasets. PARADE can serve as a resource for researchers interested in testing models that incorporate domain knowledge. We make our data and code freely available.",
}
``` | 1,683 | [
[
-0.030242919921875,
-0.04241943359375,
0.0207061767578125,
0.033905029296875,
-0.022796630859375,
0.024749755859375,
-0.017364501953125,
-0.03265380859375,
0.0262451171875,
0.03289794921875,
-0.0284271240234375,
-0.04815673828125,
-0.038665771484375,
0.00258... |
TrainingDataPro/low_quality_webcam_video_attacks | 2023-09-14T16:48:24.000Z | [
"task_categories:video-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"legal",
"code",
"region:us"
] | TrainingDataPro | The dataset includes live-recorded Anti-Spoofing videos from around the world,
captured via low-quality webcams with resolutions like QVGA, QQVGA and QCIF. | @InProceedings{huggingface:dataset,
title = {low_quality_webcam_video_attacks},
author = {TrainingDataPro},
year = {2023}
} | 1 | 8 | 2023-05-30T08:48:08 | ---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
language:
- en
tags:
- finance
- legal
- code
---
# Low Quality Live Attacks
The dataset includes live-recorded Anti-Spoofing videos from around the world, captured via **low-quality** webcams with resolutions like QVGA, QQVGA and QCIF.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=low_quality_webcam_video_attacks) to discuss your requirements, learn about the price and buy the dataset.

# Webcam Resolution
Videos are provided in a range of resolutions, such as:
- QVGA (320p x 240p),
- QQVGA (160p x 120p),
- QCIF (176p x 144p) and others.
# Metadata
Each attack instance is accompanied by the following details:
- Unique attack identifier
- Identifier of the user recording the attack
- User's age
- User's gender
- User's country of origin
- Attack resolution
The model of the webcam is also specified.
Metadata is represented in the `file_info.csv`.
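As a minimal sketch of how this metadata could be consumed, the snippet below parses a hypothetical `file_info.csv` row and tallies attacks per resolution. The exact column names and the sample values are assumptions based on the fields described above, not taken from the actual file.

```python
import csv
import io

# Hypothetical file_info.csv contents; column names are an assumption
# inferred from the metadata fields listed above.
sample = (
    "attack_id,user_id,age,gender,country,resolution,webcam_model\n"
    "a-0001,u-17,29,female,Spain,320x240,Logitech C170\n"
)

with io.StringIO(sample) as f:
    attacks = list(csv.DictReader(f))

# Count attacks per resolution, e.g. to check coverage of
# QVGA/QQVGA/QCIF recordings.
counts = {}
for row in attacks:
    counts[row["resolution"]] = counts.get(row["resolution"], 0) + 1
print(counts)
```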
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=low_quality_webcam_video_attacks) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: https://www.kaggle.com/trainingdatapro/datasets
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 1,696 | [
[
-0.028839111328125,
-0.06256103515625,
-0.0160675048828125,
-0.0006546974182128906,
-0.032989501953125,
0.01495361328125,
0.0169830322265625,
-0.035888671875,
0.038482666015625,
0.028900146484375,
-0.061309814453125,
-0.02886962890625,
-0.05096435546875,
-0.... |
player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored-trained-on-Based | 2023-06-06T21:53:55.000Z | [
"region:us"
] | player1537 | null | null | 0 | 8 | 2023-06-06T21:31:13 | ---
dataset_info:
features:
- name: text
dtype: string
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 1512752
num_examples: 120
download_size: 323831
dataset_size: 1512752
---
# Dataset Card for "Bloom-560m-trained-on-Wizard-Vicuna-Uncensored-trained-on-Based"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 440 | [
[
-0.0273895263671875,
-0.0101470947265625,
0.00579071044921875,
0.03179931640625,
-0.003612518310546875,
-0.007472991943359375,
0.00838470458984375,
-0.01190948486328125,
0.040374755859375,
0.0543212890625,
-0.0528564453125,
-0.050201416015625,
-0.035186767578125... |
RJKiseki/TCGA | 2023-06-07T03:40:18.000Z | [
"region:us"
] | RJKiseki | null | null | 0 | 8 | 2023-06-07T02:40:21 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cmu-lti/cobracorpus | 2023-06-26T17:20:21.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"arxiv:2306.01985",
"arxiv:2203.09509",
"region:us"
] | cmu-lti | null | null | 0 | 8 | 2023-06-08T02:12:47 | ---
license: openrail
task_categories:
- text-generation
- text-classification
language:
- en
pretty_name: COBRA🐍
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage: https://cobra.xuhuiz.com/**
- **Paper: https://arxiv.org/abs/2306.01985**
### Dataset Summary
This dataset contains COBRACOPURS and COBRACORPUS-counterfactual in this [paper](https://arxiv.org/abs/2306.01985)
### Data Splits
* `advContexts_explanations.csv` is `COBRACorpus-CF`
* `toxigen_explanations.csv` is the full `COBRACorpus`
* `toxigen_explanations_train.csv` is the training split of `COBRACorpus`
* `toxigen_explanations_val.csv` is the validation split of `COBRACorpus`
### Data Entries
For `COBRACorpus`, the relevant entries in the `csv` files are
*`situationalContext (string)`, `speakerIdentity (string)`, `listenerIdentity (string)`, `statement (string)`,
`intent (string)`, `targetGroup (string)`, `relevantPowerDynamics (string)`, `implication (string)`,
`targetGroupEmotionalReaction (string)`, `targetGroupCognitiveReaction (string)`, `offensiveness (string)`*
Please refer to the [paper](https://arxiv.org/abs/2306.01985) for the specific explanations of these entries.
The *`examples`* entry is the few-shot prompt that we used to generate explanations.
All other entries are from the [ToxiGen](https://arxiv.org/abs/2203.09509) dataset, which is not directly relevant to this work, but we keep them as metadata in case they are useful for future work.
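As an illustration of how these entries fit together, the sketch below parses a single record with the column names listed above and assembles the context that COBRA conditions its explanations on. The sample row is invented for demonstration; real rows come from the `csv` files named earlier.

```python
import csv
import io

# A hypothetical record using the column names documented above.
sample_csv = (
    "statement,speakerIdentity,listenerIdentity,situationalContext,offensiveness\n"
    '"you people are all the same","a stranger","a coworker","at the office","offensive"\n'
)

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# Each row pairs a statement with the contextual fields that determine
# how its offensiveness is explained.
for row in rows:
    context = (
        f"speaker: {row['speakerIdentity']}, "
        f"listener: {row['listenerIdentity']}, "
        f"situation: {row['situationalContext']}"
    )
    print(row["statement"], "|", context, "->", row["offensiveness"])
```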
### Citation Information
If you find this dataset useful, please cite:
```
@inproceedings{zhou2023cobra,
title = {COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements},
author = {Zhou, Xuhui and Zhu, Hao and Yerukola, Akhila and Davidson, Thomas and D. Hwang, Jena and Swayamdipta, Swabha and Sap, Maarten},
year = {2023},
booktitle = {Findings of ACL}
}
``` | 1,918 | [
[
-0.018585205078125,
-0.045806884765625,
0.000476837158203125,
0.0046539306640625,
-0.023590087890625,
-0.0155029296875,
-0.0100555419921875,
-0.01702880859375,
0.01100921630859375,
0.0178680419921875,
-0.0518798828125,
-0.04290771484375,
-0.041259765625,
0.0... |
Defalt-404/Bittensor_validator | 2023-06-09T06:27:22.000Z | [
"region:us"
] | Defalt-404 | null | null | 0 | 8 | 2023-06-09T06:13:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
julianzy/GPABenchmark | 2023-06-13T05:21:59.000Z | [
"region:us"
] | julianzy | null | null | 1 | 8 | 2023-06-13T05:09:53 | The official repository of paper: "Check Me If You Can: Detecting ChatGPT-Generated Academic Writing using CheckGPT". | 117 | [
[
-0.0123748779296875,
-0.054229736328125,
0.0243682861328125,
0.010955810546875,
0.002422332763671875,
0.0165863037109375,
-0.00330352783203125,
-0.0333251953125,
0.0298614501953125,
0.05377197265625,
-0.060882568359375,
-0.037567138671875,
-0.0362548828125,
... |
yezhengli9/wmt20-en-ru | 2023-06-15T00:49:04.000Z | [
"region:us"
] | yezhengli9 | null | null | 0 | 8 | 2023-06-15T00:44:58 | ---
dataset_info:
features:
- name: id (string)
dtype: string
- name: translation (translation)
dtype: string
splits:
- name: train
num_bytes: 1803167
num_examples: 2002
download_size: 693889
dataset_size: 1803167
---
# Dataset Card for "wmt20-en-ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 413 | [
[
-0.052978515625,
-0.01525115966796875,
0.02606201171875,
0.0209197998046875,
-0.0248870849609375,
0.00356292724609375,
0.01467132568359375,
-0.026519775390625,
0.05462646484375,
0.0240936279296875,
-0.07513427734375,
-0.0511474609375,
-0.046173095703125,
-0.... |
Norod78/freepik-sticker-collection-blip2-captions-512 | 2023-06-21T08:39:28.000Z | [
"language:en",
"region:us"
] | Norod78 | null | null | 0 | 8 | 2023-06-21T08:35:12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 91812539
num_examples: 364
download_size: 91811364
dataset_size: 91812539
language:
- en
---
# Dataset Card for "freepik-sticker-collection-blip2-captions-512"
Stickers from the first 10 pages of [Freepik's sticker collection](https://www.freepik.com/free-photos-vectors/sticker-collection), resized and cropped to 512x512.
Captions were auto-generated using BLIP-2.
| 513 | [
[
-0.0438232421875,
0.0178680419921875,
0.00202178955078125,
0.0117340087890625,
-0.04443359375,
0.00988006591796875,
0.00884246826171875,
-0.0163726806640625,
0.045684814453125,
0.038909912109375,
-0.056365966796875,
-0.015472412109375,
-0.04107666015625,
-0.... |
yurakuratov/example_promoters_2k | 2023-07-06T08:38:16.000Z | [
"region:us"
] | yurakuratov | null | null | 0 | 8 | 2023-06-27T16:37:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
BAAI/COIG-PC-Lite | 2023-09-26T08:51:45.000Z | [
"language:zh",
"license:unknown",
"region:us"
] | BAAI | null | null | 21 | 8 | 2023-06-28T02:56:01 | ---
extra_gated_heading: "Acknowledge license to accept the repository"
extra_gated_prompt: |
  Beijing Academy of Artificial Intelligence (hereinafter "we" or "the Academy") provides open-source datasets (hereinafter the "datasets") through BAAI DataHub (data.baai.ac.cn) and the COIG-PC HuggingFace repository (https://huggingface.co/datasets/BAAI/COIG-PC). You may obtain the open-source datasets you need by downloading them, and may use them for learning, research, commercial, and other purposes, provided you comply with the usage rules of each original dataset.
  Before you obtain an open-source dataset (obtaining includes but is not limited to accessing, downloading, copying, distributing, using, or otherwise processing it), you should carefully read and understand this "COIG-PC Open-Source Dataset Usage Notice and Disclaimer" (hereinafter "this Notice"). Once you obtain an open-source dataset, regardless of how you obtained it, your act of obtaining it will be deemed acceptance of the entire content of this Notice.
  1. Ownership and Operation of the Platform
  You should fully understand and acknowledge that ownership of, and the right to operate, BAAI DataHub and the COIG-PC HuggingFace repository (including the current version and all historical versions) belong to the Beijing Academy of Artificial Intelligence, which holds the final right of interpretation and decision over this platform/tool and the open-source dataset release program.
  You acknowledge and understand that, in light of updates to and refinement of relevant laws and regulations, and of objective changes in the legal compliance obligations we must fulfill, we reserve the right to update and maintain this platform/tool from time to time, or to suspend or even permanently terminate its services. We will notify you of any such circumstance within a reasonable time by announcement, email, or other reasonable means, and you should make the corresponding adjustments and arrangements in a timely manner; however, we bear no liability for any loss caused to you by any of the foregoing.
  2. Rights Claims Regarding the Open-Source Datasets
  To facilitate your obtaining and using the datasets for learning, research, and commercial purposes, we have performed the necessary format integration, data cleaning, labeling, classification, annotation, and related processing on the third-party original datasets, forming the open-source datasets available to users of this platform/tool.
  You acknowledge and understand that we do not claim the property rights under intellectual property law in the open-source datasets, and accordingly we have no obligation to proactively identify and protect any intellectual property the open-source datasets may contain; this does not, however, mean that we waive personal rights in the open-source datasets such as the rights of attribution, publication, modification, and integrity of the work (if any). Any intellectual property and corresponding lawful rights and interests in the original datasets belong to the original rights holders.
  Moreover, making available to you the open-source datasets after reasonable arrangement, processing, and handling does not imply our endorsement of the original datasets' intellectual property or information content as true, accurate, or undisputed; you should screen and carefully verify the open-source datasets you choose to use. You acknowledge and agree that the Academy has no obligation or warranty that the original datasets you choose to use are free of defects or flaws.
  3. Restrictions on the Use of the Open-Source Datasets
  Your use of the datasets must not infringe the lawful rights and interests of us or of any third party (including but not limited to copyright, patent rights, trademark rights, and other intellectual property and other rights and interests).
  After obtaining an open-source dataset, you should ensure that your use does not exceed the usage rules that the rights holders of the original dataset have explicitly specified by public notice, agreement, or otherwise, including the scope of use, purposes, and lawful uses of the original data. We remind you in good faith that if your use of an open-source dataset exceeds the originally intended scope and uses of the original dataset, you may face the risk of infringing the lawful rights and interests of the original dataset's rights holders, such as their intellectual property, and may bear corresponding legal liability.
  4. Personal Information Protection
  Owing to objective factors such as technical limitations and the public-interest nature of the open-source datasets, we cannot guarantee that the open-source datasets contain no personal information, and we bear no legal liability for any personal information the open-source datasets may involve.
  If an open-source dataset involves personal information, we bear no legal liability for any personal-information processing that your use of the dataset may entail. We remind you in good faith that you should process personal information in accordance with the Personal Information Protection Law and other relevant laws and regulations.
  To safeguard the lawful rights and interests of information subjects and to comply with potentially applicable laws and administrative regulations, if in the course of using an open-source dataset you discover content that involves or may involve personal information, you should immediately stop using the parts of the dataset involving personal information and contact us promptly through the channel set out in "6. Complaints and Notices".
  5. Information Content Management
  We bear no legal liability for any illegal or harmful information the open-source datasets may involve.
  If in the course of using an open-source dataset you discover that it involves or may involve any illegal or harmful information, you should immediately stop using the parts of the dataset involving such information and contact us promptly through the channel set out in "6. Complaints and Notices".
  6. Complaints and Notices
  If you believe that an open-source dataset infringes your lawful rights and interests, you may contact us at 010-50955974, and we will handle your claims and complaints promptly and in accordance with the law.
  To handle your claims and complaints, we may need you to provide contact information, proof of the infringement, identity documents, and other materials. Please note that if you complain in bad faith or misstate the facts, you will bear all legal liability arising therefrom (including but not limited to reasonable compensation of expenses).
  7. Disclaimer
  You understand and agree that, given the nature of open-source datasets, a dataset may contain data from different sources and contributors whose truthfulness, accuracy, objectivity, and so on may vary, and we cannot make any commitment as to the availability, reliability, or the like of any dataset.
  Under no circumstances do we bear any legal liability for risks the open-source datasets may present, such as infringement of personal information, dissemination of illegal or harmful information, or infringement of intellectual property.
  Under no circumstances do we bear any legal liability for any loss you suffer from or in connection with the open-source datasets (including but not limited to direct losses, indirect losses, and loss of anticipated profits).
  8. Miscellaneous
  The open-source datasets are continually developing and changing. For reasons such as business development, third-party cooperation, or changes in laws and regulations, we may update or adjust the scope of the open-source datasets provided, or suspend, pause, or terminate the open-source dataset service.
extra_gated_fields:
Name: text
Affiliation: text
Country: text
I agree to use this model for non-commercial use ONLY: checkbox
extra_gated_button_content: "Acknowledge license"
license: unknown
language:
- zh
configs:
- config_name: default
data_files:
- split: full
path: data/full-*
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
- split: Top50PerTask
path: data/Top50PerTask-*
- split: Top100PerTask
path: data/Top100PerTask-*
- split: Top200PerTask
path: data/Top200PerTask-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: split
dtype: string
- name: task_name_in_eng
dtype: string
- name: task_type
struct:
- name: major
sequence: string
- name: minor
sequence: string
- name: domain
sequence: string
- name: other
dtype: string
- name: filename
dtype: string
splits:
- name: full
num_bytes: 1099400407
num_examples: 650147
- name: train
num_bytes: 410204689
num_examples: 216691
- name: valid
num_bytes: 12413560
num_examples: 16148
- name: test
num_bytes: 51472090
num_examples: 69301
- name: Top50PerTask
num_bytes: 14763925
num_examples: 19274
- name: Top100PerTask
num_bytes: 28489139
num_examples: 37701
- name: Top200PerTask
num_bytes: 51472090
num_examples: 69301
download_size: 53939740
dataset_size: 1668215900
---
# COIG Prompt Collection
## License
**Default Licensing for Sub-Datasets Without Specific License Declaration**: In instances where sub-datasets within the COIG-PC Dataset do not have a specific license declaration, the Apache License 2.0 (Apache-2.0) will be the applicable licensing terms by default.
**Precedence of Declared Licensing for Sub-Datasets**: For any sub-dataset within the COIG-PC Dataset that has an explicitly declared license, the terms and conditions of the declared license shall take precedence and govern the usage of that particular sub-dataset.
Users and developers utilizing the COIG-PC Dataset must ensure compliance with the licensing terms as outlined above. It is imperative to review and adhere to the specified licensing conditions of each sub-dataset, as they may vary.
## What is COIG-PC?
The COIG-PC Dataset is a meticulously curated and comprehensive collection of Chinese tasks and data, designed to facilitate the fine-tuning and optimization of language models for Chinese natural language processing (NLP). The dataset aims to provide researchers and developers with a rich set of resources to improve the capabilities of language models in handling Chinese text, which can be utilized in various fields such as text generation, information extraction, sentiment analysis, machine translation, among others.
COIG-PC-Lite is a subset of COIG-PC with only 200 samples from each task file. If you are looking for COIG-PC, please refer to https://huggingface.co/datasets/BAAI/COIG-PC.
## Why COIG-PC?
The COIG-PC Dataset is an invaluable resource for the domain of natural language processing (NLP) for various compelling reasons:
**Addressing Language Complexity**: Chinese is known for its intricacy, with a vast array of characters and diverse grammatical structures. A specialized dataset like COIG-PC, which is tailored for the Chinese language, is essential to adequately address these complexities during model training.
**Comprehensive Data Aggregation**: The COIG-PC Dataset is a result of an extensive effort in integrating almost all available Chinese datasets in the market. This comprehensive aggregation makes it one of the most exhaustive collections for Chinese NLP.
**Data Deduplication and Normalization**: The COIG-PC Dataset underwent rigorous manual processing to eliminate duplicate data and perform normalization. This ensures that the dataset is free from redundancy, and the data is consistent and well-structured, making it more user-friendly and efficient for model training.
**Fine-tuning and Optimization**: The dataset’s instruction-based phrasing facilitates better fine-tuning and optimization of language models. This structure allows models to better understand and execute tasks, which is particularly beneficial in improving performance on unseen or novel tasks.
The COIG-PC Dataset, with its comprehensive aggregation, meticulous selection, deduplication, and normalization of data, stands as an unmatched resource for training and optimizing language models tailored for the Chinese language and culture. It addresses the unique challenges of Chinese language processing and serves as a catalyst for advancements in Chinese NLP.
## Who builds COIG-PC?
The foundation of COIG-PC is the dataset furnished by stardust.ai, an aggregation of data collected from the Internet.
And COIG-PC is the result of a collaborative effort involving engineers and experts from over twenty distinguished universities both domestically and internationally. Due to space constraints, it is not feasible to list all of them; however, the following are a few notable institutions among the collaborators:
- Beijing Academy of Artificial Intelligence, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/baai.png" alt="BAAI" height="100" width="150">
- Peking University, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/pku.png" alt="PKU" height="100" width="200">
- The Hong Kong University of Science and Technology (HKUST), China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/hkust.png" alt="HKUST" height="100" width="200">
- The University of Waterloo, Canada
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/waterloo.png" alt="Waterloo" height="100" width="150">
- The University of Sheffield, United Kingdom
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/sheffield.png" alt="Sheffield" height="100" width="200">
- Beijing University of Posts and Telecommunications, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/bupt.png" alt="BUPT" height="100" width="200">
- [Multimodal Art Projection](https://huggingface.co/m-a-p)
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/map.png" alt="M.A.P" height="100" width="200">
- stardust.ai, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/stardust.png" alt="stardust.ai" height="100" width="200">
- LinkSoul.AI, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/linksoul.png" alt="linksoul.ai" height="100" width="200">
For the detailed list of engineers involved in the creation and refinement of COIG-PC, please refer to the paper that will be published subsequently. This paper will provide in-depth information regarding the contributions and the specifics of the dataset’s development process.
## How to use COIG-PC?
COIG-PC is structured in a **.jsonl** file format. Each line in the file represents a single data record and is structured in JSON (JavaScript Object Notation) format. Below is a breakdown of the elements within each line:
**instruction**: This is a text string that provides the instruction for the task. For example, it might tell the model what to do with the input data.
**input**: This is the input data that the model needs to process. In the context of translation, it would be the text that needs to be translated.
**output**: This contains the expected output data after processing the input. In the context of translation, it would be the translated text.
**split**: Indicates the official split of the original dataset, which is used to categorize data for different phases of model training and evaluation. It can be 'train', 'test', 'valid', etc.
**task_type**: Contains major and minor categories for the dataset. Major categories are broader, while minor categories can be more specific subcategories.
**domain**: Indicates the domain or field to which the data belongs.
**other**: This field can contain additional information or metadata regarding the data record. If there is no additional information, it may be set to null.
### Example
Here is an example of how a line in the COIG-PC dataset might be structured:
```
{
"instruction": "请把下面的中文句子翻译成英文",
"input": "我爱你。",
"output": "I love you.",
"split": "train",
"task_type": {
"major": ["翻译"],
"minor": ["翻译", "中译英"]
},
"domain": ["通用"],
"other": null
}
```
In this example:
**instruction** tells the model to translate the following Chinese sentence into English.
**input** contains the Chinese text "我爱你" which means "I love you".
**output** contains the expected translation in English: "I love you".
**split** indicates that this data record is part of the training set.
**task_type** specifies that the major category is "Translation" and the minor categories are "Translation" and "Chinese to English".
**domain** specifies that this data record belongs to the general domain.
**other** is set to null as there is no additional information for this data record.
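A minimal way to consume this format is to parse the file line by line with a standard JSON parser. The sketch below reuses the example record above and filters by split and major task category, as one might when assembling a fine-tuning mixture; the `keep` helper is an illustration, not part of any official tooling.

```python
import json

# One line of a COIG-PC .jsonl file (the example record from above).
line = json.dumps({
    "instruction": "请把下面的中文句子翻译成英文",
    "input": "我爱你。",
    "output": "I love you.",
    "split": "train",
    "task_type": {"major": ["翻译"], "minor": ["翻译", "中译英"]},
    "domain": ["通用"],
    "other": None,
}, ensure_ascii=False)

record = json.loads(line)

def keep(rec, split="train", major="翻译"):
    """Keep only records from the given split and major task category."""
    return rec["split"] == split and major in rec["task_type"]["major"]

# Assemble a prompt/target pair for supervised fine-tuning.
prompt = f"{record['instruction']}\n{record['input']}"
target = record["output"]
print(keep(record), "|", prompt, "->", target)
```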
## Update: Aug. 30, 2023
- v1.2: Delete 31 bad task files. Update 99 task files. Rename 2 task files. Add 3 new task files. COIG-PC now has 3339 tasks in total.
- v1.1: Fix 00040-001-000 and 00050-003-000, ignore 00930 and 01373.
- v1.0: First version for arXiv paper.
- v0.6: Upload 28 new tasks. COIG-PC now has 3367 tasks in total.
- v0.5: Upload 202 new tasks. COIG-PC now has 3339 tasks in total.
- v0.4: Upload 1049 new tasks. COIG-PC now has 3137 tasks in total.
- v0.3: Upload 1139 new tasks. COIG-PC now has 2088 tasks in total.
- v0.2: Upload 422 new tasks. COIG-PC now has 949 tasks in total. Add "TopSamplenumPerTask" split where only "Samplenum" samples are used from each task.
- v0.1: Upload 527 tasks.
## COIG-PC Citation
If you want to cite the COIG-PC dataset, you can use this:
```
```
## Contact Us
To contact us feel free to create an Issue in this repository.
| 13,059 | [
[
-0.035552978515625,
-0.05218505859375,
-0.005512237548828125,
0.02471923828125,
-0.018951416015625,
-0.01085662841796875,
-0.0209503173828125,
-0.04290771484375,
0.01548004150390625,
0.0181732177734375,
-0.061279296875,
-0.040283203125,
-0.0232086181640625,
... |
DataHammer/scimrc | 2023-06-28T12:00:41.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | DataHammer | null | null | 4 | 8 | 2023-06-28T06:15:50 | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
# Scientific Emotional Dialogue
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a dataset for question answering on scientific research papers. It consists of 21,297 question-answer-evidence pairs.
### Supported Tasks and Leaderboards
- question-answering: The dataset can be used to train a model for Scientific Question Answering. Success on this task is typically measured by achieving a high F1 score.
### Languages
English
## Dataset Structure
### Data Instances
A typical instance in the dataset:
```
{
"question": "What aim do the authors have by improving Wiki(GOLD) results?",
"answer": "The aim is not to tune their model specifically on this class hierarchy. They instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset.",
"evidence": "The results for each class type are shown in Table TABREF19 , with some specific examples shown in Figure FIGREF18 . For the Wiki(gold) we quote the micro-averaged F-1 scores for the entire top level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. It is worth noting that one could improve Wiki(gold) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset. The results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold) for clarity. The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80-90%, with the exception of time (65%)\nIt is worth noting that one could improve Wiki(GOLD) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset.",
"yes_no": false
}
```
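The F1 score mentioned above is typically computed as token-level overlap between a predicted answer and the gold answer. The sketch below is an illustration of that metric (not the official evaluation script), applied to a paraphrase of the example instance's answer.

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Gold answer taken from the example instance above; the prediction is
# a made-up paraphrase for demonstration.
gold = "The aim is not to tune their model specifically on this class hierarchy."
prediction = "They do not aim to tune the model on this class hierarchy."
score = token_f1(prediction, gold)
print(round(score, 3))
```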
| 2,481 | [
[
-0.032806396484375,
-0.0518798828125,
0.005153656005859375,
0.005619049072265625,
-0.0031337738037109375,
0.0032176971435546875,
-0.01239776611328125,
-0.036376953125,
0.00609588623046875,
0.0191192626953125,
-0.055206298828125,
-0.057647705078125,
-0.0314636230... |
Norod78/il-license-plates | 2023-06-28T14:11:25.000Z | [
"task_categories:object-detection",
"size_categories:n<1K",
"license:mit",
"region:us"
] | Norod78 | null | null | 0 | 8 | 2023-06-28T13:36:19 | ---
license: mit
size_categories:
- n<1K
task_categories:
- object-detection
---
Images of Israeli License Plates with annotation for Plate-Object detection | 157 | [
[
-0.06494140625,
-0.01885986328125,
-0.006153106689453125,
0.028472900390625,
-0.0648193359375,
-0.050567626953125,
0.0196380615234375,
-0.04742431640625,
0.01654052734375,
0.0546875,
-0.007236480712890625,
-0.0355224609375,
-0.020355224609375,
-0.00922393798... |
anzorq/kbd_speech | 2023-10-08T18:12:13.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"language:kbd",
"region:us"
] | anzorq | null | null | 1 | 8 | 2023-06-28T15:45:25 | ---
language:
- kbd
task_categories:
- automatic-speech-recognition
- text-to-speech
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: gender
dtype: string
- name: country
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 193658385.11
num_examples: 20555
download_size: 518811329
dataset_size: 193658385.11
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "kbd_speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 683 | [
[
-0.03424072265625,
-0.039215087890625,
0.0099639892578125,
0.0185699462890625,
-0.018890380859375,
0.006732940673828125,
-0.0108642578125,
-0.0090789794921875,
0.05987548828125,
0.03326416015625,
-0.05755615234375,
-0.06256103515625,
-0.0447998046875,
-0.041... |
aisyahhrazak/ms-news-selangorkini | 2023-06-28T18:29:39.000Z | [
"region:us"
] | aisyahhrazak | null | null | 0 | 8 | 2023-06-28T18:03:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
rdpahalavan/network-packet-flow-header-payload | 2023-07-22T21:40:27.000Z | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"license:apache-2.0",
"Network Intrusion Detection",
"Cybersecurity",
"Network Packets",
"region:us"
] | rdpahalavan | null | null | 2 | 8 | 2023-07-01T12:20:03 | ---
license: apache-2.0
task_categories:
- text-classification
tags:
- Network Intrusion Detection
- Cybersecurity
- Network Packets
size_categories:
- 1M<n<10M
---
Each row contains the information of a network packet and its label. The format is given below:
 | 386 | [
[
-0.035003662109375,
-0.055267333984375,
-0.0135955810546875,
0.033599853515625,
-0.0153045654296875,
-0.005039215087890625,
0.03753662109375,
-0.0006766319274902344,
0.0904541015625,
0.08111572265625,
-0.035064697265625,
-0.022979736328125,
-0.030517578125,
... |
hamel/tabular-data-test | 2023-07-05T20:44:37.000Z | [
"region:us"
] | hamel | null | null | 0 | 8 | 2023-07-05T20:30:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pie/squad_v2 | 2023-09-28T18:37:32.000Z | [
"region:us"
] | pie | null | null | 0 | 8 | 2023-07-10T11:32:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
BigSuperbPrivate/SpeechTextMatching_LibrispeechTrainClean360 | 2023-07-10T14:53:02.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 8 | 2023-07-10T13:37:34 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: instruction
dtype: string
- name: label
dtype: string
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 24960872147.768
num_examples: 104014
- name: validation
num_bytes: 348628035.844
num_examples: 2703
download_size: 23576168585
dataset_size: 25309500183.612003
---
# Dataset Card for "speechTextMatching_LibrispeechTrainClean360"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 665 | [
[
-0.03631591796875,
-0.0110626220703125,
0.00579833984375,
0.028472900390625,
0.00385284423828125,
0.0009260177612304688,
-0.01201629638671875,
-0.018524169921875,
0.052581787109375,
0.0389404296875,
-0.06768798828125,
-0.0546875,
-0.038360595703125,
-0.02557... |
BigSuperbPrivate/SpeechTextMatching_LibrispeechTrainClean100 | 2023-07-10T15:17:51.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 8 | 2023-07-10T15:00:33 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: instruction
dtype: string
- name: label
dtype: string
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 6378650249.671
num_examples: 28539
- name: validation
num_bytes: 348628035.844
num_examples: 2703
download_size: 6779588288
dataset_size: 6727278285.514999
---
# Dataset Card for "speechTextMatching_LibrispeechTrainClean100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 661 | [
[
-0.0350341796875,
-0.0178680419921875,
0.010162353515625,
0.0269622802734375,
0.0033359527587890625,
-0.0005974769592285156,
-0.015869140625,
-0.01010894775390625,
0.058563232421875,
0.0361328125,
-0.0635986328125,
-0.054534912109375,
-0.037200927734375,
-0.... |
BigSuperbPrivate/SpokenTermDetection_LibrispeechTrainClean100 | 2023-07-12T16:07:39.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 8 | 2023-07-10T15:37:51 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: instruction
dtype: string
- name: label
dtype: string
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 6373730811.671
num_examples: 28539
- name: validation
num_bytes: 348367644.844
num_examples: 2703
download_size: 6775627104
dataset_size: 6722098456.514999
---
# Dataset Card for "speechTermDetection_LibrispeechTrainClean100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 662 | [
[
-0.0457763671875,
-0.02520751953125,
-0.005588531494140625,
0.0157928466796875,
-0.003082275390625,
0.0060577392578125,
-0.01154327392578125,
-0.0081787109375,
0.059234619140625,
0.0350341796875,
-0.062164306640625,
-0.052734375,
-0.04547119140625,
-0.032623... |
BigSuperbPrivate/NoiseSNRLevelPredictionGaussian_VoxcelebMusan | 2023-07-12T19:14:56.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 8 | 2023-07-11T04:49:59 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 7723989946.0
num_examples: 60000
- name: validation
num_bytes: 1679326573.0
num_examples: 13045
- name: test
num_bytes: 3137224477.0
num_examples: 24370
download_size: 12519826695
dataset_size: 12540540996.0
---
# Dataset Card for "NoiseSNRLevelPredictiongaussian_VoxcelebMusan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 650 | [
[
-0.043365478515625,
-0.0224151611328125,
0.0095062255859375,
0.0297393798828125,
-0.008209228515625,
0.0009012222290039062,
0.0038661956787109375,
-0.0125274658203125,
0.04083251953125,
0.0207061767578125,
-0.06695556640625,
-0.06689453125,
-0.033294677734375,
... |
FredZhang7/malicious-website-features-2.4M | 2023-08-14T05:21:51.000Z | [
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:tabular-classification",
"size_categories:1M<n<10M",
"language:no",
"language:af",
"language:en",
"language:et",
"language:sw",
"language:sv",
"language:sq",
"language:de",
"language:ca",
"languag... | FredZhang7 | null | null | 2 | 8 | 2023-07-11T05:15:27 | ---
license: apache-2.0
task_categories:
- text-classification
- feature-extraction
- tabular-classification
language:
- 'no'
- af
- en
- et
- sw
- sv
- sq
- de
- ca
- hu
- da
- tl
- so
- fi
- fr
- cs
- hr
- cy
- es
- sl
- tr
- pl
- pt
- nl
- id
- sk
- lt
- lv
- vi
- it
- ro
- ru
- mk
- bg
- th
- ja
- ko
- multilingual
size_categories:
- 1M<n<10M
---
**Important Notice:**
- A subset of the URL dataset is from Kaggle, and the Kaggle datasets contained 10%-15% mislabelled data. See [this discussion I opened](https://www.kaggle.com/datasets/sid321axn/malicious-urls-dataset/discussion/431505) for some false positives. I have contacted Kaggle regarding their erroneous "Usability" score calculation for these unreliable datasets.
- The feature extraction methods shown here are not robust at all in 2023, and there are even silly mistakes in three functions: `not_indexed_by_google`, `domain_registration_length`, and `age_of_domain`.
<br>
The *features* dataset is original, and my feature extraction method is covered in [feature_extraction.py](./feature_extraction.py).
To extract features from a website, simply pass the URL and label to `collect_data()`. The features are saved to `phishing_detection_dataset.csv` locally by default.
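The real extractor lives in `feature_extraction.py` and requires live HTTP and DNS lookups; as a hedged illustration only, here is a self-contained sketch of a few purely lexical URL features (the feature names and heuristics below are hypothetical simplifications, not the actual `collect_data()` implementation):

```python
import re
from urllib.parse import urlparse

def extract_basic_features(url: str) -> dict:
    """Illustrative subset of lexical URL features. The real extractor
    computes 22+ features, many of which need network access."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "url_length": len(url),
        "digit_count": sum(ch.isdigit() for ch in url),
        # IPv4 literal in place of a domain name is a classic phishing signal.
        "host_is_ip": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
        # Anything other than the default HTTP/HTTPS ports is flagged.
        "has_suspicious_port": parsed.port not in (None, 80, 443),
        "subdomain_depth": max(host.count(".") - 1, 0),
    }

feats = extract_basic_features("http://192.168.0.1:8080/login/verify?id=123")
```

Running this on the sample URL flags both the IP-literal host and the non-standard port 8080.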
In the *features* dataset, 911,180 websites were online at the time of data collection. The plots below show the regression lines and correlation coefficients between the 22+ extracted features and whether the URL is malicious.
If we could plot the lifespan of URLs, we would see that the oldest website has been online since Nov 7th, 2008, while the most recent phishing websites appeared as late as July 10th, 2023.
## Malicious URL Categories
- Defacement
- Malware
- Phishing
## Data Analysis
Here are two images showing the correlation coefficient and coefficient of determination between predictor values and the target value `is_malicious`.


Let's examine the correlations one by one and cross out any unreasonable or insignificant ones.
| Variable | Justification for Crossing Out |
|-----------------------------|------------------------------------- |
| ~~redirects~~ | contradicts previous research (as redirects increase, is_malicious tends to decrease slightly) |
| ~~not_indexed_by_google~~ | 0.00 correlation |
| ~~email_submission~~ | contradicts previous research |
| request_url_percentage | |
| issuer | |
| certificate_age | |
| ~~url_anchor_percentage~~ | contradicts previous research |
| ~~meta_percentage~~ | 0.00 correlation |
| script_percentage | |
| link_percentage | |
| ~~mouseover_changes~~ | contradicts previous research & 0.00 correlation |
| ~~right_clicked_disabled~~ | contradicts previous research & 0.00 correlation |
| ~~popup_window_has_text_field~~ | contradicts previous research |
| ~~use_iframe~~ | contradicts previous research |
| ~~has_suspicious_ports~~ | contradicts previous research |
| ~~external_favicons~~ | contradicts previous research |
| TTL (Time to Live) | |
| ip_address_count | |
| ~~TXT_record~~ | all websites had a TXT record |
| ~~check_sfh~~ | contradicts previous research |
| count_domain_occurrences | |
| domain_registration_length | |
| abnormal_url | |
| age_of_domain | |
| page_rank_decimal | |
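Operationally, the cross-out pass amounts to dropping those columns before training. A minimal sketch, using the column names from the table above on a hypothetical feature row:

```python
# Predictors struck through in the review table above.
CROSSED_OUT = {
    "redirects", "not_indexed_by_google", "email_submission",
    "url_anchor_percentage", "meta_percentage", "mouseover_changes",
    "right_clicked_disabled", "popup_window_has_text_field",
    "use_iframe", "has_suspicious_ports", "external_favicons",
    "TXT_record", "check_sfh",
}

def prune_features(row: dict) -> dict:
    """Keep only the predictors that survived the correlation review."""
    return {k: v for k, v in row.items() if k not in CROSSED_OUT}

# Toy row with invented values, just to show the filtering.
sample = {"redirects": 2, "page_rank_decimal": 4.3, "TTL": 300,
          "use_iframe": 1, "abnormal_url": 0}
kept = prune_features(sample)
```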
## Pre-training Ideas
For training, I split the classification task into two stages in anticipation of the limited availability of online phishing websites due to their short lifespan, as well as the possibility that research done on phishing is not up-to-date:
1. a small multilingual BERT model, finetuned on 2,436,727 legitimate and malicious URLs, that outputs the confidence level of a URL being malicious to model #2
2. (probably) LightGBM to analyze the confidence level, along with roughly 10 extracted features
This way, I can make the most out of the limited phishing websites available.
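The two-stage idea can be sketched as a composition: stage 1 scores the raw URL, and stage 2 consumes that score together with the tabular features. The stub below is purely illustrative, with a hand-written heuristic standing in for the finetuned BERT scorer and a trivial rule standing in for LightGBM:

```python
def stage1_url_confidence(url: str) -> float:
    """Stand-in for the finetuned multilingual BERT URL scorer:
    returns a malicious-probability in [0, 1]. Heuristic is hypothetical."""
    suspicious_tokens = ("login", "verify", "update", "secure")
    hits = sum(tok in url.lower() for tok in suspicious_tokens)
    return min(1.0, 0.2 * hits)

def stage2_classify(url: str, features: dict, threshold: float = 0.5) -> bool:
    """Stand-in for the LightGBM stage: combines the stage-1 confidence
    with a couple of the extracted features."""
    score = stage1_url_confidence(url)
    if features.get("abnormal_url"):
        score += 0.3
    score += 0.1 * features.get("ip_address_count", 0)
    return score >= threshold

verdict = stage2_classify("http://secure-login.example.tld/verify",
                          {"abnormal_url": 1, "ip_address_count": 1})
```

The point of the structure, as described above, is that stage 2 never sees the raw URL, only a scalar confidence plus roughly 10 features, so it stays cheap to retrain as new phishing sites appear.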
## Source of the URLs
- https://moz.com/top500
- https://phishtank.org/phish_search.php?valid=y&active=y&Search=Search
- https://www.kaggle.com/datasets/siddharthkumar25/malicious-and-benign-urls
- https://www.kaggle.com/datasets/sid321axn/malicious-urls-dataset
- https://github.com/ESDAUNG/PhishDataset
- https://github.com/JPCERTCC/phishurl-list
- https://github.com/Dogino/Discord-Phishing-URLs
## Reference
- https://www.kaggle.com/datasets/akashkr/phishing-website-dataset
- https://www.kaggle.com/datasets/shashwatwork/web-page-phishing-detection-dataset
- https://www.kaggle.com/datasets/aman9d/phishing-data
## Side notes
- Cloudflare offers an [API for phishing URL scanning](https://developers.cloudflare.com/api/operations/phishing-url-information-get-results-for-a-url-scan), with a generous global rate limit of 1200 requests every 5 minutes. | 5,183 | [
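A client staying under that global limit (1200 requests per 5 minutes) can gate its calls with a simple sliding-window limiter. A sketch using only the standard library; the demo uses a tiny limit and hand-fed timestamps rather than real clock time or Cloudflare's actual endpoint:

```python
import collections

class SlidingWindowLimiter:
    """Allow at most `limit` calls per `window` seconds."""
    def __init__(self, limit: int = 1200, window: float = 300.0):
        self.limit, self.window = limit, window
        self.calls = collections.deque()  # timestamps of recent calls

    def allow(self, now: float) -> bool:
        # Drop timestamps that fell out of the window, then check capacity.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=3, window=300.0)  # tiny limit for demo
decisions = [limiter.allow(t) for t in (0, 1, 2, 3, 301)]
```

The fourth call is rejected because three calls already sit inside the window; by t=301 the earliest two have expired, so the call goes through.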
[
-0.01910400390625,
-0.0694580078125,
-0.00420379638671875,
0.02294921875,
-0.01116943359375,
-0.01479339599609375,
-0.0011119842529296875,
-0.052154541015625,
0.01580810546875,
0.013519287109375,
-0.028533935546875,
-0.04962158203125,
-0.01467132568359375,
-... |
DynamicSuperb/EnvironmentalSoundClassification_ESC50-NaturalSoundscapesAndWaterSounds | 2023-07-12T06:04:47.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 8 | 2023-07-11T11:31:47 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: label
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 176516291.0
num_examples: 400
download_size: 168105448
dataset_size: 176516291.0
---
# Dataset Card for "environmental_sound_classification_natural_soundscapes_and_water_sounds_ESC50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 534 | [
[
-0.05731201171875,
-0.01157379150390625,
0.016510009765625,
0.027587890625,
-0.011260986328125,
0.006084442138671875,
-0.013671875,
-0.0308074951171875,
0.0384521484375,
0.033447265625,
-0.06475830078125,
-0.0665283203125,
-0.029296875,
-0.0113067626953125,
... |
DynamicSuperb/EnvironmentalSoundClassification_ESC50-InteriorAndDomesticSounds | 2023-07-12T06:00:55.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 8 | 2023-07-11T11:51:31 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: label
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 176531272.0
num_examples: 400
download_size: 139037473
dataset_size: 176531272.0
---
# Dataset Card for "environmental_sound_classification_interior_and_domestic_sounds_ESC50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 526 | [
[
-0.053955078125,
-0.0022678375244140625,
0.021942138671875,
0.020782470703125,
0.010040283203125,
0.0029239654541015625,
-0.020751953125,
-0.0128326416015625,
0.028717041015625,
0.018829345703125,
-0.0606689453125,
-0.07672119140625,
-0.021575927734375,
-0.0... |
BigSuperbPrivate/SpeechDetection_Tedlium2Train | 2023-07-16T18:57:44.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 8 | 2023-07-11T16:57:59 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 15158178294.006
num_examples: 92973
- name: validation
num_bytes: 117089199.0
num_examples: 507
download_size: 15267681440
dataset_size: 15275267493.006
---
# Dataset Card for "speechDetection_TEDLIUM2Train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 569 | [
[
-0.02203369140625,
-0.02716064453125,
0.0084686279296875,
0.009033203125,
-0.005252838134765625,
0.00910186767578125,
-0.007152557373046875,
-0.0146026611328125,
0.040771484375,
0.0171966552734375,
-0.06512451171875,
-0.04376220703125,
-0.036346435546875,
-0... |
BigSuperbPrivate/SpeechDetection_LibrispeechTrainClean100 | 2023-07-17T07:24:09.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 8 | 2023-07-11T17:00:08 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 6521891356.935
num_examples: 28539
- name: validation
num_bytes: 349517035.018
num_examples: 2703
download_size: 6769766359
dataset_size: 6871408391.953
---
# Dataset Card for "speechDetection_LibrispeechTrainClean100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 580 | [
[
-0.041900634765625,
-0.02459716796875,
0.00980377197265625,
0.0165252685546875,
0.003124237060546875,
0.0009431838989257812,
-0.002460479736328125,
-0.0151214599609375,
0.057098388671875,
0.0318603515625,
-0.058868408203125,
-0.050750732421875,
-0.04296875,
... |
BigSuperbPrivate/SpoofDetection_ASVspoof2015 | 2023-07-31T11:25:17.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 8 | 2023-07-11T17:05:31 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 1775127458.5
num_examples: 16375
- name: validation
num_bytes: 5335909314.584
num_examples: 53372
download_size: 7189452793
dataset_size: 7111036773.084
---
# Dataset Card for "SpoofDetection_ASVspoof2015"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 567 | [
[
-0.0301666259765625,
-0.0246124267578125,
0.003570556640625,
0.03546142578125,
-0.01132965087890625,
0.0003287792205810547,
0.032806396484375,
-0.0225372314453125,
0.0635986328125,
0.04022216796875,
-0.06414794921875,
-0.039642333984375,
-0.0479736328125,
-0... |
BigSuperbPrivate/SpoofDetection_Asvspoof2017 | 2023-07-31T11:26:24.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 8 | 2023-07-13T03:06:27 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 270119397.248
num_examples: 3014
- name: validation
num_bytes: 169603853.98
num_examples: 1710
download_size: 415434782
dataset_size: 439723251.22800004
---
# Dataset Card for "SpoofDetection_ASVspoof2017"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 567 | [
[
-0.025543212890625,
-0.02325439453125,
0.003173828125,
0.0313720703125,
-0.0141754150390625,
0.00018095970153808594,
0.0295867919921875,
-0.021026611328125,
0.0635986328125,
0.040374755859375,
-0.0635986328125,
-0.038604736328125,
-0.04791259765625,
-0.01693... |